

Azure Data Factory (ADF) V2 - Pipeline

Post Author: Rahul Pingale

ADF is a cloud-based data integration service and part of Microsoft’s analytics suite. An ADF Pipeline is used for Extract, Transform, Load (ETL) purposes: data integration or data migration between two systems (on-premises or cloud) at scale.

A Pipeline is a data-driven workflow in which data movement and transformation are orchestrated to meet the target system’s requirements. ADF Pipelines are powerful but can be complex. Let’s explore creating a Pipeline...

Step 1

Log in to the Azure Portal and navigate to your Azure Data Factory.

Click Create Pipeline on the Data Factory home screen. Alternatively, you can click the Pencil (Author) icon to go to Factory Resources, then right-click the Pipelines tab and select New pipeline.

Step 2

Give the Pipeline a unique, relevant name and provide an appropriate description of its purpose.
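Behind the scenes, everything you configure is stored as pipeline JSON (viewable via the Code view in the designer). A minimal skeleton carrying just the name and description might look roughly like this (the name and description here are illustrative):

    {
        "name": "CopyAccountsToTarget",
        "properties": {
            "description": "Extracts D365 account data, transforms it, and loads it to the target system.",
            "activities": []
        }
    }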

Step 3

Select Activities for the Pipeline.

Copy Activity – used to extract data from the source system and to load data into the target system. Data can either be parked in a staging location for transformation or written directly to the final destination if no transformation is needed.

Data Transformation (Data Flow) – used to massage and transform the data to make it ready for the target system.

*Parameters and Variables tab – set input parameters and variables for the data transformation process.

e.g., you can pass the Pipeline Run ID (a unique GUID), which can be used to configure unique file names for staging or transformed data files.
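As a sketch of that idea, the pipeline JSON below declares a parameter and a variable, then uses a Set Variable activity to build a unique staging file name from the Run ID (all names here are illustrative):

    {
        "name": "CopyAccountsToTarget",
        "properties": {
            "parameters": {
                "StagingFolder": { "type": "string", "defaultValue": "staging" }
            },
            "variables": {
                "StagingFileName": { "type": "String" }
            },
            "activities": [
                {
                    "name": "BuildStagingFileName",
                    "type": "SetVariable",
                    "typeProperties": {
                        "variableName": "StagingFileName",
                        "value": {
                            "value": "@concat('accounts_', pipeline().RunId, '.csv')",
                            "type": "Expression"
                        }
                    }
                }
            ]
        }
    }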

Step 4

Select a Dataset for the source (Copy Activity). ADF supports a vast list of source and target dataset options, e.g., Blob Storage files, Data Lake, D365 or CDS, Azure SQL Database, Oracle, SAP, Spark, etc.

*If using the D365 or CDS connector as a source, you can select CRM entities or use FetchXML to pull data (sketched after this list).

*Select source and target datasets for direct data loading where no data transformation is required. Otherwise, park the data for the next Data Transformation activity.
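For example, a Copy Activity that pulls account records from D365 with FetchXML and parks them as a delimited file might look roughly like this in the pipeline JSON (the dataset names and query are illustrative):

    {
        "name": "CopyAccountsFromD365",
        "type": "Copy",
        "inputs": [ { "referenceName": "D365Accounts", "type": "DatasetReference" } ],
        "outputs": [ { "referenceName": "StagingBlob", "type": "DatasetReference" } ],
        "typeProperties": {
            "source": {
                "type": "DynamicsSource",
                "query": "<fetch><entity name='account'><attribute name='accountid' /><attribute name='name' /></entity></fetch>"
            },
            "sink": { "type": "DelimitedTextSink" }
        }
    }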

Step 5

Select the Data Flow activity and navigate to the Settings tab, then click + New. This opens a new tab in the designer that lets you work on the data transformation.
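In the pipeline JSON, this appears as an Execute Data Flow activity that references the data flow you build in that tab; a minimal sketch (the data flow name is illustrative):

    {
        "name": "TransformAccounts",
        "type": "ExecuteDataFlow",
        "typeProperties": {
            "dataflow": {
                "referenceName": "AccountTransformation",
                "type": "DataFlowReference"
            }
        }
    }

If the data flow declares parameters (such as the Run ID from Step 3), they are supplied on this activity as well.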

Step 6

Make use of the many useful OOB functions for the transformation process, or use a Derived Column to apply your own custom logic to massage/transform the data.
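Under the hood, the transformations you build are stored as a data flow script (viewable via the Script button). A Derived Column that produces a trimmed full-name column from two source columns might look roughly like this (stream and column names are illustrative):

    SourceAccounts derive(fullName = concat(trim(firstName), ' ', trim(lastName))) ~> CleanedAccounts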

Step 7

Use a Sink to generate the transformed data file. Use the Mapping tab to configure the output file’s columns and mappings.

*Choosing the Output to single file option writes the transformed data into one file, and you can configure the output file name using the visual expression builder (OOB functions plus input parameters and variables).
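For instance, assuming the Pipeline Run ID has been passed in as a data flow parameter named PipelineRunId (illustrative), the file name expression could be:

    concat('accounts_', $PipelineRunId, '.csv')

Data flow parameters are referenced with a leading $ in the expression builder.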

Step 8

Use a Copy Activity to load the transformed data into the target system. You can also add additional columns (of String data type) to the dataset during the copy.
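The additional columns are declared on the Copy Activity’s source. A rough sketch, with illustrative column names and the dataset references omitted ($$FILEPATH is ADF’s reserved token for the source file path):

    {
        "name": "LoadAccountsToTarget",
        "type": "Copy",
        "typeProperties": {
            "source": {
                "type": "DelimitedTextSource",
                "additionalColumns": [
                    { "name": "SourceFile", "value": "$$FILEPATH" },
                    { "name": "LoadedBy", "value": "ADF" }
                ]
            },
            "sink": {
                "type": "CommonDataServiceForAppsSink",
                "writeBehavior": "upsert"
            }
        }
    }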

We hope this is helpful to you. Happy D365'ing!
