In this article, we will look at the steps involved in copying data between two Azure SQL databases using Azure Data Factory (ADF).

Below are the steps involved in copying data between two Azure SQL databases using ADF:
- Create the linked service connections for the source and destination databases
- Create the input and output datasets from the source and destination databases
- Create the pipeline that copies data between source and destination databases.
- Validate and publish the newly created linked services, datasets and pipelines
- Trigger the pipeline manually, or create a trigger that runs the pipeline on a schedule, and monitor the pipeline execution status.
Let’s look at the above steps more closely.
Creating linked services
1. Select the Author tab after opening Azure Data Factory using Author & Monitor from the Azure portal.

2. Select Connections -> Linked Services -> New, and then select Azure SQL Database from the list of data stores to create the source database connection.
3. Configure the Azure SQL Database linked service selected above: provide the Azure subscription, server name, database name, authentication type, user name and password, and then select Test connection to make sure the configuration is correct. Select Create when the connection test succeeds.


4. Similarly, create the destination database connection by repeating steps 2 and 3 above. If you prefer to script these steps instead of using the portal, a sketch follows below.
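As a minimal sketch of the scripted alternative, the source linked service can be created with the azure-mgmt-datafactory Python SDK. The subscription, resource group, factory, server and credential values below are placeholders, not values from this walkthrough.

```python
# A minimal sketch of creating the source linked service with the
# azure-mgmt-datafactory Python SDK. All names and credentials below are
# placeholders; substitute your own values.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureSqlDatabaseLinkedService,
    LinkedServiceResource,
    SecureString,
)

subscription_id = "<subscription-id>"
rg_name = "<resource-group>"
df_name = "<data-factory-name>"

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Connection string for the source Azure SQL database (placeholder values).
source_conn = SecureString(
    value=(
        "Server=tcp:<source-server>.database.windows.net,1433;"
        "Database=<source-db>;User ID=<user>;Password=<password>;"
        "Encrypt=True;Connection Timeout=30;"
    )
)

source_ls = LinkedServiceResource(
    properties=AzureSqlDatabaseLinkedService(connection_string=source_conn)
)
adf_client.linked_services.create_or_update(
    rg_name, df_name, "SourceSqlLinkedService", source_ls
)

# The destination linked service ("SinkSqlLinkedService") is created the same
# way, using the destination database's connection string.
```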
Creating datasets
5. Select Datasets, click on the three dots and then select New dataset.
6. Select Azure SQL Database from the available dataset types, and then select Continue.

7. Configure the input dataset properties: use the linked service created for the source database connection, and then select the table from which the data has to be copied.

8. Similarly, create the output dataset by repeating steps 6 and 7 above, configuring the dataset properties with the destination database connection. An SDK sketch of both datasets follows below.
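Continuing the same hedged SDK sketch, the input and output datasets could be created as follows. The dataset and table names are illustrative placeholders; they should point at the real source and destination tables.

```python
# Sketch of the input (source) and output (sink) datasets, reusing the
# adf_client, rg_name and df_name variables from the previous snippet.
from azure.mgmt.datafactory.models import (
    AzureSqlTableDataset,
    DatasetResource,
    LinkedServiceReference,
)

source_ds = DatasetResource(
    properties=AzureSqlTableDataset(
        linked_service_name=LinkedServiceReference(
            type="LinkedServiceReference", reference_name="SourceSqlLinkedService"
        ),
        table_name="dbo.Employees",  # placeholder source table
    )
)
adf_client.datasets.create_or_update(rg_name, df_name, "SourceSqlDataset", source_ds)

sink_ds = DatasetResource(
    properties=AzureSqlTableDataset(
        linked_service_name=LinkedServiceReference(
            type="LinkedServiceReference", reference_name="SinkSqlLinkedService"
        ),
        table_name="dbo.Employees",  # placeholder destination table
    )
)
adf_client.datasets.create_or_update(rg_name, df_name, "SinkSqlDataset", sink_ds)
```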
Creating the pipeline
9. Select Pipelines, click on the three dots and then select New pipeline.
10. Select the Copy data activity and drag it to the pipeline designer surface.
11. Provide a name for the activity.

12. Configure the Source tab in the Copy Activity settings: select the source dataset details.
13. Configure the Sink (destination) tab in the Copy Activity settings: select the sink dataset details.
14. Configure the Mapping tab in the Copy Activity settings: select Import schemas to map the fields between the source and destination table columns. When the column names are the same in the source and destination, the mapping is done automatically; otherwise, we need to create the mapping between the source and destination columns manually. A programmatic sketch of the whole pipeline follows below.
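As a hedged sketch of the same configuration in code, the Copy activity below reads from the source dataset and writes to the sink dataset. The explicit column mapping is a hypothetical example for the case where column names differ; with matching names it can be omitted.

```python
# Sketch of a pipeline with a single Copy activity between the two datasets,
# reusing adf_client, rg_name and df_name from the earlier snippets.
from azure.mgmt.datafactory.models import (
    AzureSqlSink,
    AzureSqlSource,
    CopyActivity,
    DatasetReference,
    PipelineResource,
)

copy_activity = CopyActivity(
    name="CopySqlToSql",
    inputs=[DatasetReference(type="DatasetReference", reference_name="SourceSqlDataset")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="SinkSqlDataset")],
    source=AzureSqlSource(),
    sink=AzureSqlSink(),
    # Only needed when column names differ; with matching names ADF maps the
    # columns automatically. The column names here are hypothetical.
    translator={
        "type": "TabularTranslator",
        "mappings": [
            {"source": {"name": "EmpName"}, "sink": {"name": "EmployeeName"}},
        ],
    },
)

pipeline = PipelineResource(activities=[copy_activity])
adf_client.pipelines.create_or_update(rg_name, df_name, "CopySqlToSqlPipeline", pipeline)
```

Note that these SDK calls deploy directly to the data factory service, so the Validate all / Publish all steps described next apply to the portal authoring experience.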

Validating and publishing linked services, datasets and pipelines
15. Once the required entities are created in Azure Data Factory, select Validate all to make sure all the entities are correctly created and configured. Any errors reported during validation need to be fixed before publishing and running the pipeline.
16. Once validation succeeds, select Publish all to deploy all the new entities to Azure Data Factory.

Triggering the pipeline manually and monitoring the pipeline execution
17. Select Add trigger on the pipeline and then select Trigger now to run the pipeline immediately.

18. To check whether the pipeline has executed successfully, select the Monitor tab and then Pipeline runs. You should now be able to see the execution status of the pipeline. If the execution has failed, you should be able to see the error details on the same run record. The same run can also be started and monitored programmatically, as sketched below.
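Continuing the SDK sketch, a manual run and status check could look like the following; it assumes the pipeline name used in the earlier snippet.

```python
import time

# Start an on-demand run (the SDK equivalent of "Trigger now" in the portal).
run = adf_client.pipelines.create_run(
    rg_name, df_name, "CopySqlToSqlPipeline", parameters={}
)

# Poll the run until it leaves the Queued/InProgress states, then report.
pipeline_run = adf_client.pipeline_runs.get(rg_name, df_name, run.run_id)
while pipeline_run.status in ("Queued", "InProgress"):
    time.sleep(30)
    pipeline_run = adf_client.pipeline_runs.get(rg_name, df_name, run.run_id)

print(f"Pipeline run finished with status: {pipeline_run.status}")
if pipeline_run.status == "Failed":
    print(f"Error details: {pipeline_run.message}")
```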

For verification and testing, check the data in the destination database to make sure the pipeline has copied the data from the source successfully; a small query sketch follows.
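As a simple sketch of that verification, assuming the pyodbc package and an ODBC driver are installed and using the same placeholder table name as above, a row count on the destination table can be compared with the same query run against the source database.

```python
import pyodbc

# Placeholder connection string for the destination database; compare the
# result with the same COUNT(*) query run against the source database.
conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<destination-server>.database.windows.net,1433;"
    "Database=<destination-db>;Uid=<user>;Pwd=<password>;Encrypt=yes;"
)
with pyodbc.connect(conn_str) as conn:
    row_count = conn.cursor().execute("SELECT COUNT(*) FROM dbo.Employees").fetchone()[0]
    print(f"Rows in destination table: {row_count}")
```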
Hope this helps!