Install the .NET Data Provider for Teradata, version 14 or later, on the integration runtime machine.
You can use any of the following tools or SDKs to run the copy activity in a pipeline.
When you enable partitioned copy, Data Factory runs parallel queries against your Teradata source to load data by partitions.
The parallel degree is controlled by the parallelCopies setting on the copy activity. For example, if you set parallelCopies to four, Data Factory concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of the data from your Teradata database.
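The idea behind partitioned parallel copy can be sketched as follows. This is a minimal illustration, not the service's actual implementation: the function name and the modulo-based split on the partition column are illustrative assumptions.

```python
def build_partition_queries(table: str, partition_column: str, parallel_copies: int) -> list[str]:
    """Generate one source query per parallel copy.

    Each query selects a disjoint slice of the table by taking the
    partition column modulo the number of parallel copies. This mimics,
    in spirit, how a hash-partitioned copy splits the source data.
    """
    return [
        f"SELECT * FROM {table} "
        f"WHERE {partition_column} MOD {parallel_copies} = {i}"
        for i in range(parallel_copies)
    ]

# With parallelCopies set to four, four disjoint queries run concurrently:
queries = build_partition_queries("Orders", "OrderId", 4)
for q in queries:
    print(q)
```

Because every row satisfies exactly one of the modulo predicates, the four result sets are disjoint and together cover the whole table.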
When copying data into a file-based data store, it's recommended to write to a folder as multiple files (specify only the folder name), in which case the performance is better than writing to a single file.

Partition column (hash partition): Specify the column used to apply the hash partition. If not specified, Data Factory automatically detects the primary key column of the table you specified in the Teradata dataset.

Partition column (dynamic range partition): Specify the column used to partition the data.
Note that the new path requires a different set of linked service, dataset, and copy source types.
You can partition only against a column with an integer data type.
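Dynamic range partitioning over an integer column can be pictured as splitting the column's value range into contiguous sub-ranges, one per parallel query. The sketch below is an illustrative assumption about the splitting scheme (even, ceiling-sized slices), not the service's documented algorithm.

```python
def split_integer_range(lower: int, upper: int, parallel_copies: int) -> list[tuple[int, int]]:
    """Split the inclusive range [lower, upper] into contiguous sub-ranges,
    one per parallel copy, so each query scans a disjoint slice."""
    # Ceiling division so the last slice absorbs any remainder.
    size = (upper - lower + 1 + parallel_copies - 1) // parallel_copies
    ranges = []
    start = lower
    while start <= upper:
        end = min(start + size - 1, upper)
        ranges.append((start, end))
        start = end + 1
    return ranges

# An integer partition column spanning 1..100, copied with four parallel queries:
print(split_integer_range(1, 100, 4))  # four disjoint slices covering 1..100
```

Each tuple would back a predicate such as `WHERE OrderId BETWEEN start AND end`, so the parallel queries never overlap and together cover the full range.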