Data Factory Pipeline Tasks

Introduction

A Data Factory pipeline is a logical grouping of activities that together perform a task. For example, a pipeline could contain a set of activities that ingest data and then kick off a mapping data flow to analyze it. Depending on the outcome of the activities involved, a Data Factory run either succeeds or fails.

The significance of Data Factory Pipeline automated tasks is that a failed Data Factory run can be rerun at a later point in time, which may result in a successful run. Automated tasks thus take the manual effort out of rerunning failed runs.
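
For readers curious about what a rerun involves under the hood, the following is a minimal sketch (not the product's own mechanism) of rerunning a single failed run with the azure-mgmt-datafactory Python SDK. The subscription, resource group, and factory names are placeholders, and rerun_failed_run is a hypothetical helper.

```python
# Minimal sketch: rerun a failed Data Factory pipeline run via the Azure SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "<resource-group>"    # placeholder
FACTORY_NAME = "<factory-name>"        # placeholder

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def rerun_failed_run(pipeline_name: str, failed_run_id: str) -> str:
    """Rerun a pipeline from its failed activities and return the new run ID."""
    run = client.pipelines.create_run(
        RESOURCE_GROUP,
        FACTORY_NAME,
        pipeline_name,
        reference_pipeline_run_id=failed_run_id,  # link the rerun to the failed run
        is_recovery=True,          # group the rerun with the original run
        start_from_failure=True,   # resume from the activities that failed
    )
    return run.run_id
```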

Rerun the failed pipeline runs

The automated task can be configured to rerun only the required runs by using the following filter:

Failure Occurrence: Data Factory Pipeline runs that failed in the past x hours (specified during automated task creation) are selected for rerun (illustrated in the sketch below).

Users can also choose to Include the previous reruns, Rerun the ignored runs of the previous run, and opt to Run task once immediately after saving, which executes the automated task as soon as it is saved.

[Image: DF Pipeline - Automated Task]
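
To make the Failure Occurrence window concrete, here is a minimal sketch (again using the azure-mgmt-datafactory SDK, with a hypothetical LOOKBACK_HOURS value and failed_runs_in_window helper) that queries a factory for runs that failed within the past x hours. Each returned run ID could then be fed into a rerun like the one sketched earlier.

```python
# Minimal sketch: find pipeline runs that failed in the past x hours.
from datetime import datetime, timedelta, timezone
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    RunFilterParameters,
    RunQueryFilter,
    RunQueryFilterOperand,
    RunQueryFilterOperator,
)

LOOKBACK_HOURS = 4  # the "past x hours" window chosen at task creation

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

def failed_runs_in_window(resource_group: str, factory_name: str):
    """List pipeline runs that failed within the lookback window."""
    now = datetime.now(timezone.utc)
    params = RunFilterParameters(
        last_updated_after=now - timedelta(hours=LOOKBACK_HOURS),
        last_updated_before=now,
        filters=[
            RunQueryFilter(
                operand=RunQueryFilterOperand.STATUS,
                operator=RunQueryFilterOperator.EQUALS,
                values=["Failed"],
            )
        ],
    )
    response = client.pipeline_runs.query_by_factory(
        resource_group, factory_name, params
    )
    return response.value  # each item exposes run_id, pipeline_name, status
```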

Notifications

  • Users can receive notifications when a Data Factory Pipeline automated task completes successfully.

  • The final section of the Data Factory Pipeline automated task configuration blade contains the notification settings, where users can select the desired Notification channels and email address(es) to receive notifications individually or as a group.

  • All the configured Notification channels are listed in this section.

  • Multiple email addresses can be provided so that a whole group of users stays informed (see the sketch below).

[Image: DF Pipeline Automated Task - notification]
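
The product delivers these notifications itself; purely to illustrate how a single completion message can fan out to multiple email addresses, here is a minimal sketch using Python's standard smtplib, with the sender address and SMTP host as hypothetical placeholders.

```python
# Minimal sketch: email every configured address once a task completes.
import smtplib
from email.message import EmailMessage

def notify_group(recipients: list[str], task_name: str, status: str) -> None:
    """Send one completion email to all configured recipients."""
    msg = EmailMessage()
    msg["Subject"] = f"Automated task '{task_name}' completed: {status}"
    msg["From"] = "noreply@example.com"    # hypothetical sender address
    msg["To"] = ", ".join(recipients)      # multiple addresses = group notification
    msg.set_content(
        f"The Data Factory Pipeline automated task '{task_name}' "
        f"finished with status: {status}."
    )
    with smtplib.SMTP("smtp.example.com") as smtp:  # hypothetical SMTP host
        smtp.send_message(msg)
```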

