
The example above looks very similar to the previous one. But, if you carefully look at the red arrows, there is a major change. The three DAGs on the left are still doing the same stuff that produces metadata (XComs, task instances, etc.). The DAG on the right is in charge of cleaning this metadata as soon as one DAG on the left completes. How? With the "trigger" tasks! Notice that each DAG on the left has the "trigger" task at the end. The role of the "trigger" task is to trigger another DAG when a condition is met. That's why the arrows are opposite, unlike in the previous example.

All right, now that you have the use cases in mind, let's see how to implement them!

The TriggerDagRunOperator

The TriggerDagRunOperator is the easiest way to implement DAG dependencies in Apache Airflow. It allows you to have a task in a DAG that triggers another DAG in the same Airflow instance. The TriggerDagRunOperator can trigger a DAG from another DAG, while the ExternalTaskSensor can poll the state of a task in another DAG. Let's take a look at the parameters you can define and what they bring.

The trigger_dag_id parameter defines the DAG ID of the DAG to trigger. For example, if trigger_dag_id="target_dag", the DAG with the DAG id "target_dag" will be triggered:

```python
trigger_dag_id='target_dag',  # the dag to trigger
```

Notice that the DAG "target_dag" and the DAG where the TriggerDagRunOperator is implemented must be in the same Airflow environment. If DAG A triggers DAG B, DAG A and DAG B must be in the same Airflow environment.

trigger_dag_id is also a templated parameter. That means you can inject data at run time that comes from Variables, Connections, etc. The example below can be useful if you version your target DAG and don't want to push a new DAG where the TriggerDagRunOperator is just to change the version. In this case, you would have a variable target_dag_version with the values 1.0, 2.0, etc., and versioned target DAGs such as:

```python
with DAG('target_dag_1_0', schedule_interval=None,
         start_date=datetime(2022, 1, 1), catchup=False):

    @task
    def start(dag_run=None):
        # Data received from the TriggerDagRunOperator
        print(dag_run.conf)
```

Make sure that the target_dag is unpaused, otherwise the triggered DAG Run will be queued and nothing will happen. Usually, it implies that the target_dag has a schedule_interval set to None, as you want to trigger it explicitly and not automatically.

The TriggerDagRunOperator is perfect if you want to trigger another DAG between two tasks, like with SubDAGs (don't use them 😉). I tend to use it, especially for cleaning metadata generated by DAG Runs over time.

The ExternalTaskSensor for Dag Dependencies

Ideal when a DAG depends on multiple upstream DAGs, the ExternalTaskSensor is the other way to create DAG Dependencies in Apache Airflow. It is harder to use than the TriggerDagRunOperator, but it is still very useful to know. Instead of explicitly triggering another DAG, the ExternalTaskSensor allows you to wait for a DAG to complete before moving to the next task.

Very straightforward, the external_dag_id parameter expects the DAG id of the DAG where the task you are waiting for is.

The external_task_id parameter expects the Task id of the Task you are waiting for, whereas external_task_ids expects the list of Task ids for the Tasks you are waiting for. Well, that looks confusing, doesn't it? With the former you can wait for one task, whereas with the latter you can wait for multiple tasks in the same DAG. You must define one of the two, but not both at the same time. Notice that behind the scenes, the Task id defined for external_task_id is passed to external_task_ids. In my opinion, stick with external_task_ids. Another important thing to remember is that you can wait for an entire DAG Run to complete, and not only Tasks, by setting those parameters to None.

The parameter allowed_states expects a list of states that mark the ExternalTaskSensor as success. By default, it is set to [State.SUCCESS], which is usually what you want. However, failed_states has no default value. What does it mean? If the task you are waiting for fails, your sensor will keep running forever. Therefore, always, always define the failed_states parameter with the value [State.FAILED].

You can't immediately customize the run_id with the TriggerDagRunOperator, as the run_id is generated inside its execute method. However, you could implement your own operator, CustomTriggerDagRunOperator, that would behave the way you want/need. For example:

```python
class CustomTriggerDagRunOperator(TriggerDagRunOperator):

    def execute(self, context):
        if self.execution_date is not None:
            self.execution_date = timezone.parse(self.execution_date)
            run_id = 'trig_{}'.format(self.execution_date)
        else:
            run_id = 'trig_' + timezone.utcnow().isoformat()
        # ... then create the DAG Run with this run_id
```

The example above just appends the execution date to the run_id of the triggered DAG. You could use this same strategy to set the run_id arbitrarily.
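The "trig_" + timestamp scheme can be sketched in plain Python, without Airflow. Here, make_run_id is a hypothetical helper, not part of the Airflow API; it only mirrors the pattern that TriggerDagRunOperator uses when it generates a run_id:

```python
from datetime import datetime, timezone

def make_run_id(execution_date=None):
    # Hypothetical helper mimicking the 'trig_' + ISO-8601 timestamp
    # pattern: use the provided execution date if any, else "now" in UTC.
    if execution_date is not None:
        return 'trig_' + execution_date.isoformat()
    return 'trig_' + datetime.now(timezone.utc).isoformat()

print(make_run_id(datetime(2022, 1, 1)))  # trig_2022-01-01T00:00:00
```

A custom operator could swap this function for anything it likes, which is exactly why subclassing gives you full control over the run_id.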

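To see why failed_states matters for the ExternalTaskSensor, here is a minimal, framework-free sketch of one "poke". The function name poke_once and its arguments are hypothetical illustrations, not Airflow's internals, which are more involved:

```python
def poke_once(task_state, allowed_states, failed_states):
    # One sensor poke: succeed if the upstream task reached an allowed
    # state, error out if it reached a failed state, otherwise keep waiting.
    if task_state in allowed_states:
        return 'success'
    if task_state in failed_states:
        return 'error'
    return 'reschedule'

# With failed_states empty, a failed upstream task never resolves the
# sensor: every poke says "reschedule", i.e. it waits forever.
print(poke_once('failed', ['success'], []))           # reschedule
print(poke_once('failed', ['success'], ['failed']))   # error
print(poke_once('success', ['success'], ['failed']))  # success
```

The second call shows the fix: once 'failed' is listed in failed_states, the sensor errors out immediately instead of running forever.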