Apache Airflow version: 1.10.10
Kubernetes version (if you are using kubernetes) (use kubectl version):
Environment: Composer
Cloud provider or hardware configuration: GCP
OS (e.g. from /etc/os-release):
Kernel (e.g. uname -a):
Install tools:
Others:
What happened:
This appears to be a code issue. CloudDataFusionStartPipelineOperator calls the hook's start_pipeline method before it checks success_states and calls wait_for_pipeline_state. Because start_pipeline itself also calls wait_for_pipeline_state, using that hook's 5-minute default timeout, the operator never reaches its own success_states check if the pipeline takes more than 5 minutes to start.
In other words, the wait_for_pipeline_state call made inside start_pipeline supersedes the parameters given to CloudDataFusionStartPipelineOperator whenever the pipeline takes longer than 5 minutes to enter the RUNNING state.
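The failure mode described above can be sketched with a toy simulation (all class and method names here are simplified stand-ins, not the actual provider source; the 5-minute constant mirrors the hook default mentioned above):

```python
# Toy reconstruction of the reported control flow. FakeHook is a
# hypothetical stand-in for DataFusionHook; timings are in seconds.
class FakeHook:
    START_TIMEOUT = 5 * 60  # the hook's own hardcoded 5-minute wait

    def __init__(self, seconds_until_running):
        self.seconds_until_running = seconds_until_running

    def wait_for_pipeline_state(self, success_states, timeout):
        # Succeeds only if the pipeline reaches a success state in time.
        if self.seconds_until_running > timeout:
            raise TimeoutError(f"not in {success_states} after {timeout}s")

    def start_pipeline(self):
        # The hook waits for RUNNING with its own default timeout,
        # ignoring whatever the operator was configured with.
        self.wait_for_pipeline_state(["RUNNING"], self.START_TIMEOUT)

def execute_operator(hook, success_states, pipeline_timeout):
    hook.start_pipeline()  # may raise before the next line ever runs
    hook.wait_for_pipeline_state(success_states, pipeline_timeout)

# A pipeline that needs 10 minutes to enter RUNNING fails inside
# start_pipeline, even though the operator allowed 20 minutes:
try:
    execute_operator(FakeHook(600), ["RUNNING", "SUCCEEDED"], 1200)
    outcome = "reached operator's own state check"
except TimeoutError:
    outcome = "timed out inside start_pipeline"
print(outcome)  # -> timed out inside start_pipeline
```

The user-supplied success_states and pipeline_timeout on the last line are never consulted, which matches the behavior reported here.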
What you expected to happen:
It's a code issue. I expect that when I provide the success_states and pipeline_timeout parameters to the Data Fusion operator, those parameters actually determine the success states and timeout used for the task.
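The expected wiring, again as a hedged toy sketch (hypothetical names, not the provider source): start_pipeline should submit the run and return, so the only wait is the one governed by the operator's parameters.

```python
# Toy sketch of the expected behavior: the user-supplied parameters
# flow through to the single wait_for_pipeline_state call.
class FakeHook:
    def __init__(self, seconds_until_running):
        self.seconds_until_running = seconds_until_running

    def start_pipeline(self):
        # Expected: submit the run and return immediately,
        # with no internal wait_for_pipeline_state call.
        pass

    def wait_for_pipeline_state(self, success_states, timeout):
        if self.seconds_until_running > timeout:
            raise TimeoutError(f"not in {success_states} after {timeout}s")
        return True

def execute_operator(hook, success_states, pipeline_timeout):
    hook.start_pipeline()
    return hook.wait_for_pipeline_state(success_states, pipeline_timeout)

# A pipeline needing 10 minutes to enter RUNNING now succeeds, because
# the wait is governed by the user's 20-minute pipeline_timeout:
ok = execute_operator(FakeHook(600), ["RUNNING", "SUCCEEDED"], 1200)
print(ok)  # -> True
```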
How to reproduce it:
Run the operator with the success_states and pipeline_timeout parameters set, against a pipeline that takes more than 5 minutes to enter the RUNNING state.
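A minimal reproduction DAG might look like the following (instance, pipeline, and location names are placeholders; the import path assumes the Google provider/backport package is installed, and the pipeline behind "example_pipeline" must take more than 5 minutes to reach RUNNING):

```python
from datetime import datetime

from airflow import DAG
# Import path assumes the google provider (or 1.10 backport) package.
from airflow.providers.google.cloud.operators.datafusion import (
    CloudDataFusionStartPipelineOperator,
)

with DAG(
    "datafusion_timeout_repro",  # hypothetical DAG id
    start_date=datetime(2020, 1, 1),
    schedule_interval=None,
) as dag:
    start_pipeline = CloudDataFusionStartPipelineOperator(
        task_id="start_pipeline",
        location="us-central1",             # placeholder
        instance_name="example-instance",   # placeholder
        pipeline_name="example_pipeline",   # placeholder
        success_states=["RUNNING", "SUCCEEDED"],  # intended override
        pipeline_timeout=1200,  # 20 min; task still fails at ~5 min
    )
```

Despite pipeline_timeout=1200, the task fails once the hook's internal 5-minute wait expires.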
Anything else we need to know: