Pipeline Executor
Depending on your data transformation needs, the Pipeline Executor transform can be set up to function in any of the following ways:
By default, the specified pipeline will be executed once for each input row. You can use the input row to set parameters and variables. The executor transform then passes this row to the pipeline in the form of a result row.
You can also pass a group of rows based on the value in a field: each time the field value changes, the specified pipeline is executed for the accumulated group. In that case, the first row in the group is used to set parameters or variables in the pipeline.
You can launch multiple copies of this transform to assist in parallel pipeline processing.
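The per-row and grouped execution modes above can be sketched conceptually as follows. This is illustrative Python only, not Hop's actual API; the `run_pipeline` callable, the dict-based row format, and the field names are all assumptions standing in for the configured child pipeline and its parameters.

```python
from itertools import groupby

def execute_per_row(rows, run_pipeline, param_field):
    """Default mode: run the child pipeline once per input row; each row
    supplies the parameter value and is passed along as a result row."""
    for row in rows:
        run_pipeline(params={param_field: row[param_field]},
                     result_rows=[row])

def execute_per_group(rows, run_pipeline, group_field):
    """Grouped mode: accumulate consecutive rows with the same field
    value; when the value changes, run the pipeline once for the whole
    group. The first row of the group supplies the parameter value."""
    for _value, group in groupby(rows, key=lambda r: r[group_field]):
        group = list(group)
        run_pipeline(params={group_field: group[0][group_field]},
                     result_rows=group)
```

With three input rows where the grouping field changes once, the first function would launch the pipeline three times, while the second launches it only twice, once per distinct run of values.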
Options
Parameter Tab
In this tab you can specify which field to use to set a given parameter or variable value. If multiple rows are passed to the pipeline, the first row is used to set the parameters or variables.
The button in the lower-right corner of the tab inserts all the defined parameters of the specified pipeline; for reference, each parameter's description is inserted into the static input value field.
Row Grouping Tab
On this tab you can specify the number of input rows that are passed to the pipeline in the form of result rows. You can read these result rows with a Get rows from result transform in that pipeline.
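The grouping behavior can be sketched as simple batching. This is an illustrative Python sketch, not Hop code; the `group_size` argument is an assumption standing in for the tab's row-count setting.

```python
def group_rows(rows, group_size):
    """Yield successive batches of `group_size` rows. Each batch is
    handed to the child pipeline as its set of result rows; a trailing
    partial batch still triggers one final execution."""
    for i in range(0, len(rows), group_size):
        yield rows[i:i + group_size]
```

For example, five input rows with a group size of 2 would trigger three executions of the child pipeline: two batches of two rows and a final batch of one.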
Execution Results Tab
You can specify result fields and to which transform to send them. If you don't need a certain result, simply leave its input field blank.
Result Rows Tab
In this tab you can specify the layout of the expected result rows of this pipeline and to which transform to send them after execution.
Result Files Tab
Here you can specify where to send the result files from the pipeline execution.