Introduction
You can use a collection of IBM DataStage REST APIs to process, compile, and run flows. DataStage flows are design-time assets that contain data integration logic in JSON-based schemas.
Process flows: Use the processing API to manipulate data that you have read from a data source before writing it to a data target.
Compile flows: Use the compile API to compile flows. All flows must be compiled before you run them.
Run flows: Use the run API to run flows. When you run a flow, the extraction, loading, and transforming tasks that were built into the flow design are carried out.
You can use the DataStage REST APIs for both DataStage in Cloud Pak for Data as a service and DataStage in Cloud Pak for Data.
For more information on the DataStage service, see the following links:
The code examples on this tab use the client library that is provided for Java.
Maven

<dependency>
    <groupId>com.ibm.cloud</groupId>
    <artifactId>datastage</artifactId>
    <version>0.0.1</version>
</dependency>
Gradle
compile 'com.ibm.cloud:datastage:0.0.1'
The code examples on this tab use the client library that is provided for Node.js.
Installation
npm install datastage
The code examples on this tab use the client library that is provided for Python.
Installation
pip install --upgrade "datastage>=0.0.1"
Authentication
Before you can call an IBM DataStage API, you must first create an IAM bearer token. Tokens support authenticated requests without embedding service credentials in every call. Each token is valid for one hour. After a token expires, you must create a new one if you want to continue using the API. The recommended method to retrieve a token programmatically is to create an API key for your IBM Cloud identity and then use the IAM token API to exchange that key for a token. For more information on authentication, see the following links:
- Cloud Pak for Data as a Service: Authenticating to Watson services
- Cloud Pak for Data (this information is applicable to DataStage even though the topic title refers to Watson Machine Learning):
- If IAM integration was disabled during installation (default setting): Getting a bearer token with IAM integration disabled
- If IAM integration was enabled during installation: Getting a bearer token with IAM integration enabled
Replace {apikey} and {url} in the following example with your service credentials.
curl -X {request_method} -u "apikey:{apikey}" "{url}/v4/{method}"
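A token can also be requested directly from the IAM token endpoint and then passed as a bearer token on each call. A minimal sketch in Python (the endpoint and grant type shown are the standard public IBM Cloud IAM values; <API_KEY> is a placeholder):

import requests

# Exchange an IBM Cloud API key for an IAM bearer token
response = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": "<API_KEY>",  # replace with your API key
    },
)
iam_token = response.json()["access_token"]  # valid for one hour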
Setting client options through external configuration
Example environment variables, where DATASTAGE_URL is the service endpoint URL, <API_KEY> is your IAM API key, and DATASTAGE_AUTH_URL is the IAM token endpoint:
DATASTAGE_AUTH_TYPE=iam
DATASTAGE_URL=https://api.dataplatform.cloud.ibm.com/data_intg
DATASTAGE_APIKEY=<API_KEY>
DATASTAGE_AUTH_URL=https://iam.cloud.ibm.com/identity/token
Example of constructing the service client
import com.ibm.cloud.datastage.v3.Datastage;
Datastage service = Datastage.newInstance();
Setting client options through external configuration
Example environment variables, where DATASTAGE_URL is the service endpoint URL, <API_KEY> is your IAM API key, and DATASTAGE_AUTH_URL is the IAM token endpoint:
DATASTAGE_AUTH_TYPE=iam
DATASTAGE_URL=https://api.dataplatform.cloud.ibm.com/data_intg
DATASTAGE_APIKEY=<API_KEY>
DATASTAGE_AUTH_URL=https://iam.cloud.ibm.com/identity/token
Example of constructing the service client
const DatastageV3 = require('datastage/datastage/v3');
const datastageService = DatastageV3.newInstance({});
Setting client options through external configuration
To authenticate when using this SDK, an external credentials file is required (for example, credentials.env). In this file, define the four fields that are required to authenticate your SDK use against IAM.
Example environment variables, where <API_KEY> is your IAM API key:
DATASTAGE_AUTH_TYPE=iam
DATASTAGE_URL=https://api.dataplatform.cloud.ibm.com/data_intg
DATASTAGE_APIKEY=<API_KEY>
DATASTAGE_AUTH_URL=https://iam.cloud.ibm.com/identity/token
Example of constructing the service client
import os
from datastage.datastage_v3 import DatastageV3

# Path to the external credentials file
config_file = 'credentials.env'

# A chosen service name; must match the variable prefix in the credentials file
custom_service_name = 'DATASTAGE'

datastage_service = None
if os.path.exists(config_file):
    # Point the SDK at the credentials file
    os.environ['IBM_CREDENTIALS_FILE'] = config_file
    # Create the DataStage client using the custom service name
    datastage_service = DatastageV3.new_instance(custom_service_name)
IBM Cloud URLs
The base URLs come from the service instance. To find the URL, view the service credentials by clicking the name of the service in the Resource list. Use the value of the URL. Add the method to form the complete API endpoint for your request.
https://api.dataplatform.cloud.ibm.com/data_intg
Example API request
curl --request GET --header "Content-Type: application/json" --header "Accept: application/json" --header "Authorization: Bearer <IAM token>" --url "https://api.dataplatform.cloud.ibm.com/data_intg/v3/data_intg_flows?project_id=<Project ID>&limit=10"
Replace <IAM token> and <Project ID> in this example with the values for your particular API call.
Error handling
DataStage uses standard HTTP response codes to indicate whether a method completed successfully. HTTP response codes in the 2xx range indicate success. A response code in the 4xx range indicates a failure with the request, and a response code in the 5xx range usually indicates an internal system error that cannot be resolved by the user. Response codes are listed with each method.
ErrorResponse
| Name | Type | Description |
| ---------------- | ------- | ------------------------------------ |
| error | string | Description of the problem. |
| code | integer | HTTP response code. |
| code_description | string | Response message. |
| warnings | string | Warnings associated with the error. |
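When you use the client libraries, a failed call surfaces as an exception that carries these fields. A minimal sketch with the Python client (assuming the standard ApiException raised by IBM Python SDKs via the ibm-cloud-sdk-core package):

from ibm_cloud_sdk_core import ApiException

try:
    datastage_service.list_datastage_flows(project_id='<PROJECT_ID>')
except ApiException as e:
    # code carries the HTTP response code; message carries the error description
    print('HTTP code:', e.code)
    print('Error:', e.message)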
Methods
Delete DataStage flows
Deletes the specified data flows in a project or catalog (either project_id or catalog_id must be set).

If the deletion of the data flows and their runs will take some time to finish, a 202 response is returned and the deletion continues asynchronously. All the data flow runs associated with the data flows are also deleted. A data flow that is still running is not deleted unless the force parameter is set to true; in that case the call returns immediately with a 202 response, and the related data flows are deleted after their runs are stopped.
DELETE /v3/data_intg_flows
ServiceCall<Void> deleteDatastageFlows(DeleteDatastageFlowsOptions deleteDatastageFlowsOptions)
deleteDatastageFlows(params)
delete_datastage_flows(self,
id: List[str],
*,
catalog_id: str = None,
project_id: str = None,
force: bool = None,
**kwargs
) -> DetailedResponse
Request
Use the DeleteDatastageFlowsOptions.Builder to create a DeleteDatastageFlowsOptions object that contains the parameter values for the deleteDatastageFlows method.
Query Parameters

- id (required): The list of DataStage flow IDs to delete.
- catalog_id: The ID of the catalog to use. Either catalog_id or project_id is required.
- project_id: The ID of the project to use. Either catalog_id or project_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
- force: Whether to stop all running data flows. Running DataStage flows must be stopped before the DataStage flows can be deleted.
curl -X DELETE --location --header "Authorization: Bearer {iam_token}" "{base_url}/v3/data_intg_flows?id=[]&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
String[] ids = new String[] { flowID, cloneFlowID };
DeleteDatastageFlowsOptions deleteDatastageFlowsOptions = new DeleteDatastageFlowsOptions.Builder()
    .id(Arrays.asList(ids))
    .projectId(projectID)
    .build();
datastageService.deleteDatastageFlows(deleteDatastageFlowsOptions).execute();
const params = {
  id: [flowID, cloneFlowID],
  projectId: projectID,
};
const res = await datastageService.deleteDatastageFlows(params);
response = datastage_service.delete_datastage_flows(
    id=createdFlowId,
    project_id=config['PROJECT_ID']
)
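If some of the flows might still be running, the force parameter can be set on the same call; the deletion then continues asynchronously after the runs are stopped. A minimal sketch with the Python client (the IDs and project are placeholders):

response = datastage_service.delete_datastage_flows(
    id=['<FLOW_ID_1>', '<FLOW_ID_2>'],
    project_id='<PROJECT_ID>',
    force=True,  # stop any running flows, then delete them
)
# a 202 status indicates that the deletion is continuing asynchronously
print(response.get_status_code())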
Response
Status Code
- The requested operation is in progress.
- The requested operation completed successfully.
- You are not authorized to access the service. See response for more information.
- You are not permitted to perform this action. See response for more information.
- An error occurred. See response for more information.
No Sample Response
Get metadata and lock information for DataStage flows
Lists the metadata, entity and lock information for DataStage flows that are contained in the specified project.

Use the following parameters to filter the results:

| Field | Match type | Example |
| ------------------ | ----------- | -------------------------------- |
| entity.name | Equals | entity.name=MyDataStageFlow |
| entity.name | Starts with | entity.name=starts:MyData |
| entity.description | Equals | entity.description=movement |
| entity.description | Starts with | entity.description=starts:data |

To sort the results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time (that is, the most recently created data flows are returned first).

| Field | Example |
| ----- | ------------------------------------------------------------- |
| sort | sort=+entity.name (sort by ascending name) |
| sort | sort=-metadata.create_time (sort by descending creation time) |

Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name, use sort=-metadata.create_time,+entity.name.
GET /v3/data_intg_flows
ServiceCall<DataFlowPagedCollection> listDatastageFlows(ListDatastageFlowsOptions listDatastageFlowsOptions)
listDatastageFlows(params)
list_datastage_flows(self,
*,
catalog_id: str = None,
project_id: str = None,
sort: str = None,
start: str = None,
limit: int = None,
entity_name: str = None,
entity_description: str = None,
**kwargs
) -> DetailedResponse
Request
Use the ListDatastageFlowsOptions.Builder to create a ListDatastageFlowsOptions object that contains the parameter values for the listDatastageFlows method.
Query Parameters

- catalog_id: The ID of the catalog to use. Either catalog_id or project_id is required.
- project_id: The ID of the project to use. Either catalog_id or project_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
- sort: The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
- start: The page token indicating where to start paging from.
- limit: The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used. Possible values: value ≥ 1. Example: 100
- entity.name: Filter results based on the specified name. Example: MyDataStageFlow
- entity.description: Filter results based on the specified description.
curl -X GET --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23&limit=100"
ListDatastageFlowsOptions listDatastageFlowsOptions = new ListDatastageFlowsOptions.Builder()
    .projectId(projectID)
    .limit(Long.valueOf("100"))
    .build();
Response<DataFlowPagedCollection> response = datastageService.listDatastageFlows(listDatastageFlowsOptions).execute();
DataFlowPagedCollection dataFlowPagedCollection = response.getResult();
System.out.println(dataFlowPagedCollection);
const params = {
  projectId: projectID,
  sort: 'name',
  limit: 100,
};
const res = await datastageService.listDatastageFlows(params);
data_flow_paged_collection = datastage_service.list_datastage_flows(
    project_id=config['PROJECT_ID'],
    limit=100
).get_result()
print(json.dumps(data_flow_paged_collection, indent=2))
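Filters and sort keys can be combined in a single call. A sketch with the Python client, using the entity_name filter and a compound sort key from the tables above (the project ID is a placeholder):

flows = datastage_service.list_datastage_flows(
    project_id='<PROJECT_ID>',
    entity_name='starts:MyData',                # names starting with "MyData"
    sort='-metadata.create_time,+entity.name',  # newest first, then by name
    limit=50,
).get_result()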
Response
A page from a collection of DataStage flows.

- data_flows: Metadata information for each DataStage flow.
  - entity: The underlying DataStage flow definition.
    - Asset type object.
    - Asset type object.
    - The description of the DataStage flow.
    - lock: Lock information for a DataStage flow asset.
      - entity: Entity information for a DataStage lock object.
        - DataStage flow ID that is locked.
        - Requester of the lock.
      - metadata: Metadata information for a DataStage lock object.
        - Lock status.
    - The name of the DataStage flow.
    - rov: The rules of visibility for an asset.
      - An array of members belonging to AssetEntityROV.
      - The values for mode are 0 (public: searchable and viewable by all), 8 (private: searchable by all, but not viewable unless view permission is given), or 16 (hidden: only searchable by users with view permissions).
    - A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
  - metadata: System metadata about an asset.
    - The ID of the asset.
    - The type of the asset.
    - The ID of the catalog that contains the asset. Either catalog_id or project_id is required.
    - The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
    - The IAM ID of the user that created the asset.
    - The description of the asset.
    - URL that can be used to get the asset.
    - Name of the asset.
    - Origin of the asset.
    - The ID of the project that contains the asset. Either catalog_id or project_id is required.
    - A unique string that identifies the asset.
    - Size of the asset.
    - Custom data to be associated with a given object.
    - A list of tags that can be used to identify different types of data flow.
    - usage: Metadata usage information about an asset.
      - Number of times this asset has been accessed.
      - The timestamp when the asset was last accessed (same RFC 3339 date-time format).
      - The IAM ID of the user that last accessed the asset.
      - The timestamp when the asset was last modified (same RFC 3339 date-time format).
      - The IAM ID of the user that last modified the asset.
- first: URI of a resource.
- last: URI of a resource.
- The number of data flows requested to be returned.
- next: URI of a resource.
- prev: URI of a resource.
- The total number of DataStage flows available.
Status Code
- The requested operation completed successfully.
- You are not authorized to access the service. See response for more information.
- You are not permitted to perform this action. See response for more information.
- An error occurred. See response for more information.
{ "data_flows": [ { "entity": { "data_intg_flow": { "mime_type": "application/json", "dataset": false } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "create_time": "2021-04-03 15:32:55+00:00", "creator_id": "IBMid-xxxxxxxxx", "description": " ", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?project_id={project_id}", "name": "{job_name}", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 5780, "usage": { "access_count": 0, "last_access_time": "2021-04-03 15:33:01.320000+00:00", "last_accessor_id": "IBMid-xxxxxxxxx", "last_modification_time": "2021-04-03 15:33:01.320000+00:00", "last_modifier_id": "IBMid-xxxxxxxxx" } } } ], "first": { "href": "{url}/data_intg/v3/data_intg_flows?project_id={project_id}&limit=2" }, "next": { "href": "{url}/data_intg/v3/data_intg_flows?project_id={project_id}&limit=2&start=g1AAAADOeJzLYWBgYMpgTmHQSklKzi9KdUhJMjTUS8rVTU7WLS3WLc4vLcnQNbLQS87JL01JzCvRy0styQHpyWMBkgwNQOr____9WWCxXCAhYmRgZKhrYKJrYBxiaGplbGRlahqVaJCFZocB8XYcgNhxHrcdhlamhlGJ-llZAD4lOMI" }, "total_count": 135 }
{ "data_flows": [ { "entity": { "data_intg_flow": { "mime_type": "application/json", "dataset": false } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "create_time": "2021-04-03 15:32:55+00:00", "creator_id": "IBMid-xxxxxxxxx", "description": " ", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?project_id={project_id}", "name": "{job_name}", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 5780, "usage": { "access_count": 0, "last_access_time": "2021-04-03 15:33:01.320000+00:00", "last_accessor_id": "IBMid-xxxxxxxxx", "last_modification_time": "2021-04-03 15:33:01.320000+00:00", "last_modifier_id": "IBMid-xxxxxxxxx" } } } ], "first": { "href": "{url}/data_intg/v3/data_intg_flows?project_id={project_id}&limit=2" }, "next": { "href": "{url}/data_intg/v3/data_intg_flows?project_id={project_id}&limit=2&start=g1AAAADOeJzLYWBgYMpgTmHQSklKzi9KdUhJMjTUS8rVTU7WLS3WLc4vLcnQNbLQS87JL01JzCvRy0styQHpyWMBkgwNQOr____9WWCxXCAhYmRgZKhrYKJrYBxiaGplbGRlahqVaJCFZocB8XYcgNhxHrcdhlamhlGJ-llZAD4lOMI" }, "total_count": 135 }
Create DataStage flow
Creates a DataStage flow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.
POST /v3/data_intg_flows
ServiceCall<DataIntgFlow> createDatastageFlows(CreateDatastageFlowsOptions createDatastageFlowsOptions)
createDatastageFlows(params)
create_datastage_flows(self,
data_intg_flow_name: str,
*,
pipeline_flows: 'PipelineJson' = None,
catalog_id: str = None,
project_id: str = None,
asset_category: str = None,
**kwargs
) -> DetailedResponse
Request
Use the CreateDatastageFlowsOptions.Builder to create a CreateDatastageFlowsOptions object that contains the parameter values for the createDatastageFlows method.
Query Parameters

- data_intg_flow_name (required): The data flow name.
- catalog_id: The ID of the catalog to use. Either catalog_id or project_id is required.
- project_id: The ID of the project to use. Either catalog_id or project_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
- asset_category: The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter. Allowable values: [system, user]

Request Body

Pipeline JSON to be attached.

- pipeline_flows: Pipeline flow to be stored.
  - Object containing app-specific data.
  - The document type. Example: pipeline
  - Array of parameter set references. Example: [ { "name": "Test Param Set", "project_ref": "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23", "ref": "eeabf991-b69e-4f8c-b9f1-e6f2129b9a57" } ]
  - Document identifier, GUID recommended. Example: 84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff
  - Refers to the JSON schema used to validate documents of this type. Example: http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json
  - Parameters for the flow document. Example: { "local_parameters": [ { "name": "srcFile", "type": "string" }, { "name": "my_connection", "subtype": "connection", "type": "asset_id", "value": "dfe7c595-81d8-461e-8d13-a7c544f3f500" } ] }
  - pipelines
    - Object containing app-specific data. Example: { "ui_data": { "comments": [] } }
    - A brief description of the DataStage flow. Example: A test DataStage flow.
    - Unique identifier. Example: fa1b859a-d592-474d-b56c-2137e4efa4bc
    - Name of the pipeline. Example: ContainerC1
    - Array of pipeline nodes. Example: [ { "app_data": { "ui_data": { "description": "Produce a set of mock data based on the specified metadata", "image": "/data-intg/flows/graphics/palette/PxRowGenerator.svg", "label": "Row_Generator_1", "x_pos": 108, "y_pos": 162 } }, "id": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "73a5fb2c-f499-4c75-a8a7-71cea90f5105" }, "ui_data": { "label": "outPort" } }, "id": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "parameters": { "records": 10 }, "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "ui_data": { "description": "Print row column values to either the job log or to a separate output link", "image": "/data-intg/flows/graphics/palette/PxPeek.svg", "label": "Peek_1", "x_pos": 342, "y_pos": 162 } }, "id": "4195b012-d3e7-4f74-8099-e7b23ec6ebb9", "inputs": [ { "app_data": { "ui_data": { "label": "inPort" } }, "id": "c4195b34-8b4a-473f-b987-fa6d028f3968", "links": [ { "app_data": { "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_1", "label": "Link_1", "outline": true, "path": "", "position": "middle" } ] } }, "id": "73a5fb2c-f499-4c75-a8a7-71cea90f5105", "link_name": "Link_1", "node_id_ref": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "port_id_ref": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "type_attr": "PRIMARY" } ], "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "op": "PxPeek", "outputs": [ { "app_data": { "ui_data": { "label": "outPort" } }, "id": "" } ], "parameters": { "all": " ", "columns": " ", "dataset": " ", "input_count": 1, "name": "name", "nrecs": 10, "output_count": 0, "selection": " " }, "type": "execution_node" } ]
    - Reference to the runtime type. Example: pxOsh
  - Reference to the primary (main) pipeline flow within the document. Example: fa1b859a-d592-474d-b56c-2137e4efa4bc
  - Runtime information for pipeline flow. Example: [ { "id": "pxOsh", "name": "pxOsh" } ]
  - Array of data record schemas used in the pipeline. Example: [ { "fields": [ { "app_data": { "is_unicode_string": false, "odbc_type": "INTEGER", "type_code": "INT32" }, "metadata": { "decimal_precision": 6, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 6, "min_length": 0 }, "name": "ID", "nullable": false, "type": "integer" } ], "id": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ]
  - Pipeline flow version. Example: 3.0
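The pipeline flow document can also be built inline rather than read from a file. A minimal skeleton, under stated assumptions: the values are taken from the examples above, the key names (doc_type, id, json_schema, primary_pipeline, pipelines, runtimes, schemas, version) follow the pipeline-flow v3 schema referenced there, and the empty nodes array produces a flow with no stages:

# Minimal pipeline-flow document (sketch; the GUIDs are illustrative)
pipeline_flows = {
    'doc_type': 'pipeline',
    'version': '3.0',
    'json_schema': 'http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json',
    'id': '84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff',
    'primary_pipeline': 'fa1b859a-d592-474d-b56c-2137e4efa4bc',
    'pipelines': [{
        'id': 'fa1b859a-d592-474d-b56c-2137e4efa4bc',
        'runtime_ref': 'pxOsh',
        'nodes': [],
        'app_data': {'ui_data': {'comments': []}},
    }],
    'runtimes': [{'id': 'pxOsh', 'name': 'pxOsh'}],
    'schemas': [],
}

data_intg_flow = datastage_service.create_datastage_flows(
    data_intg_flow_name='emptyFlow',
    pipeline_flows=pipeline_flows,
    project_id='<PROJECT_ID>',
).get_result()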
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" --header "Content-Type: application/json;charset=utf-8" --data '{}' "{base_url}/v3/data_intg_flows?data_intg_flow_name={data_intg_flow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
PipelineJson exampleFlow = PipelineFlowHelper.buildPipelineFlow(flowJson);
CreateDatastageFlowsOptions createDatastageFlowsOptions = new CreateDatastageFlowsOptions.Builder()
    .dataIntgFlowName(flowName)
    .pipelineFlows(exampleFlow)
    .projectId(projectID)
    .build();
Response<DataIntgFlow> response = datastageService.createDatastageFlows(createDatastageFlowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();
System.out.println(dataIntgFlow);
const pipelineJsonFromFile = JSON.parse(fs.readFileSync('testInput/rowgen_peek.json', 'utf-8'));
const params = {
  dataIntgFlowName,
  pipelineFlows: pipelineJsonFromFile,
  projectId: projectID,
  assetCategory: 'system',
};
const res = await datastageService.createDatastageFlows(params);
data_intg_flow = datastage_service.create_datastage_flows(
    data_intg_flow_name='testFlowJob1',
    pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleFlow.json'),
    project_id=config['PROJECT_ID']
).get_result()
print(json.dumps(data_intg_flow, indent=2))
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).

- entity: The underlying DataStage flow definition.
  - Asset type object.
  - Asset type object.
  - The description of the DataStage flow.
  - lock: Lock information for a DataStage flow asset.
    - entity: Entity information for a DataStage lock object.
      - DataStage flow ID that is locked.
      - Requester of the lock.
    - metadata: Metadata information for a DataStage lock object.
      - Lock status.
  - The name of the DataStage flow.
  - rov: The rules of visibility for an asset.
    - An array of members belonging to AssetEntityROV.
    - The values for mode are 0 (public: searchable and viewable by all), 8 (private: searchable by all, but not viewable unless view permission is given), or 16 (hidden: only searchable by users with view permissions).
  - A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
- metadata: System metadata about an asset.
  - The ID of the asset.
  - The type of the asset.
  - The ID of the catalog that contains the asset. Either catalog_id or project_id is required.
  - The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
  - The IAM ID of the user that created the asset.
  - The description of the asset.
  - URL that can be used to get the asset.
  - Name of the asset.
  - Origin of the asset.
  - The ID of the project that contains the asset. Either catalog_id or project_id is required.
  - A unique string that identifies the asset.
  - Size of the asset.
  - Custom data to be associated with a given object.
  - A list of tags that can be used to identify different types of data flow.
  - usage: Metadata usage information about an asset.
    - Number of times this asset has been accessed.
    - The timestamp when the asset was last accessed (same RFC 3339 date-time format).
    - The IAM ID of the user that last accessed the asset.
    - The timestamp when the asset was last modified (same RFC 3339 date-time format).
    - The IAM ID of the user that last modified the asset.
Status Code
- The requested operation completed successfully.
- You are not authorized to access the service. See response for more information.
- You are not permitted to perform this action. See response for more information.
- An error occurred. See response for more information.
{ "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}", "name": "{job_name}", "origin_country": "US", "resource_key": "{project_id}/data_intg_flow/{job_name}" } }
{ "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}", "name": "{job_name}", "origin_country": "US", "resource_key": "{project_id}/data_intg_flow/{job_name}" } }
Get DataStage flow
Gets the DataStage flow that is contained in the specified project. Attachments, metadata, and a limited number of attributes from the entity of the DataStage flow are returned.
GET /v3/data_intg_flows/{data_intg_flow_id}
ServiceCall<DataIntgFlowJson> getDatastageFlows(GetDatastageFlowsOptions getDatastageFlowsOptions)
getDatastageFlows(params)
get_datastage_flows(self,
data_intg_flow_id: str,
*,
catalog_id: str = None,
project_id: str = None,
**kwargs
) -> DetailedResponse
Request
Use the GetDatastageFlowsOptions.Builder to create a GetDatastageFlowsOptions object that contains the parameter values for the getDatastageFlows method.
Path Parameters

- data_intg_flow_id (required): The DataStage flow ID to use.

Query Parameters

- catalog_id: The ID of the catalog to use. Either catalog_id or project_id is required.
- project_id: The ID of the project to use. Either catalog_id or project_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
curl -X GET --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows/{data_intg_flow_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
GetDatastageFlowsOptions getDatastageFlowsOptions = new GetDatastageFlowsOptions.Builder()
    .dataIntgFlowId(flowID)
    .projectId(projectID)
    .build();
Response<DataIntgFlowJson> response = datastageService.getDatastageFlows(getDatastageFlowsOptions).execute();
DataIntgFlowJson dataIntgFlowJson = response.getResult();
System.out.println(dataIntgFlowJson);
const params = {
  dataIntgFlowId: assetID,
  projectId: projectID,
};
const res = await datastageService.getDatastageFlows(params);
data_intg_flow_json = datastage_service.get_datastage_flows(
    data_intg_flow_id=createdFlowId,
    project_id=config['PROJECT_ID']
).get_result()
print(json.dumps(data_intg_flow_json, indent=2))
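Per the response schema that follows, the returned document carries the pipeline under attachments alongside the asset entity and metadata. A small sketch of reading it (assuming plain dict access on the result body):

flow_json = datastage_service.get_datastage_flows(
    data_intg_flow_id='<FLOW_ID>',
    project_id='<PROJECT_ID>'
).get_result()

# the pipeline flow document (see "attachments" in the schema below)
pipeline = flow_json['attachments']
print(list(pipeline.keys()))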
Response
A pipeline JSON containing operations to apply to source(s).
Pipeline flow to be stored.
The underlying DataStage flow definition.
System metadata about an asset.
A pipeline JSON containing operations to apply to source(s).
Pipeline flow to be stored.
- attachments
Object containing app-specific data.
The document type.
Examples:pipeline
Array of parameter set references.
Examples:[ { "name": "Test Param Set", "project_ref": "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23", "ref": "eeabf991-b69e-4f8c-b9f1-e6f2129b9a57" } ]
Document identifier, GUID recommended.
Examples:84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff
Refers to the JSON schema used to validate documents of this type.
Examples:http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json
Parameters for the flow document.
Examples:{ "local_parameters": [ { "name": "srcFile", "type": "string" }, { "name": "my_connection", "subtype": "connection", "type": "asset_id", "value": "dfe7c595-81d8-461e-8d13-a7c544f3f500" } ] }
- pipelines
Object containing app-specific data.
Examples:{ "ui_data": { "comments": [] } }
A brief description of the DataStage flow.
Examples:A test DataStage flow.
Unique identifier.
Examples:fa1b859a-d592-474d-b56c-2137e4efa4bc
Name of the pipeline.
Examples:ContainerC1
Array of pipeline nodes.
Examples:[ { "app_data": { "ui_data": { "description": "Produce a set of mock data based on the specified metadata", "image": "/data-intg/flows/graphics/palette/PxRowGenerator.svg", "label": "Row_Generator_1", "x_pos": 108, "y_pos": 162 } }, "id": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "73a5fb2c-f499-4c75-a8a7-71cea90f5105" }, "ui_data": { "label": "outPort" } }, "id": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "parameters": { "records": 10 }, "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "ui_data": { "description": "Print row column values to either the job log or to a separate output link", "image": "/data-intg/flows/graphics/palette/PxPeek.svg", "label": "Peek_1", "x_pos": 342, "y_pos": 162 } }, "id": "4195b012-d3e7-4f74-8099-e7b23ec6ebb9", "inputs": [ { "app_data": { "ui_data": { "label": "inPort" } }, "id": "c4195b34-8b4a-473f-b987-fa6d028f3968", "links": [ { "app_data": { "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_1", "label": "Link_1", "outline": true, "path": "", "position": "middle" } ] } }, "id": "73a5fb2c-f499-4c75-a8a7-71cea90f5105", "link_name": "Link_1", "node_id_ref": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "port_id_ref": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "type_attr": "PRIMARY" } ], "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "op": "PxPeek", "outputs": [ { "app_data": { "ui_data": { "label": "outPort" } }, "id": "" } ], "parameters": { "all": " ", "columns": " ", "dataset": " ", "input_count": 1, "name": "name", "nrecs": 10, "output_count": 0, "selection": " " }, "type": "execution_node" } ]
Reference to the runtime type.
Examples:pxOsh
Reference to the primary (main) pipeline flow within the document.
Examples:fa1b859a-d592-474d-b56c-2137e4efa4bc
Runtime information for pipeline flow.
Examples:[ { "id": "pxOsh", "name": "pxOsh" } ]
Array of data record schemas used in the pipeline.
Examples:[ { "fields": [ { "app_data": { "is_unicode_string": false, "odbc_type": "INTEGER", "type_code": "INT32" }, "metadata": { "decimal_precision": 6, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 6, "min_length": 0 }, "name": "ID", "nullable": false, "type": "integer" } ], "id": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ]
Pipeline flow version.
Examples:3.0
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
- lock
Entity information for a DataStage lock object.
- entity
DataStage flow ID that is locked.
Requester of the lock.
Metadata information for a DataStage lock object.
- metadata
Lock status.
The name of the DataStage flow.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset. Either catalog_id or project_id is required.
A unique string that identifies an asset.
Size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Unexpected error.
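On a non-200 status, the SDKs raise an exception rather than returning a result. A minimal Python sketch, assuming the ApiException class from the ibm_cloud_sdk_core package that the SDK is built on:

from ibm_cloud_sdk_core import ApiException

try:
    data_intg_flow_json = datastage_service.get_datastage_flows(
        data_intg_flow_id=createdFlowId,
        project_id=config['PROJECT_ID']
    ).get_result()
except ApiException as e:
    # e.code carries the HTTP status (for example 401 or 403, as listed
    # above); e.message carries the service's explanation.
    print('Get flow failed with status', e.code, '-', e.message)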
{ "attachments": { "app_data": { "datastage": { "external_parameters": [] } }, "doc_type": "pipeline", "id": "98cc1fa0-0fd8-4d55-9b27-d477096b4b37", "json_schema": "{url}/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json", "pipelines": [ { "app_data": { "datastage": { "runtime_column_propagation": "false" }, "ui_data": { "comments": [] } }, "id": "287b2b30-95ff-4cc8-b18f-92e23c464134", "nodes": [ { "app_data": { "datastage": { "outputs_order": "46e18367-1820-4fe8-8c7c-d8badbc76aa3" }, "ui_data": { "image": "../graphics/palette/PxRowGenerator.svg", "label": "RowGen_1", "x_pos": 239, "y_pos": 236 } }, "id": "77e6d535-8312-4692-8850-c129dcf921ed", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "55b884a7-9cfb-4e02-802b-82444ee95bb5" }, "ui_data": { "label": "outPort" } }, "id": "46e18367-1820-4fe8-8c7c-d8badbc76aa3", "parameters": { "buf_free_run": 50, "disk_write_inc": 1048576, "max_mem_buf_size": 3145728, "queue_upper_size": 0, "records": 10 }, "schema_ref": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "datastage": { "inputs_order": "9e842525-7bbf-4a42-ae95-49ae325e0c87" }, "ui_data": { "image": "../graphics/palette/informix.svg", "label": "informixTgt", "x_pos": 690, "y_pos": 229 } }, "connection": { "project_ref": "{project_id}", "properties": { "create_statement": "CREATE TABLE custid(customer_num int)", "table_action": "append", "table_name": "custid", "write_mode": "insert" }, "ref": "85193161-aa63-4cc5-80e7-7bfcdd59c438" }, "id": "8b4933d9-32c0-4c40-9c47-d8791ab12baf", "inputs": [ { "app_data": { "datastage": {}, "ui_data": { "label": "inPort" } }, "id": "9e842525-7bbf-4a42-ae95-49ae325e0c87", "links": [ { "app_data": { "datastage": {}, "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_3", "label": "Link_3", "outline": true, "path": "", "position": "middle" } ] } }, "id": "55b884a7-9cfb-4e02-802b-82444ee95bb5", "link_name": "Link_3", "node_id_ref": "77e6d535-8312-4692-8850-c129dcf921ed", "port_id_ref": "46e18367-1820-4fe8-8c7c-d8badbc76aa3", "type_attr": "PRIMARY" } ], "parameters": { "part_coll": "part_type" }, "schema_ref": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "op": "informix", "parameters": { "input_count": 1, "output_count": 0 }, "type": "binding" } ], "runtime_ref": "pxOsh" } ], "primary_pipeline": "287b2b30-95ff-4cc8-b18f-92e23c464134", "schemas": [ { "fields": [ { "app_data": { "column_reference": "customer_num", "is_unicode_string": false, "odbc_type": "INTEGER", "table_def": "Saved\\\\Link_3\\\\ifx_customer", "type_code": "INT32" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": false, "is_signed": true, "item_index": 0, "max_length": 0, "min_length": 0 }, "name": "customer_num", "nullable": false, "type": "integer" } ], "id": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "version": "3.0" }, "entity": { "data_intg_flow": { "mime_type": "application/json", "dataset": false } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "create_time": "2021-04-08 17:14:08+00:00", "creator_id": "IBMid-xxxxxxxxxx", "description": "", "name": "{job_name}", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 2712, "usage": { "access_count": 0, "last_access_time": "2021-04-08 17:14:10.193000+00:00", "last_accessor_id": "IBMid-xxxxxxxxxx", "last_modification_time": "2021-04-08 17:14:10.193000+00:00", "last_modifier_id": 
"IBMid-xxxxxxxxxx" } } }
{ "attachments": { "app_data": { "datastage": { "external_parameters": [] } }, "doc_type": "pipeline", "id": "98cc1fa0-0fd8-4d55-9b27-d477096b4b37", "json_schema": "{url}/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json", "pipelines": [ { "app_data": { "datastage": { "runtime_column_propagation": "false" }, "ui_data": { "comments": [] } }, "id": "287b2b30-95ff-4cc8-b18f-92e23c464134", "nodes": [ { "app_data": { "datastage": { "outputs_order": "46e18367-1820-4fe8-8c7c-d8badbc76aa3" }, "ui_data": { "image": "../graphics/palette/PxRowGenerator.svg", "label": "RowGen_1", "x_pos": 239, "y_pos": 236 } }, "id": "77e6d535-8312-4692-8850-c129dcf921ed", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "55b884a7-9cfb-4e02-802b-82444ee95bb5" }, "ui_data": { "label": "outPort" } }, "id": "46e18367-1820-4fe8-8c7c-d8badbc76aa3", "parameters": { "buf_free_run": 50, "disk_write_inc": 1048576, "max_mem_buf_size": 3145728, "queue_upper_size": 0, "records": 10 }, "schema_ref": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "datastage": { "inputs_order": "9e842525-7bbf-4a42-ae95-49ae325e0c87" }, "ui_data": { "image": "../graphics/palette/informix.svg", "label": "informixTgt", "x_pos": 690, "y_pos": 229 } }, "connection": { "project_ref": "{project_id}", "properties": { "create_statement": "CREATE TABLE custid(customer_num int)", "table_action": "append", "table_name": "custid", "write_mode": "insert" }, "ref": "85193161-aa63-4cc5-80e7-7bfcdd59c438" }, "id": "8b4933d9-32c0-4c40-9c47-d8791ab12baf", "inputs": [ { "app_data": { "datastage": {}, "ui_data": { "label": "inPort" } }, "id": "9e842525-7bbf-4a42-ae95-49ae325e0c87", "links": [ { "app_data": { "datastage": {}, "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_3", "label": "Link_3", "outline": true, "path": "", "position": "middle" } ] } }, "id": "55b884a7-9cfb-4e02-802b-82444ee95bb5", "link_name": "Link_3", "node_id_ref": "77e6d535-8312-4692-8850-c129dcf921ed", "port_id_ref": "46e18367-1820-4fe8-8c7c-d8badbc76aa3", "type_attr": "PRIMARY" } ], "parameters": { "part_coll": "part_type" }, "schema_ref": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "op": "informix", "parameters": { "input_count": 1, "output_count": 0 }, "type": "binding" } ], "runtime_ref": "pxOsh" } ], "primary_pipeline": "287b2b30-95ff-4cc8-b18f-92e23c464134", "schemas": [ { "fields": [ { "app_data": { "column_reference": "customer_num", "is_unicode_string": false, "odbc_type": "INTEGER", "table_def": "Saved\\\\Link_3\\\\ifx_customer", "type_code": "INT32" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": false, "is_signed": true, "item_index": 0, "max_length": 0, "min_length": 0 }, "name": "customer_num", "nullable": false, "type": "integer" } ], "id": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "version": "3.0" }, "entity": { "data_intg_flow": { "mime_type": "application/json", "dataset": false } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "create_time": "2021-04-08 17:14:08+00:00", "creator_id": "IBMid-xxxxxxxxxx", "description": "", "name": "{job_name}", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 2712, "usage": { "access_count": 0, "last_access_time": "2021-04-08 17:14:10.193000+00:00", "last_accessor_id": "IBMid-xxxxxxxxxx", "last_modification_time": "2021-04-08 17:14:10.193000+00:00", "last_modifier_id": 
"IBMid-xxxxxxxxxx" } } }
Update DataStage flow
Modifies a data flow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.
PUT /v3/data_intg_flows/{data_intg_flow_id}
ServiceCall<DataIntgFlow> updateDatastageFlows(UpdateDatastageFlowsOptions updateDatastageFlowsOptions)
updateDatastageFlows(params)
update_datastage_flows(self,
data_intg_flow_id: str,
data_intg_flow_name: str,
*,
pipeline_flows: 'PipelineJson' = None,
catalog_id: str = None,
project_id: str = None,
**kwargs
) -> DetailedResponse
Request
Use the UpdateDatastageFlowsOptions.Builder to create a UpdateDatastageFlowsOptions object that contains the parameter values for the updateDatastageFlows method.
Path Parameters
The DataStage flow ID to use.
Query Parameters
The data flow name.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
Pipeline JSON to be attached.
Pipeline flow to be stored.
The updateDatastageFlows options.
The DataStage flow ID to use.
The data flow name.
Pipeline flow to be stored.
- pipelineFlows
Object containing app-specific data.
The document type.
Examples:pipeline
Array of parameter set references.
Examples:[ { "name": "Test Param Set", "project_ref": "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23", "ref": "eeabf991-b69e-4f8c-b9f1-e6f2129b9a57" } ]
Document identifier, GUID recommended.
Examples:84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff
Refers to the JSON schema used to validate documents of this type.
Examples:http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json
Parameters for the flow document.
Examples:{ "local_parameters": [ { "name": "srcFile", "type": "string" }, { "name": "my_connection", "subtype": "connection", "type": "asset_id", "value": "dfe7c595-81d8-461e-8d13-a7c544f3f500" } ] }
- pipelines
Object containing app-specific data.
Examples:{ "ui_data": { "comments": [] } }
A brief description of the DataStage flow.
Examples:A test DataStage flow.
Unique identifier.
Examples:fa1b859a-d592-474d-b56c-2137e4efa4bc
Name of the pipeline.
Examples:ContainerC1
Array of pipeline nodes.
Examples:[ { "app_data": { "ui_data": { "description": "Produce a set of mock data based on the specified metadata", "image": "/data-intg/flows/graphics/palette/PxRowGenerator.svg", "label": "Row_Generator_1", "x_pos": 108, "y_pos": 162 } }, "id": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "73a5fb2c-f499-4c75-a8a7-71cea90f5105" }, "ui_data": { "label": "outPort" } }, "id": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "parameters": { "records": 10 }, "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "ui_data": { "description": "Print row column values to either the job log or to a separate output link", "image": "/data-intg/flows/graphics/palette/PxPeek.svg", "label": "Peek_1", "x_pos": 342, "y_pos": 162 } }, "id": "4195b012-d3e7-4f74-8099-e7b23ec6ebb9", "inputs": [ { "app_data": { "ui_data": { "label": "inPort" } }, "id": "c4195b34-8b4a-473f-b987-fa6d028f3968", "links": [ { "app_data": { "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_1", "label": "Link_1", "outline": true, "path": "", "position": "middle" } ] } }, "id": "73a5fb2c-f499-4c75-a8a7-71cea90f5105", "link_name": "Link_1", "node_id_ref": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "port_id_ref": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "type_attr": "PRIMARY" } ], "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "op": "PxPeek", "outputs": [ { "app_data": { "ui_data": { "label": "outPort" } }, "id": "" } ], "parameters": { "all": " ", "columns": " ", "dataset": " ", "input_count": 1, "name": "name", "nrecs": 10, "output_count": 0, "selection": " " }, "type": "execution_node" } ]
Reference to the runtime type.
Examples:pxOsh
Reference to the primary (main) pipeline flow within the document.
Examples:fa1b859a-d592-474d-b56c-2137e4efa4bc
Runtime information for pipeline flow.
Examples:[ { "id": "pxOsh", "name": "pxOsh" } ]
Array of data record schemas used in the pipeline.
Examples:[ { "fields": [ { "app_data": { "is_unicode_string": false, "odbc_type": "INTEGER", "type_code": "INT32" }, "metadata": { "decimal_precision": 6, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 6, "min_length": 0 }, "name": "ID", "nullable": false, "type": "integer" } ], "id": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ]
Pipeline flow version.
Examples:3.0
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
curl -X PUT --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" --header "Content-Type: application/json;charset=utf-8" --data '{}' "{base_url}/v3/data_intg_flows/{data_intg_flow_id}?data_intg_flow_name={data_intg_flow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
PipelineJson exampleFlowUpdated = PipelineFlowHelper.buildPipelineFlow(updatedFlowJson);

UpdateDatastageFlowsOptions updateDatastageFlowsOptions = new UpdateDatastageFlowsOptions.Builder()
  .dataIntgFlowId(flowID)
  .dataIntgFlowName(flowName)
  .pipelineFlows(exampleFlowUpdated)
  .projectId(projectID)
  .build();

Response<DataIntgFlow> response = datastageService.updateDatastageFlows(updateDatastageFlowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();

System.out.println(dataIntgFlow);
const params = {
  dataIntgFlowId: assetID,
  dataIntgFlowName,
  pipelineFlows: pipelineJsonFromFile,
  projectId: projectID,
  assetCategory: 'system',
};

const res = await datastageService.updateDatastageFlows(params);
data_intg_flow = datastage_service.update_datastage_flows(
  data_intg_flow_id=createdFlowId,
  data_intg_flow_name='testFlowJob1Updated',
  pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleFlowUpdated.json'),
  project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(data_intg_flow, indent=2))
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
System metadata about an asset.
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
- lock
Entity information for a DataStage lock object.
- entity
DataStage flow ID that is locked.
Requester of the lock.
Metadata information for a DataStage lock object.
- metadata
Lock status.
The name of the DataStage flow.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset. Either catalog_id or project_id is required.
A unique string that identifies an asset.
Size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "attachments": [ { "asset_type": "data_intg_flow", "attachment_id": "9081dd6b-0ab7-47b5-8233-c10c6e64509d", "href": "{url}/v2/assets/{asset_id}/attachments/9081dd6b-0ab7-47b5-8233-c10c6e64509d?project_id={project_id}", "mime": "application/json", "name": "data_intg_flows", "object_key": "data_intg_flow/{project_id}{asset_id}", "object_key_is_read_only": false, "private_url": false } ], "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "catalog_id": "{catalog_id}", "create_time": "2021-04-08 17:14:08+00:00", "creator_id": "IBMid-xxxxxxxxxx", "description": "", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?catalog_id={catalog_id}", "name": "{job_name}", "origin_country": "us", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 2712, "tags": [], "usage": { "access_count": 0, "last_access_time": "2021-04-08 17:21:33.936000+00:00", "last_accessor_id": "IBMid-xxxxxxxxxx" } } }
{ "attachments": [ { "asset_type": "data_intg_flow", "attachment_id": "9081dd6b-0ab7-47b5-8233-c10c6e64509d", "href": "{url}/v2/assets/{asset_id}/attachments/9081dd6b-0ab7-47b5-8233-c10c6e64509d?project_id={project_id}", "mime": "application/json", "name": "data_intg_flows", "object_key": "data_intg_flow/{project_id}{asset_id}", "object_key_is_read_only": false, "private_url": false } ], "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "catalog_id": "{catalog_id}", "create_time": "2021-04-08 17:14:08+00:00", "creator_id": "IBMid-xxxxxxxxxx", "description": "", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?catalog_id={catalog_id}", "name": "{job_name}", "origin_country": "us", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 2712, "tags": [], "usage": { "access_count": 0, "last_access_time": "2021-04-08 17:21:33.936000+00:00", "last_accessor_id": "IBMid-xxxxxxxxxx" } } }
Clone DataStage flow
Create a DataStage flow in the specified project or catalog based on an existing DataStage flow in the same project or catalog.
POST /v3/data_intg_flows/{data_intg_flow_id}/clone
ServiceCall<DataIntgFlow> cloneDatastageFlows(CloneDatastageFlowsOptions cloneDatastageFlowsOptions)
cloneDatastageFlows(params)
clone_datastage_flows(self,
data_intg_flow_id: str,
*,
catalog_id: str = None,
project_id: str = None,
**kwargs
) -> DetailedResponse
Request
Use the CloneDatastageFlowsOptions.Builder to create a CloneDatastageFlowsOptions object that contains the parameter values for the cloneDatastageFlows method.
Path Parameters
The DataStage flow ID to use.
Query Parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The cloneDatastageFlows options.
The DataStage flow ID to use.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows/{data_intg_flow_id}/clone?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
CloneDatastageFlowsOptions cloneDatastageFlowsOptions = new CloneDatastageFlowsOptions.Builder()
  .dataIntgFlowId(flowID)
  .projectId(projectID)
  .build();

Response<DataIntgFlow> response = datastageService.cloneDatastageFlows(cloneDatastageFlowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();

System.out.println(dataIntgFlow);
const params = {
  dataIntgFlowId: assetID,
  projectId: projectID,
};

const res = await datastageService.cloneDatastageFlows(params);
data_intg_flow = datastage_service.clone_datastage_flows(
  data_intg_flow_id=createdFlowId,
  project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(data_intg_flow, indent=2))
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
System metadata about an asset.
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
- lock
Entity information for a DataStage lock object.
- entity
DataStage flow ID that is locked.
Requester of the lock.
Metadata information for a DataStage lock object.
- metadata
Lock status.
The name of the DataStage flow.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset. Either catalog_id or project_id is required.
A unique string that identifies an asset.
Size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}", "name": "{job_name_copy}", "origin_country": "US", "resource_key": "{project_id}/data_intg_flow/{job_name_copy}" } }
{ "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}", "name": "{job_name_copy}", "origin_country": "US", "resource_key": "{project_id}/data_intg_flow/{job_name_copy}" } }
Compile DataStage flow to generate runtime assets
Generate the runtime assets for a DataStage flow in the specified project or catalog for a specified runtime type. Either project_id or catalog_id must be specified.
POST /v3/ds_codegen/compile/{data_intg_flow_id}
ServiceCall<FlowCompileResponse> compileDatastageFlows(CompileDatastageFlowsOptions compileDatastageFlowsOptions)
compileDatastageFlows(params)
compile_datastage_flows(self,
data_intg_flow_id: str,
*,
catalog_id: str = None,
project_id: str = None,
runtime_type: str = None,
**kwargs
) -> DetailedResponse
Request
Use the CompileDatastageFlowsOptions.Builder to create a CompileDatastageFlowsOptions object that contains the parameter values for the compileDatastageFlows method.
Path Parameters
The DataStage flow ID to use.
Query Parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The type of the runtime to use, for example dspxosh or Spark. If not provided, the runtime type is queried from the pipeline flow if available; otherwise the default of dspxosh is used.
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/ds_codegen/compile/{data_intg_flow_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
CompileDatastageFlowsOptions compileDatastageFlowsOptions = new CompileDatastageFlowsOptions.Builder()
  .dataIntgFlowId(flowID)
  .projectId(projectID)
  .build();
Response<FlowCompileResponse> response = datastageService.compileDatastageFlows(compileDatastageFlowsOptions).execute();
FlowCompileResponse flowCompileResponse = response.getResult();
System.out.println(flowCompileResponse);
const params = {
  dataIntgFlowId: assetID,
  projectId: projectID,
};
const res = await datastageService.compileDatastageFlows(params);
flow_compile_response = datastage_service.compile_datastage_flows(
    data_intg_flow_id=createdFlowId,
    project_id=config['PROJECT_ID']
).get_result()
print(json.dumps(flow_compile_response, indent=2))
Response
Describes the compile response model.
Compile result for DataStage flow.
- message
Compile response type. For example ok or error.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The request object contains invalid information. The server is not able to process the request.
Unexpected error.
{ "message": { "flowName": "{job_name}", "flow_name": "{job_name}", "result": "success", "runtime_code": "{compiled_OSH}", "runtime_type": "dspxosh" }, "type": "ok" }
{ "message": { "flowName": "{job_name}", "flow_name": "{job_name}", "result": "success", "runtime_code": "{compiled_OSH}", "runtime_type": "dspxosh" }, "type": "ok" }
Delete DataStage subflows
Deletes the specified data subflows in a project or catalog (either project_id or catalog_id must be set). If the deletion of the data subflows will take some time to finish, a 202 response is returned and the deletion continues asynchronously.
DELETE /v3/data_intg_flows/subflows
ServiceCall<Void> deleteDatastageSubflows(DeleteDatastageSubflowsOptions deleteDatastageSubflowsOptions)
deleteDatastageSubflows(params)
delete_datastage_subflows(self,
id: List[str],
*,
catalog_id: str = None,
project_id: str = None,
**kwargs
) -> DetailedResponse
Request
Use the DeleteDatastageSubflowsOptions.Builder to create a DeleteDatastageSubflowsOptions object that contains the parameter values for the deleteDatastageSubflows method.
Query Parameters
The list of DataStage subflow IDs to delete.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
curl -X DELETE --location --header "Authorization: Bearer {iam_token}" "{base_url}/v3/data_intg_flows/subflows?id=[]&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
String[] ids = new String[] {subflowID, cloneSubflowID};
DeleteDatastageSubflowsOptions deleteDatastageSubflowsOptions = new DeleteDatastageSubflowsOptions.Builder()
  .id(Arrays.asList(ids))
  .projectId(projectID)
  .build();
datastageService.deleteDatastageSubflows(deleteDatastageSubflowsOptions).execute();
const params = {
  id: [assetID, cloneID],
  projectId: projectID,
};
const res = await datastageService.deleteDatastageSubflows(params);
response = datastage_service.delete_datastage_subflows(
    id=[createdSubflowId],
    project_id=config['PROJECT_ID']
)
Response
Status Code
The requested operation is in progress.
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
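Because deletion can continue asynchronously, callers may want to distinguish the 202 (in progress) and 200 (completed) cases. A minimal sketch, assuming the datastage_service client from the examples above:

# Delete subflows and report whether the deletion completed or is still running.
# A sketch; assumes datastage_service and createdSubflowId from the examples above.
detailed_response = datastage_service.delete_datastage_subflows(
    id=[createdSubflowId],
    project_id=config['PROJECT_ID']
)

if detailed_response.get_status_code() == 202:
    print('Deletion accepted; it continues asynchronously.')
else:
    print('Subflows deleted.')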
Get metadata and lock information for DataStage subflows
Lists the metadata, entity and lock information for DataStage subflows that are contained in the specified project.
Use the following parameters to filter the results:

| Field              | Match type  | Example                         |
| ------------------ | ----------- | ------------------------------- |
| entity.name        | Equals      | entity.name=MyDataStageSubFlow  |
| entity.name        | Starts with | entity.name=starts:MyData       |
| entity.description | Equals      | entity.description=movement     |
| entity.description | Starts with | entity.description=starts:data  |

To sort the results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time (that is, the most recently created data flows are returned first).

| Field | Example                                                        |
| ----- | -------------------------------------------------------------- |
| sort  | sort=+entity.name (sort by ascending name)                      |
| sort  | sort=-metadata.create_time (sort by descending creation time)   |

Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name, use: sort=-metadata.create_time,+entity.name.
GET /v3/data_intg_flows/subflows
ServiceCall<DataFlowPagedCollection> listDatastageSubflows(ListDatastageSubflowsOptions listDatastageSubflowsOptions)
listDatastageSubflows(params)
list_datastage_subflows(self,
*,
catalog_id: str = None,
project_id: str = None,
sort: str = None,
start: str = None,
limit: int = None,
entity_name: str = None,
entity_description: str = None,
**kwargs
) -> DetailedResponse
Request
Use the ListDatastageSubflowsOptions.Builder to create a ListDatastageSubflowsOptions object that contains the parameter values for the listDatastageSubflows method.
Query Parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
The page token indicating where to start paging from.
The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used.
Possible values: value ≥ 1
Example: 100
Filter results based on the specified name. Example: MyDataStageSubFlow
Filter results based on the specified description.
curl -X GET --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows/subflows?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23&limit=100"
ListDatastageSubflowsOptions listDatastageSubflowsOptions = new ListDatastageSubflowsOptions.Builder()
  .projectId(projectID)
  .limit(Long.valueOf("100"))
  .build();
Response<DataFlowPagedCollection> response = datastageService.listDatastageSubflows(listDatastageSubflowsOptions).execute();
DataFlowPagedCollection dataFlowPagedCollection = response.getResult();
System.out.println(dataFlowPagedCollection);
const params = {
  projectId: projectID,
  sort: 'name',
  limit: 100,
};
const res = await datastageService.listDatastageSubflows(params);
data_flow_paged_collection = datastage_service.list_datastage_subflows(
    project_id=config['PROJECT_ID'],
    limit=100
).get_result()
print(json.dumps(data_flow_paged_collection, indent=2))
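The filter and sort parameters described above can be combined in one request. A sketch using the Python client, with hypothetical filter values:

# List subflows whose name starts with "MyData", most recently created first.
# A sketch; the filter and sort values here are hypothetical.
data_flow_paged_collection = datastage_service.list_datastage_subflows(
    project_id=config['PROJECT_ID'],
    entity_name='starts:MyData',
    sort='-metadata.create_time,+entity.name',
    limit=50
).get_result()
print(json.dumps(data_flow_paged_collection, indent=2))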
Response
A page from a collection of DataStage flows.
- data_flows
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
- lock
Entity information for a DataStage lock object.
- entity
DataStage flow ID that is locked.
Requester of the lock.
Metadata information for a DataStage lock object.
- metadata
Lock status.
The name of the DataStage flow.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission is given), or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset.
Either catalog_id or project_id is required.
A unique string that identifies an asset.
Size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
URI of a resource.
- first
URI of a resource.
URI of a resource.
- last
URI of a resource.
The number of data flows requested to be returned.
URI of a resource.
- next
URI of a resource.
URI of a resource.
- prev
URI of a resource.
The total number of DataStage flows available.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "data_flows": [ { "entity": { "data_intg_subflow": { "mime_type": "application/json", "dataset": false } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_subflow", "create_time": "2021-04-03 15:32:55+00:00", "creator_id": "IBMid-xxxxxxxxx", "description": " ", "href": "{url}/data_intg/v3/data_intg_flows/subflows/{asset_id}?project_id={project_id}", "name": "{job_name}", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_subflow/{job_name}", "size": 5780, "usage": { "access_count": 0, "last_access_time": "2021-04-03 15:33:01.320000+00:00", "last_accessor_id": "IBMid-xxxxxxxxx", "last_modification_time": "2021-04-03 15:33:01.320000+00:00", "last_modifier_id": "IBMid-xxxxxxxxx" } } } ], "first": { "href": "{url}/data_intg/v3/data_intg_flows/subflows?project_id={project_id}&limit=2" }, "next": { "href": "{url}/data_intg/v3/data_intg_flows/subflows?project_id={project_id}&limit=2&start=g1AAAADOeJzLYWBgYMpgTmHQSklKzi9KdUhJMjTUS8rVTU7WLS3WLc4vLcnQNbLQS87JL01JzCvRy0styQHpyWMBkgwNQOr____9WWCxXCAhYmRgZKhrYKJrYBxiaGplbGRlahqVaJCFZocB8XYcgNhxHrcdhlamhlGJ-llZAD4lOMI" }, "total_count": 1 }
{ "data_flows": [ { "entity": { "data_intg_subflow": { "mime_type": "application/json", "dataset": false } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_subflow", "create_time": "2021-04-03 15:32:55+00:00", "creator_id": "IBMid-xxxxxxxxx", "description": " ", "href": "{url}/data_intg/v3/data_intg_flows/subflows/{asset_id}?project_id={project_id}", "name": "{job_name}", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_subflow/{job_name}", "size": 5780, "usage": { "access_count": 0, "last_access_time": "2021-04-03 15:33:01.320000+00:00", "last_accessor_id": "IBMid-xxxxxxxxx", "last_modification_time": "2021-04-03 15:33:01.320000+00:00", "last_modifier_id": "IBMid-xxxxxxxxx" } } } ], "first": { "href": "{url}/data_intg/v3/data_intg_flows/subflows?project_id={project_id}&limit=2" }, "next": { "href": "{url}/data_intg/v3/data_intg_flows/subflows?project_id={project_id}&limit=2&start=g1AAAADOeJzLYWBgYMpgTmHQSklKzi9KdUhJMjTUS8rVTU7WLS3WLc4vLcnQNbLQS87JL01JzCvRy0styQHpyWMBkgwNQOr____9WWCxXCAhYmRgZKhrYKJrYBxiaGplbGRlahqVaJCFZocB8XYcgNhxHrcdhlamhlGJ-llZAD4lOMI" }, "total_count": 1 }
Create DataStage subflow
Creates a DataStage subflow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.
POST /v3/data_intg_flows/subflows
ServiceCall<DataIntgFlow> createDatastageSubflows(CreateDatastageSubflowsOptions createDatastageSubflowsOptions)
createDatastageSubflows(params)
create_datastage_subflows(self,
data_intg_subflow_name: str,
*,
pipeline_flows: 'PipelineJson' = None,
catalog_id: str = None,
project_id: str = None,
asset_category: str = None,
**kwargs
) -> DetailedResponse
Request
Use the CreateDatastageSubflowsOptions.Builder to create a CreateDatastageSubflowsOptions object that contains the parameter values for the createDatastageSubflows method.
Query Parameters
The DataStage subflow name.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.
Allowable values: [system, user]
Pipeline JSON to be attached.
Pipeline flow to be stored.
parameters
The DataStage subflow name.
Pipeline flow to be stored.
- pipeline_flows
Object containing app-specific data.
The document type.
Examples:pipeline
Array of parameter set references.
Examples:[ { "name": "Test Param Set", "project_ref": "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23", "ref": "eeabf991-b69e-4f8c-b9f1-e6f2129b9a57" } ]
Document identifier, GUID recommended.
Examples:84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff
Refers to the JSON schema used to validate documents of this type.
Examples:http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json
Parameters for the flow document.
Examples:{ "local_parameters": [ { "name": "srcFile", "type": "string" }, { "name": "my_connection", "subtype": "connection", "type": "asset_id", "value": "dfe7c595-81d8-461e-8d13-a7c544f3f500" } ] }
- pipelines
Object containing app-specific data.
Examples:{ "ui_data": { "comments": [] } }
A brief description of the DataStage flow.
Examples:A test DataStage flow.
Unique identifier.
Examples:fa1b859a-d592-474d-b56c-2137e4efa4bc
Name of the pipeline.
Examples:ContainerC1
Array of pipeline nodes.
Examples:[ { "app_data": { "ui_data": { "description": "Produce a set of mock data based on the specified metadata", "image": "/data-intg/flows/graphics/palette/PxRowGenerator.svg", "label": "Row_Generator_1", "x_pos": 108, "y_pos": 162 } }, "id": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "73a5fb2c-f499-4c75-a8a7-71cea90f5105" }, "ui_data": { "label": "outPort" } }, "id": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "parameters": { "records": 10 }, "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "ui_data": { "description": "Print row column values to either the job log or to a separate output link", "image": "/data-intg/flows/graphics/palette/PxPeek.svg", "label": "Peek_1", "x_pos": 342, "y_pos": 162 } }, "id": "4195b012-d3e7-4f74-8099-e7b23ec6ebb9", "inputs": [ { "app_data": { "ui_data": { "label": "inPort" } }, "id": "c4195b34-8b4a-473f-b987-fa6d028f3968", "links": [ { "app_data": { "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_1", "label": "Link_1", "outline": true, "path": "", "position": "middle" } ] } }, "id": "73a5fb2c-f499-4c75-a8a7-71cea90f5105", "link_name": "Link_1", "node_id_ref": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "port_id_ref": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "type_attr": "PRIMARY" } ], "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "op": "PxPeek", "outputs": [ { "app_data": { "ui_data": { "label": "outPort" } }, "id": "" } ], "parameters": { "all": " ", "columns": " ", "dataset": " ", "input_count": 1, "name": "name", "nrecs": 10, "output_count": 0, "selection": " " }, "type": "execution_node" } ]
Reference to the runtime type.
Examples:pxOsh
Reference to the primary (main) pipeline flow within the document.
Examples:fa1b859a-d592-474d-b56c-2137e4efa4bc
Runtime information for pipeline flow.
Examples:[ { "id": "pxOsh", "name": "pxOsh" } ]
Array of data record schemas used in the pipeline.
Examples:[ { "fields": [ { "app_data": { "is_unicode_string": false, "odbc_type": "INTEGER", "type_code": "INT32" }, "metadata": { "decimal_precision": 6, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 6, "min_length": 0 }, "name": "ID", "nullable": false, "type": "integer" } ], "id": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ]
Pipeline flow version.
Examples:3.0
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.
Allowable values: [system, user]
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" --header "Content-Type: application/json;charset=utf-8" --data '{}' "{base_url}/v3/data_intg_flows/subflows?data_intg_subflow_name={data_intg_subflow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
PipelineJson exampleSubFlow = PipelineFlowHelper.buildPipelineFlow(subFlowJson);
CreateDatastageSubflowsOptions createDatastageSubflowsOptions = new CreateDatastageSubflowsOptions.Builder()
  .dataIntgSubflowName(subflowName)
  .pipelineFlows(exampleSubFlow)
  .projectId(projectID)
  .build();
Response<DataIntgFlow> response = datastageService.createDatastageSubflows(createDatastageSubflowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();
System.out.println(dataIntgFlow);
const params = {
  dataIntgSubflowName: dataIntgSubFlowName,
  pipelineFlows: pipelineJsonFromFile,
  projectId: projectID,
  assetCategory: 'system',
};
const res = await datastageService.createDatastageSubflows(params);
data_intg_flow = datastage_service.create_datastage_subflows(
    data_intg_subflow_name='testSubflow1',
    pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleSubflow.json'),
    project_id=config['PROJECT_ID']
).get_result()
print(json.dumps(data_intg_flow, indent=2))
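The examples above load the pipeline definition from a file. For orientation, a minimal, hypothetical pipeline_flows document assembled from the schema fields described above (all IDs are placeholders, the runtime_ref field name is an assumption, and real subflows are normally exported from the DataStage canvas rather than written by hand):

# A minimal, hypothetical pipeline_flows document; field names follow the
# schema described above, and runtime_ref is an assumption.
minimal_pipeline = {
    'doc_type': 'pipeline',
    'version': '3.0',
    'json_schema': 'http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json',
    'id': '84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff',
    'primary_pipeline': 'fa1b859a-d592-474d-b56c-2137e4efa4bc',
    'pipelines': [{
        'id': 'fa1b859a-d592-474d-b56c-2137e4efa4bc',
        'runtime_ref': 'pxOsh',
        'nodes': []
    }],
    'schemas': [],
    'runtimes': [{'id': 'pxOsh', 'name': 'pxOsh'}]
}

data_intg_flow = datastage_service.create_datastage_subflows(
    data_intg_subflow_name='minimalSubflow',
    pipeline_flows=minimal_pipeline,
    project_id=config['PROJECT_ID']
).get_result()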
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
- lock
Entity information for a DataStage lock object.
- entity
DataStage flow ID that is locked.
Requester of the lock.
Metadata information for a DataStage lock object.
- metadata
Lock status.
The name of the DataStage flow.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission is given), or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset.
Either catalog_id or project_id is required.
A unique string that identifies an asset.
Size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "entity": { "data_intg_subflow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_subflow", "href": "{url}/data_intg/v3/data_intg_flows/subflows/{asset_id}", "name": "{subflow_name}", "origin_country": "US", "resource_key": "{project_id}/data_intg_subflow/{job_name}" } }
{ "entity": { "data_intg_subflow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_subflow", "href": "{url}/data_intg/v3/data_intg_flows/subflows/{asset_id}", "name": "{subflow_name}", "origin_country": "US", "resource_key": "{project_id}/data_intg_subflow/{job_name}" } }
Get DataStage subflow
Lists the DataStage subflow that is contained in the specified project. Attachments, metadata and a limited number of attributes from the entity of each DataStage flow are returned.
GET /v3/data_intg_flows/subflows/{data_intg_subflow_id}
ServiceCall<DataIntgFlowJson> getDatastageSubflows(GetDatastageSubflowsOptions getDatastageSubflowsOptions)
getDatastageSubflows(params)
get_datastage_subflows(self,
data_intg_subflow_id: str,
*,
catalog_id: str = None,
project_id: str = None,
**kwargs
) -> DetailedResponse
Request
Use the GetDatastageSubflowsOptions.Builder to create a GetDatastageSubflowsOptions object that contains the parameter values for the getDatastageSubflows method.
Path Parameters
The DataStage subflow ID to use.
Query Parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
curl -X GET --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows/subflows/{data_intg_subflow_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
GetDatastageSubflowsOptions getDatastageSubflowsOptions = new GetDatastageSubflowsOptions.Builder()
  .dataIntgSubflowId(subflowID)
  .projectId(projectID)
  .build();
Response<DataIntgFlowJson> response = datastageService.getDatastageSubflows(getDatastageSubflowsOptions).execute();
DataIntgFlowJson dataIntgFlowJson = response.getResult();
System.out.println(dataIntgFlowJson);
const params = {
  dataIntgSubflowId: subflow_assetID,
  projectId: projectID,
};
const res = await datastageService.getDatastageSubflows(params);
data_intg_flow_json = datastage_service.get_datastage_subflows(
    data_intg_subflow_id=createdSubflowId,
    project_id=config['PROJECT_ID']
).get_result()
print(json.dumps(data_intg_flow_json, indent=2))
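The returned document bundles the stored pipeline flow with the asset's entity and metadata. A sketch of pulling out the pipeline definition, assuming the attachments field described in the response model below:

# Fetch a subflow and read its pipeline document from the attachments field
# (an assumption based on the response model described below).
data_intg_flow_json = datastage_service.get_datastage_subflows(
    data_intg_subflow_id=createdSubflowId,
    project_id=config['PROJECT_ID']
).get_result()

pipeline = data_intg_flow_json.get('attachments', {})
print('Primary pipeline:', pipeline.get('primary_pipeline'))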
Response
A pipeline JSON containing operations to apply to source(s).
Pipeline flow to be stored.
- attachments
Object containing app-specific data.
The document type.
Examples:pipeline
Array of parameter set references.
Examples:[ { "name": "Test Param Set", "project_ref": "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23", "ref": "eeabf991-b69e-4f8c-b9f1-e6f2129b9a57" } ]
Document identifier, GUID recommended.
Examples:84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff
Refers to the JSON schema used to validate documents of this type.
Examples:http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json
Parameters for the flow document.
Examples:{ "local_parameters": [ { "name": "srcFile", "type": "string" }, { "name": "my_connection", "subtype": "connection", "type": "asset_id", "value": "dfe7c595-81d8-461e-8d13-a7c544f3f500" } ] }
- pipelines
Object containing app-specific data.
Examples:{ "ui_data": { "comments": [] } }
A brief description of the DataStage flow.
Examples:A test DataStage flow.
Unique identifier.
Examples:fa1b859a-d592-474d-b56c-2137e4efa4bc
Name of the pipeline.
Examples:ContainerC1
Array of pipeline nodes.
Examples:[ { "app_data": { "ui_data": { "description": "Produce a set of mock data based on the specified metadata", "image": "/data-intg/flows/graphics/palette/PxRowGenerator.svg", "label": "Row_Generator_1", "x_pos": 108, "y_pos": 162 } }, "id": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "73a5fb2c-f499-4c75-a8a7-71cea90f5105" }, "ui_data": { "label": "outPort" } }, "id": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "parameters": { "records": 10 }, "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "ui_data": { "description": "Print row column values to either the job log or to a separate output link", "image": "/data-intg/flows/graphics/palette/PxPeek.svg", "label": "Peek_1", "x_pos": 342, "y_pos": 162 } }, "id": "4195b012-d3e7-4f74-8099-e7b23ec6ebb9", "inputs": [ { "app_data": { "ui_data": { "label": "inPort" } }, "id": "c4195b34-8b4a-473f-b987-fa6d028f3968", "links": [ { "app_data": { "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_1", "label": "Link_1", "outline": true, "path": "", "position": "middle" } ] } }, "id": "73a5fb2c-f499-4c75-a8a7-71cea90f5105", "link_name": "Link_1", "node_id_ref": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "port_id_ref": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "type_attr": "PRIMARY" } ], "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "op": "PxPeek", "outputs": [ { "app_data": { "ui_data": { "label": "outPort" } }, "id": "" } ], "parameters": { "all": " ", "columns": " ", "dataset": " ", "input_count": 1, "name": "name", "nrecs": 10, "output_count": 0, "selection": " " }, "type": "execution_node" } ]
Reference to the runtime type.
Examples:pxOsh
Reference to the primary (main) pipeline flow within the document.
Examples:fa1b859a-d592-474d-b56c-2137e4efa4bc
Runtime information for pipeline flow.
Examples:[ { "id": "pxOsh", "name": "pxOsh" } ]
Array of data record schemas used in the pipeline.
Examples:[ { "fields": [ { "app_data": { "is_unicode_string": false, "odbc_type": "INTEGER", "type_code": "INT32" }, "metadata": { "decimal_precision": 6, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 6, "min_length": 0 }, "name": "ID", "nullable": false, "type": "integer" } ], "id": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ]
Pipeline flow version.
Examples:3.0
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
- lock
Entity information for a DataStage lock object.
- entity
DataStage flow ID that is locked.
Requester of the lock.
Metadata information for a DataStage lock object.
- metadata
Lock status.
The name of the DataStage flow.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset. Either catalog_id or project_id is required.
A unique string that uniquely identifies the asset.
Size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
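The rov mode field above is a bare integer. As a quick reference, a small Python sketch (the names are illustrative, not part of the API) that maps the documented values to their visibility levels:

# Visibility levels for the ROV "mode" field, per the description above.
ROV_MODES = {
    0: "public: searchable and viewable by all",
    8: "private: searchable by all, viewable only with view permission",
    16: "hidden: searchable only by users with view permissions",
}

def describe_rov_mode(mode: int) -> str:
    return ROV_MODES.get(mode, f"unknown mode {mode}")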
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Unexpected error.
{ "metadata": { "asset_id": "7ad1e03c-5380-4bfa-8317-3604b95954c1", "asset_type": "data_intg_subflow", "catalog_id": "e35806c5-5314-4677-bb8a-416d3c628d41", "create_time": "2021-05-10T19:11:04.000Z", "creator_id": "IBMid-310000E15B", "name": "NSC2_Subflow", "origin_country": "us", "size": 5117, "project_id": "{project_id}", "resource_key": "baa8b445-9bea-4c7b-9930-233f57f8c629/data_intg_subflow/NSC2_Subflow", "description": "", "tags": [], "usage": { "last_access_time": "2021-05-10T19:11:05.474Z", "last_accessor_id": "IBMid-310000E15B", "access_count": 0 } }, "entity": { "data_intg_subflow": { "mime_type": "application/json", "dataset": false } }, "attachments": { "doc_type": "pipeline", "version": "3.0", "json_schema": "https://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json", "id": "913abf38-fac2-4c56-815b-f6f21e140fa3", "primary_pipeline": "abd53940-0ab2-4559-978e-864800ee875a", "pipelines": [ { "id": "abd53940-0ab2-4559-978e-864800ee875a", "runtime_ref": "pxOsh", "nodes": [ { "outputs": [ { "id": "5e514391-fc64-4ad9-b7ef-d164783d1484", "app_data": { "datastage": { "is_source_of_link": "aaac7610-cf58-4b7c-9431-643afe952621" }, "ui_data": { "label": "outPort" } }, "schema_ref": "a479344e-7835-42b8-a5f5-7d88bc490dfe" } ], "id": "602a1843-4cb2-4a28-93f3-f6d08e9910b6", "type": "binding", "app_data": { "datastage": { "outputs_order": "5e514391-fc64-4ad9-b7ef-d164783d1484" }, "ui_data": { "image": "", "x_pos": 48, "label": "Entry node 1", "y_pos": 48 } } }, { "outputs": [ { "id": "c539d891-84a8-481e-82fa-a6c90e588e1d", "app_data": { "datastage": { "is_source_of_link": "78c4cdcd-2f6b-474a-805f-f15d00b7cac2" }, "ui_data": { "label": "outPort" } }, "schema_ref": "d4ba6846-debd-47c5-90ec-dda663728a36" } ], "inputs": [ { "links": [ { "node_id_ref": "602a1843-4cb2-4a28-93f3-f6d08e9910b6", "type_attr": "PRIMARY", "id": "aaac7610-cf58-4b7c-9431-643afe952621", "link_name": "DSLink1E", "app_data": { "datastage": {}, "ui_data": { "decorations": [ { "path": "", "outline": true, "hotspot": false, "id": "DSLink1E", "label": "DSLink1E", "position": "middle", "class_name": "" } ] } }, "port_id_ref": "5e514391-fc64-4ad9-b7ef-d164783d1484" } ], "id": "fb38f373-7b2c-4e70-8629-c4e5e05a7cff", "app_data": { "datastage": {}, "ui_data": { "label": "inPort" } }, "parameters": { "runtime_column_propagation": 0 }, "schema_ref": "a479344e-7835-42b8-a5f5-7d88bc490dfe" } ], "id": "a2fb41ad-5088-4849-a3cc-453a6416492c", "type": "super_node", "app_data": { "datastage": { "inputs_order": "fb38f373-7b2c-4e70-8629-c4e5e05a7cff", "outputs_order": "c539d891-84a8-481e-82fa-a6c90e588e1d" }, "ui_data": { "image": "../graphics/palette/Standardize.svg", "expanded_height": 200, "is_expanded": false, "expanded_width": 300, "x_pos": 192, "label": "ContainerC3", "y_pos": 48 } }, "parameters": { "output_count": 1, "input_count": 1 }, "subflow_ref": { "url": "app_defined", "pipeline_id_ref": "default_pipeline_id" } }, { "inputs": [ { "links": [ { "node_id_ref": "a2fb41ad-5088-4849-a3cc-453a6416492c", "type_attr": "PRIMARY", "id": "78c4cdcd-2f6b-474a-805f-f15d00b7cac2", "link_name": "DSLink2E", "app_data": { "datastage": {}, "ui_data": { "decorations": [ { "path": "", "outline": true, "hotspot": false, "id": "DSLink2E", "label": "DSLink2E", "position": "middle", "class_name": "" } ] } }, "port_id_ref": "c539d891-84a8-481e-82fa-a6c90e588e1d" } ], "id": "2a52a02d-113c-4d9b-8f36-609414be8bf5", "app_data": { "datastage": {}, "ui_data": { "label": "inPort" } }, "schema_ref": 
"d4ba6846-debd-47c5-90ec-dda663728a36" } ], "id": "547dcda4-a052-432d-ae4b-06df14e8e5b3", "type": "binding", "app_data": { "datastage": { "inputs_order": "2a52a02d-113c-4d9b-8f36-609414be8bf5" }, "ui_data": { "image": "", "x_pos": 384, "label": "Exit node 1", "y_pos": 48 } } } ], "app_data": { "datastage": { "runtimecolumnpropagation": "true" }, "ui_data": { "comments": [] } } } ], "schemas": [ { "id": "a479344e-7835-42b8-a5f5-7d88bc490dfe", "fields": [ { "metadata": { "item_index": 0, "is_key": true, "min_length": 0, "decimal_scale": 0, "decimal_precision": 0, "max_length": 0, "is_signed": true }, "nullable": false, "name": "col1", "type": "integer", "app_data": { "column_reference": "col1", "odbc_type": "INTEGER", "table_def": "Basic3\\\\Basic3\\\\Basic3", "is_unicode_string": false, "type_code": "INT32" } }, { "metadata": { "item_index": 0, "is_key": false, "min_length": 5, "decimal_scale": 0, "decimal_precision": 0, "max_length": 5, "is_signed": false }, "nullable": false, "name": "col2", "type": "string", "app_data": { "column_reference": "col2", "odbc_type": "CHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "is_unicode_string": false, "type_code": "STRING" } }, { "metadata": { "item_index": 0, "is_key": false, "min_length": 0, "decimal_scale": 0, "decimal_precision": 0, "max_length": 10, "is_signed": false }, "nullable": false, "name": "col3", "type": "string", "app_data": { "column_reference": "col3", "odbc_type": "VARCHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "is_unicode_string": false, "type_code": "STRING" } } ] }, { "id": "d4ba6846-debd-47c5-90ec-dda663728a36", "fields": [ { "metadata": { "item_index": 0, "is_key": true, "min_length": 0, "decimal_scale": 0, "decimal_precision": 0, "max_length": 0, "is_signed": true }, "nullable": false, "name": "col1", "type": "integer", "app_data": { "column_reference": "col1", "odbc_type": "INTEGER", "table_def": "Basic3\\\\Basic3\\\\Basic3", "is_unicode_string": false, "type_code": "INT32" } }, { "metadata": { "item_index": 0, "is_key": false, "min_length": 5, "decimal_scale": 0, "decimal_precision": 0, "max_length": 5, "is_signed": false }, "nullable": false, "name": "col2", "type": "string", "app_data": { "column_reference": "col2", "odbc_type": "CHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "is_unicode_string": false, "type_code": "STRING" } }, { "metadata": { "item_index": 0, "is_key": false, "min_length": 0, "decimal_scale": 0, "decimal_precision": 0, "max_length": 10, "is_signed": false }, "nullable": false, "name": "col3", "type": "string", "app_data": { "column_reference": "col3", "odbc_type": "VARCHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "is_unicode_string": false, "type_code": "STRING" } } ] } ], "runtimes": [ { "name": "pxOsh", "id": "pxOsh" } ], "app_data": { "datastage": { "version": "3.0.2" } } } }
{ "metadata": { "asset_id": "7ad1e03c-5380-4bfa-8317-3604b95954c1", "asset_type": "data_intg_subflow", "catalog_id": "e35806c5-5314-4677-bb8a-416d3c628d41", "create_time": "2021-05-10T19:11:04.000Z", "creator_id": "IBMid-310000E15B", "name": "NSC2_Subflow", "origin_country": "us", "size": 5117, "project_id": "{project_id}", "resource_key": "baa8b445-9bea-4c7b-9930-233f57f8c629/data_intg_subflow/NSC2_Subflow", "description": "", "tags": [], "usage": { "last_access_time": "2021-05-10T19:11:05.474Z", "last_accessor_id": "IBMid-310000E15B", "access_count": 0 } }, "entity": { "data_intg_subflow": { "mime_type": "application/json", "dataset": false } }, "attachments": { "doc_type": "pipeline", "version": "3.0", "json_schema": "https://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json", "id": "913abf38-fac2-4c56-815b-f6f21e140fa3", "primary_pipeline": "abd53940-0ab2-4559-978e-864800ee875a", "pipelines": [ { "id": "abd53940-0ab2-4559-978e-864800ee875a", "runtime_ref": "pxOsh", "nodes": [ { "outputs": [ { "id": "5e514391-fc64-4ad9-b7ef-d164783d1484", "app_data": { "datastage": { "is_source_of_link": "aaac7610-cf58-4b7c-9431-643afe952621" }, "ui_data": { "label": "outPort" } }, "schema_ref": "a479344e-7835-42b8-a5f5-7d88bc490dfe" } ], "id": "602a1843-4cb2-4a28-93f3-f6d08e9910b6", "type": "binding", "app_data": { "datastage": { "outputs_order": "5e514391-fc64-4ad9-b7ef-d164783d1484" }, "ui_data": { "image": "", "x_pos": 48, "label": "Entry node 1", "y_pos": 48 } } }, { "outputs": [ { "id": "c539d891-84a8-481e-82fa-a6c90e588e1d", "app_data": { "datastage": { "is_source_of_link": "78c4cdcd-2f6b-474a-805f-f15d00b7cac2" }, "ui_data": { "label": "outPort" } }, "schema_ref": "d4ba6846-debd-47c5-90ec-dda663728a36" } ], "inputs": [ { "links": [ { "node_id_ref": "602a1843-4cb2-4a28-93f3-f6d08e9910b6", "type_attr": "PRIMARY", "id": "aaac7610-cf58-4b7c-9431-643afe952621", "link_name": "DSLink1E", "app_data": { "datastage": {}, "ui_data": { "decorations": [ { "path": "", "outline": true, "hotspot": false, "id": "DSLink1E", "label": "DSLink1E", "position": "middle", "class_name": "" } ] } }, "port_id_ref": "5e514391-fc64-4ad9-b7ef-d164783d1484" } ], "id": "fb38f373-7b2c-4e70-8629-c4e5e05a7cff", "app_data": { "datastage": {}, "ui_data": { "label": "inPort" } }, "parameters": { "runtime_column_propagation": 0 }, "schema_ref": "a479344e-7835-42b8-a5f5-7d88bc490dfe" } ], "id": "a2fb41ad-5088-4849-a3cc-453a6416492c", "type": "super_node", "app_data": { "datastage": { "inputs_order": "fb38f373-7b2c-4e70-8629-c4e5e05a7cff", "outputs_order": "c539d891-84a8-481e-82fa-a6c90e588e1d" }, "ui_data": { "image": "../graphics/palette/Standardize.svg", "expanded_height": 200, "is_expanded": false, "expanded_width": 300, "x_pos": 192, "label": "ContainerC3", "y_pos": 48 } }, "parameters": { "output_count": 1, "input_count": 1 }, "subflow_ref": { "url": "app_defined", "pipeline_id_ref": "default_pipeline_id" } }, { "inputs": [ { "links": [ { "node_id_ref": "a2fb41ad-5088-4849-a3cc-453a6416492c", "type_attr": "PRIMARY", "id": "78c4cdcd-2f6b-474a-805f-f15d00b7cac2", "link_name": "DSLink2E", "app_data": { "datastage": {}, "ui_data": { "decorations": [ { "path": "", "outline": true, "hotspot": false, "id": "DSLink2E", "label": "DSLink2E", "position": "middle", "class_name": "" } ] } }, "port_id_ref": "c539d891-84a8-481e-82fa-a6c90e588e1d" } ], "id": "2a52a02d-113c-4d9b-8f36-609414be8bf5", "app_data": { "datastage": {}, "ui_data": { "label": "inPort" } }, "schema_ref": 
"d4ba6846-debd-47c5-90ec-dda663728a36" } ], "id": "547dcda4-a052-432d-ae4b-06df14e8e5b3", "type": "binding", "app_data": { "datastage": { "inputs_order": "2a52a02d-113c-4d9b-8f36-609414be8bf5" }, "ui_data": { "image": "", "x_pos": 384, "label": "Exit node 1", "y_pos": 48 } } } ], "app_data": { "datastage": { "runtimecolumnpropagation": "true" }, "ui_data": { "comments": [] } } } ], "schemas": [ { "id": "a479344e-7835-42b8-a5f5-7d88bc490dfe", "fields": [ { "metadata": { "item_index": 0, "is_key": true, "min_length": 0, "decimal_scale": 0, "decimal_precision": 0, "max_length": 0, "is_signed": true }, "nullable": false, "name": "col1", "type": "integer", "app_data": { "column_reference": "col1", "odbc_type": "INTEGER", "table_def": "Basic3\\\\Basic3\\\\Basic3", "is_unicode_string": false, "type_code": "INT32" } }, { "metadata": { "item_index": 0, "is_key": false, "min_length": 5, "decimal_scale": 0, "decimal_precision": 0, "max_length": 5, "is_signed": false }, "nullable": false, "name": "col2", "type": "string", "app_data": { "column_reference": "col2", "odbc_type": "CHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "is_unicode_string": false, "type_code": "STRING" } }, { "metadata": { "item_index": 0, "is_key": false, "min_length": 0, "decimal_scale": 0, "decimal_precision": 0, "max_length": 10, "is_signed": false }, "nullable": false, "name": "col3", "type": "string", "app_data": { "column_reference": "col3", "odbc_type": "VARCHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "is_unicode_string": false, "type_code": "STRING" } } ] }, { "id": "d4ba6846-debd-47c5-90ec-dda663728a36", "fields": [ { "metadata": { "item_index": 0, "is_key": true, "min_length": 0, "decimal_scale": 0, "decimal_precision": 0, "max_length": 0, "is_signed": true }, "nullable": false, "name": "col1", "type": "integer", "app_data": { "column_reference": "col1", "odbc_type": "INTEGER", "table_def": "Basic3\\\\Basic3\\\\Basic3", "is_unicode_string": false, "type_code": "INT32" } }, { "metadata": { "item_index": 0, "is_key": false, "min_length": 5, "decimal_scale": 0, "decimal_precision": 0, "max_length": 5, "is_signed": false }, "nullable": false, "name": "col2", "type": "string", "app_data": { "column_reference": "col2", "odbc_type": "CHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "is_unicode_string": false, "type_code": "STRING" } }, { "metadata": { "item_index": 0, "is_key": false, "min_length": 0, "decimal_scale": 0, "decimal_precision": 0, "max_length": 10, "is_signed": false }, "nullable": false, "name": "col3", "type": "string", "app_data": { "column_reference": "col3", "odbc_type": "VARCHAR", "table_def": "Basic3\\\\Basic3\\\\Basic3", "is_unicode_string": false, "type_code": "STRING" } } ] } ], "runtimes": [ { "name": "pxOsh", "id": "pxOsh" } ], "app_data": { "datastage": { "version": "3.0.2" } } } }
Update DataStage subflow
Modifies a data subflow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.
PUT /v3/data_intg_flows/subflows/{data_intg_subflow_id}
ServiceCall<DataIntgFlow> updateDatastageSubflows(UpdateDatastageSubflowsOptions updateDatastageSubflowsOptions)
updateDatastageSubflows(params)
update_datastage_subflows(self,
data_intg_subflow_id: str,
data_intg_subflow_name: str,
*,
pipeline_flows: 'PipelineJson' = None,
catalog_id: str = None,
project_id: str = None,
**kwargs
) -> DetailedResponse
Request
Use the UpdateDatastageSubflowsOptions.Builder to create a UpdateDatastageSubflowsOptions object that contains the parameter values for the updateDatastageSubflows method.
Path Parameters
The DataStage subflow ID to use.
Query Parameters
The DataStage subflow name.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
Pipeline JSON to be attached.
Pipeline flow to be stored.
The updateDatastageSubflows options.
The DataStage subflow ID to use.
The DataStage subflow name.
Pipeline flow to be stored.
- pipelineFlows (pipeline_flows in the Python SDK)
Object containing app-specific data.
The document type.
Examples:pipeline
Array of parameter set references.
Examples:[ { "name": "Test Param Set", "project_ref": "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23", "ref": "eeabf991-b69e-4f8c-b9f1-e6f2129b9a57" } ]
Document identifier, GUID recommended.
Examples:84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff
Refers to the JSON schema used to validate documents of this type.
Examples:http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json
Parameters for the flow document.
Examples:{ "local_parameters": [ { "name": "srcFile", "type": "string" }, { "name": "my_connection", "subtype": "connection", "type": "asset_id", "value": "dfe7c595-81d8-461e-8d13-a7c544f3f500" } ] }
- pipelines
Object containing app-specific data.
Examples:{ "ui_data": { "comments": [] } }
A brief description of the DataStage flow.
Examples:A test DataStage flow.
Unique identifier.
Examples:fa1b859a-d592-474d-b56c-2137e4efa4bc
Name of the pipeline.
Examples:ContainerC1
Array of pipeline nodes.
Examples:[ { "app_data": { "ui_data": { "description": "Produce a set of mock data based on the specified metadata", "image": "/data-intg/flows/graphics/palette/PxRowGenerator.svg", "label": "Row_Generator_1", "x_pos": 108, "y_pos": 162 } }, "id": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "73a5fb2c-f499-4c75-a8a7-71cea90f5105" }, "ui_data": { "label": "outPort" } }, "id": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "parameters": { "records": 10 }, "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "ui_data": { "description": "Print row column values to either the job log or to a separate output link", "image": "/data-intg/flows/graphics/palette/PxPeek.svg", "label": "Peek_1", "x_pos": 342, "y_pos": 162 } }, "id": "4195b012-d3e7-4f74-8099-e7b23ec6ebb9", "inputs": [ { "app_data": { "ui_data": { "label": "inPort" } }, "id": "c4195b34-8b4a-473f-b987-fa6d028f3968", "links": [ { "app_data": { "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_1", "label": "Link_1", "outline": true, "path": "", "position": "middle" } ] } }, "id": "73a5fb2c-f499-4c75-a8a7-71cea90f5105", "link_name": "Link_1", "node_id_ref": "9fc2ec49-87ed-49c7-bdfc-abb06a46af37", "port_id_ref": "3d01fe66-e675-4e7f-ad7b-3ba9a9cff30d", "type_attr": "PRIMARY" } ], "schema_ref": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ], "op": "PxPeek", "outputs": [ { "app_data": { "ui_data": { "label": "outPort" } }, "id": "" } ], "parameters": { "all": " ", "columns": " ", "dataset": " ", "input_count": 1, "name": "name", "nrecs": 10, "output_count": 0, "selection": " " }, "type": "execution_node" } ]
Reference to the runtime type.
Examples:pxOsh
Reference to the primary (main) pipeline flow within the document.
Examples:fa1b859a-d592-474d-b56c-2137e4efa4bc
Runtime information for pipeline flow.
Examples:[ { "id": "pxOsh", "name": "pxOsh" } ]
Array of data record schemas used in the pipeline.
Examples:[ { "fields": [ { "app_data": { "is_unicode_string": false, "odbc_type": "INTEGER", "type_code": "INT32" }, "metadata": { "decimal_precision": 6, "decimal_scale": 0, "is_key": false, "is_signed": false, "item_index": 0, "max_length": 6, "min_length": 0 }, "name": "ID", "nullable": false, "type": "integer" } ], "id": "0e04b1b8-60c2-4b36-bae6-d0c7ae03dd8d" } ]
Pipeline flow version.
Examples:3.0
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required. Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
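For orientation, the example values scattered through this section can be assembled into the smallest pipeline flow document that fits the fields described above. A Python sketch (illustrative only, not a normative minimal schema; the IDs and schema URL are the documented example values):

# A minimal pipeline flow document, assembled from the example values in
# this section. Field values are illustrative.
minimal_pipeline_flows = {
    "doc_type": "pipeline",
    "version": "3.0",
    "json_schema": "http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json",
    "id": "84c2b6fb-1dd5-4114-b4ba-9bb2cb364fff",
    "primary_pipeline": "fa1b859a-d592-474d-b56c-2137e4efa4bc",
    "pipelines": [
        {
            "id": "fa1b859a-d592-474d-b56c-2137e4efa4bc",
            "runtime_ref": "pxOsh",
            "nodes": [],
            "app_data": {"ui_data": {"comments": []}},
        }
    ],
    "schemas": [],
    "runtimes": [{"id": "pxOsh", "name": "pxOsh"}],
}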
curl -X PUT --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" --header "Content-Type: application/json;charset=utf-8" --data '{}' "{base_url}/v3/data_intg_flows/subflows/{data_intg_subflow_id}?data_intg_subflow_name={data_intg_subflow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
PipelineJson exampleSubFlowUpdated = PipelineFlowHelper.buildPipelineFlow(updatedSubFlowJson);
UpdateDatastageSubflowsOptions updateDatastageSubflowsOptions = new UpdateDatastageSubflowsOptions.Builder()
  .dataIntgSubflowId(subflowID)
  .dataIntgSubflowName(subflowName)
  .pipelineFlows(exampleSubFlowUpdated)
  .projectId(projectID)
  .build();
Response<DataIntgFlow> response = datastageService.updateDatastageSubflows(updateDatastageSubflowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();
System.out.println(dataIntgFlow);
const params = {
  dataIntgSubflowId: subflow_assetID,
  dataIntgSubflowName: dataIntgSubFlowName,
  pipelineFlows: pipelineJsonFromFile,
  projectId: projectID,
  assetCategory: 'system',
};
const res = await datastageService.updateDatastageSubflows(params);
data_intg_flow = datastage_service.update_datastage_subflows(
    data_intg_subflow_id=createdSubflowId,
    data_intg_subflow_name='testSubflow1Updated',
    pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleSubflowUpdated.json'),
    project_id=config['PROJECT_ID']
).get_result()
print(json.dumps(data_intg_flow, indent=2))
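The SDK clients raise an exception when the service returns one of the error statuses listed below. A minimal sketch for the Python client, assuming the ibm-cloud-sdk-core package that the SDK is built on; pipeline_flows_dict stands for a parsed pipeline flow document:

from ibm_cloud_sdk_core import ApiException

try:
    data_intg_flow = datastage_service.update_datastage_subflows(
        data_intg_subflow_id=createdSubflowId,
        data_intg_subflow_name='testSubflow1Updated',
        pipeline_flows=pipeline_flows_dict,  # parsed pipeline flow document
        project_id=config['PROJECT_ID']
    ).get_result()
except ApiException as e:
    # e.code carries the HTTP status; e.message the service's explanation.
    print(f"update_datastage_subflows failed: {e.code} {e.message}")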
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
- lock
Entity information for a DataStage lock object.
- entity
DataStage flow ID that is locked.
Requester of the lock.
Metadata information for a DataStage lock object.
- metadata
Lock status.
The name of the DataStage flow.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset. Either catalog_id or project_id is required.
A unique string that uniquely identifies the asset.
Size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "attachments": [ { "asset_type": "data_intg_subflow", "attachment_id": "9081dd6b-0ab7-47b5-8233-c10c6e64509d", "href": "{url}/v2/assets/{asset_id}/attachments/9081dd6b-0ab7-47b5-8233-c10c6e64509d?project_id={project_id}", "mime": "application/json", "name": "data_intg_flows", "object_key": "data_intg_subflow/{project_id}{asset_id}", "object_key_is_read_only": false, "private_url": false } ], "entity": { "data_intg_subflow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_subflow", "catalog_id": "{catalog_id}", "create_time": "2021-04-08 17:14:08+00:00", "creator_id": "IBMid-xxxxxxxxxx", "description": "", "href": "{url}/data_intg/v3/data_intg_flows/subflows/{asset_id}?catalog_id={catalog_id}", "name": "{subflow_name}", "origin_country": "us", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_subflow/{subflow_name}", "size": 2712, "tags": [], "usage": { "access_count": 0, "last_access_time": "2021-04-08 17:21:33.936000+00:00", "last_accessor_id": "IBMid-xxxxxxxxxx" } } }
{ "attachments": [ { "asset_type": "data_intg_subflow", "attachment_id": "9081dd6b-0ab7-47b5-8233-c10c6e64509d", "href": "{url}/v2/assets/{asset_id}/attachments/9081dd6b-0ab7-47b5-8233-c10c6e64509d?project_id={project_id}", "mime": "application/json", "name": "data_intg_flows", "object_key": "data_intg_subflow/{project_id}{asset_id}", "object_key_is_read_only": false, "private_url": false } ], "entity": { "data_intg_subflow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_subflow", "catalog_id": "{catalog_id}", "create_time": "2021-04-08 17:14:08+00:00", "creator_id": "IBMid-xxxxxxxxxx", "description": "", "href": "{url}/data_intg/v3/data_intg_flows/subflows/{asset_id}?catalog_id={catalog_id}", "name": "{subflow_name}", "origin_country": "us", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_subflow/{subflow_name}", "size": 2712, "tags": [], "usage": { "access_count": 0, "last_access_time": "2021-04-08 17:21:33.936000+00:00", "last_accessor_id": "IBMid-xxxxxxxxxx" } } }
Clone DataStage subflow
Create a DataStage subflow in the specified project or catalog based on an existing DataStage subflow in the same project or catalog.
POST /v3/data_intg_flows/subflows/{data_intg_subflow_id}/clone
ServiceCall<DataIntgFlow> cloneDatastageSubflows(CloneDatastageSubflowsOptions cloneDatastageSubflowsOptions)
cloneDatastageSubflows(params)
clone_datastage_subflows(self,
data_intg_subflow_id: str,
*,
catalog_id: str = None,
project_id: str = None,
**kwargs
) -> DetailedResponse
Request
Use the CloneDatastageSubflowsOptions.Builder to create a CloneDatastageSubflowsOptions object that contains the parameter values for the cloneDatastageSubflows method.
Path Parameters
The DataStage subflow ID to use.
Query Parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required. Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The cloneDatastageSubflows options.
The DataStage subflow ID to use.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required. Examples: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows/subflows/{data_intg_subflow_id}/clone?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
CloneDatastageSubflowsOptions cloneDatastageSubflowsOptions = new CloneDatastageSubflowsOptions.Builder()
  .dataIntgSubflowId(subflowID)
  .projectId(projectID)
  .build();
Response<DataIntgFlow> response = datastageService.cloneDatastageSubflows(cloneDatastageSubflowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();
System.out.println(dataIntgFlow);
const params = {
  dataIntgSubflowId: subflow_assetID,
  projectId: projectID,
};
const res = await datastageService.cloneDatastageSubflows(params);
data_intg_flow = datastage_service.clone_datastage_subflows(
    data_intg_subflow_id=createdSubflowId,
    project_id=config['PROJECT_ID']
).get_result()
print(json.dumps(data_intg_flow, indent=2))
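The clone call returns the asset document of the newly created copy, so the new subflow's ID can be captured for follow-up calls. A minimal Python sketch continuing the example above:

# Clone the subflow and keep the new asset's ID for subsequent calls.
cloned = datastage_service.clone_datastage_subflows(
    data_intg_subflow_id=createdSubflowId,
    project_id=config['PROJECT_ID']
).get_result()
new_subflow_id = cloned['metadata']['asset_id']
print('clone created:', cloned['metadata']['name'], new_subflow_id)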
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for datastage flow.
The underlying DataStage flow definition.
- entity
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
- lock
Entity information for a DataStage lock object.
- entity
DataStage flow ID that is locked.
Requester of the lock.
Metadata information for a DataStage lock object.
- metadata
Lock status.
The name of the DataStage flow.
The rules of visibility for an asset.
- rov
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
System metadata about an asset.
- metadata
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
catalog_id
orproject_id
is required.The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The ID of the project which contains the asset.
catalog_id
orproject_id
is required.This is a unique string that uniquely identifies an asset.
size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
- usage
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
Create V3 data flows from the attached job export file
Creates data flows from the attached job export file. This is an asynchronous call: the API returns almost immediately, which does not necessarily mean that the import has completed, only that the import request has been accepted. The status field of the import response object reports whether the import is "in_progress", "completed", or "failed". The job export file for an import request may contain one or more data flows. Unless the on_failure option is set to "stop", a completed import request may contain not only successfully imported data flows but also data flows that could not be imported.
POST /v3/migration/isx_imports
ServiceCall<ImportResponse> createMigration(CreateMigrationOptions createMigrationOptions)
createMigration(params)
create_migration(self,
body: BinaryIO,
*,
catalog_id: str = None,
project_id: str = None,
on_failure: str = None,
conflict_resolution: str = None,
attachment_type: str = None,
file_name: str = None,
**kwargs
) -> DetailedResponse
Request
Use the CreateMigrationOptions.Builder to create a CreateMigrationOptions object that contains the parameter values for the createMigration method.
Query Parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
Action when the first import failure occurs. The default action, "continue", continues importing the remaining data flows; the "stop" action stops the import operation upon the first error.
Allowable values: [continue, stop]
Example: continue
Resolution to use when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the conflicting data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used; if the name is not currently used, the imported flow is created with this name; if the new name is already used, the existing flow is removed before the imported flow is created. With the "rename_replace" option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used; if the new job name is not currently used, the job is created with this name; if the new job name is also already used, the job is not created and an error is raised.
Allowable values: [skip, rename, replace, rename_replace]
Example: rename
Type of attachment. The default attachment type is "isx".
Allowable values: [isx]
Example: isx
Name of the input file, if it exists.
Example: myFlows.isx
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" --header "Content-Type: application/octet-stream" --data 'createMockStream(This is a mock file.)' "{base_url}/v3/migration/isx_imports?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23&on_failure=continue&conflict_resolution=rename&attachment_type=isx&file_name=myFlows.isx"
CreateMigrationOptions createMigrationOptions = new CreateMigrationOptions.Builder()
  .body(rowGenIsx)
  .projectId(projectID)
  .onFailure("continue")
  .conflictResolution("rename")
  .attachmentType("isx")
  .fileName("rowgen_peek.isx")
  .build();

Response<ImportResponse> response = datastageService.createMigration(createMigrationOptions).execute();
ImportResponse importResponse = response.getResult();
System.out.println(importResponse);
const params = {
  body: Buffer.from(fs.readFileSync('testInput/rowgen_peek.isx')),
  projectId: projectID,
  onFailure: 'continue',
  conflictResolution: 'rename',
  attachmentType: 'isx',
  fileName: 'rowgen_peek.isx',
};

const res = await datastageService.createMigration(params);
import_response = datastage_service.create_migration(
    body=open(Path(__file__).parent / 'inputFiles/rowgen_peek.isx', "rb").read(),
    project_id=config['PROJECT_ID'],
    on_failure='continue',
    conflict_resolution='rename',
    attachment_type='isx',
    file_name='rowgen_peek.isx'
).get_result()

print(json.dumps(import_response, indent=2))
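Because the import is asynchronous, a client typically polls the status endpoint until the import reaches a terminal state. The following is a minimal polling sketch, assuming the Python client, config, and import_response from the example above:

import time

# Poll the import status until it reaches a terminal state.
# 'datastage_service', 'config', and 'import_response' are assumed to be
# set up as in the Python example above.
import_id = import_response['metadata']['id']

while True:
    entity = datastage_service.get_migration(
        import_id=import_id,
        project_id=config['PROJECT_ID']
    ).get_result()['entity']
    if entity['status'] in ('completed', 'failed', 'cancelled'):
        break
    time.sleep(5)  # 'remaining_time' in the response can guide the interval

print('Import finished with status:', entity['status'])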
Response
Response object of an import request.
The import response entity.
- entity
Account ID of the user who cancelled the import request. This field is required only when the status field is "cancelled".
Example: user1@company1.com
The conflict_resolution option used for the import.
The timestamp when the import operation completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
All data flows imported or to be imported. Each ImportFlow object contains status for the individual data flow import operation.
- import_data_flows
Conflict resolution status.
Possible values: [flow_replacement_succeeded, flow_replacement_failed, import_flow_renamed, import_flow_skipped, connection_replacement_succeeded, connection_replacement_failed, connection_renamed, connection_skipped, parameter_set_replacement_succeeded, parameter_set_replacement_failed, parameter_set_renamed, parameter_set_skipped, table_definition_replacement_succeeded, table_definition_replacement_failed, table_definition_renamed, table_definition_skipped, sequence_job_replacement_succeeded, sequence_job_replacement_failed, sequence_job_renamed, sequence_job_skipped, subflow_replacement_succeeded, subflow_replacement_failed, subflow_renamed, subflow_skipped]
Example: import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array reports all the problems preventing the data flow from being successfully imported.
- errors
Additional error text.
Error object name.
Error stage type.
Error type.
Possible values: [unsupported_stage_type, unsupported_feature, empty_json, isx_conversion_error, model_conversion_error, invalid_input_type, invalid_json_format, json_conversion_error, flow_deletion_error, flow_creation_error, flow_response_parsing_error, auth_token_error, flow_compilation_error, empty_stage_list, empty_stage_node, missing_stage_type_class_name, dummy_stage, missing_stage_type, missing_repos_id, stage_conversion_error, unimplemented_stage_type, job_creation_error, job_run_error, flow_search_error, unsupported_job_type, internal_error, connection_creation_error, flow_rename_error, duplicate_job_error, parameter_set_creation_error, distributed_lock_error, duplicate_object_error, unbound_object_reference, table_def_creation_error, connection_creation_api_error, connection_patch_api_error, connection_deletion_api_error, sequence_job_creation_error, unsupported_stage_type_in_subflow]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Example: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Unique id of the job. This field is returned only if the corresponding job object has been successfully created.
Example: ccfaaafd-810d-4f0e-b0a9-228c328a0136
Job name. This field is returned only if the corresponding job object has been successfully created.
Example: Aggregator12_DataStage_1
(Deprecated) Original type of the job or data flow in the import file.
Possible values: [px_job, server_job, connection, table_def]
Example: px_job
Name of the imported data flow.
Example: cancel-reservation-job
Name of the data flow to be imported.
Example: cancel-reservation-job
The ID of an existing asset this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Example: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Data import status.
Possible values: [completed, in_progress, failed, skipped, deprecated, unsupported, flow_conversion_failed, flow_creation_failed, flow_compilation_failed, job_creation_failed, job_run_failed, connection_conversion_failed, connection_creation_failed, parameter_set_conversion_failed, parameter_set_creation_failed, table_definition_conversion_failed, table_definition_creation_failed]
Example: completed
Type of the job or data connection in the import file.
Possible values: [px_job, server_job, connection, table_def, parameter_set, subflow, sequence_job]
Example: px_job
The warnings array reports all the warnings in the data flow import operation.
- warnings
Additional warning text.
Warning object name.
Warning type.
Possible values: [unreleased_stage_type, unreleased_feature, credentials_file_warning, transformer_trigger_unsupported, transformer_buildtab_unsupported, unsupported_secure_gateway, placeholder_connection_parameters, description_truncated, empty_stage_list, missing_parameter_set]
Name of the import request.
Example: seat-reservation-jobs
Import event notifications.
- notifications
The timestamp when the import notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification id.
Import status associated with the notification.
The on_failure option used for the import.
Estimate of remaining time in seconds.
The timestamp when the import operation started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Import status.
Possible values: [in_progress, cancelled, queued, started, completed]
Example: in_progress
Import statistics. total = imported (including renamed and replaced) + skipped + failed + deprecated + unsupported + pending.
- tally
Total number of data connections.
Total number of deprecated resources in the import file.
Total number of data flows that cannot be imported due to import errors.
Total number of data flows successfully imported.
Total number of parameter sets.
Total number of data flows that have not been processed.
Total number of data flows successfully imported and renamed due to a name conflict. The renamed count is included in the imported count.
Total number of existing data flows replaced by imported flows. The replaced count is included in the imported count.
Total number of sequence jobs.
Total number of data flows skipped due to name conflicts. The skipped count is not included in the failed count or imported count.
Total number of parallel job subflows.
Total number of table definitions.
Total number of data flows to be imported.
Total number of unsupported resources in the import file.
The import response metadata.
- metadata
Catalog id.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Example: user1@company1.com
The unique import id.
The timestamp when the import status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Import file name.
Project id.
Project name.
The URL which can be used to get the status of the import request right after it is submitted.
Status Code
The requested import operation has been accepted. However, the import operation may or may not have completed. The status field in the import response object describes the current status of the import. The response "Location" header provides a convenient URL for retrieving the status with a GET request.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
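The "Location" header can be read directly from the SDK response object. A short sketch, assuming the Python client from the examples above; get_headers() comes from the IBM Cloud SDK core's DetailedResponse, and the file name here is illustrative:

# Submit an import and read the Location header that points at the status URL.
detailed_response = datastage_service.create_migration(
    body=open('myFlows.isx', 'rb').read(),  # illustrative file name
    project_id=config['PROJECT_ID']
)
status_url = detailed_response.get_headers().get('Location')
print('Poll this URL for import status:', status_url)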
{ "entity": { "conflict_resolution": "rename", "end_time": "2021-04-08 17:28:46.819000+00:00", "import_data_flows": [ { "conflict_resolution_status": "import_flow_renamed", "end_time": "2021-04-08 17:28:46.811000+00:00", "id": "3fe0af3b-20a8-4bbe-86a8-6675c0b0d300", "job_id": "a7c0110f-920c-4a3a-aa3b-2b55ee4bdad7", "job_name": "rowgen_peek_Import_1617902925763.DataStage job", "job_type": "px_job", "name": "rowgen_peek_Import_1617902921560", "original_name": "rowgen_peek", "status": "completed", "type": "px_job" } ], "name": "Import_1617902920158", "notifications": [ { "created_at": "2021-04-08 17:28:46.819000+00:00", "id": "752eaf78-8689-41e4-8f7d-9b2286682f6f", "status": "completed" }, { "created_at": "2021-04-08 17:28:40.160000+00:00", "id": "38b13187-e18f-4905-a357-33d9981b06aa", "status": "queued" } ], "on_failure": "continue", "remaining_time": 0, "start_time": "2021-04-08 17:28:41.082000+00:00", "status": "completed", "tally": { "connections_total": 0, "deprecated": 0, "failed": 0, "imported": 1, "parameter_sets_total": 0, "pending": 0, "px_containers_total": 0, "renamed": 1, "replaced": 0, "sequence_jobs_total": 0, "skipped": 0, "table_definitions_total": 0, "total": 1, "unsupported": 0 } }, "metadata": { "created_at": "2021-04-08 17:28:40.158000+00:00", "created_by": "{ibm_id}", "id": "395d1b77-60eb-4f8f-81bd-643c20f99bfb", "modified_at": "2021-04-08 17:28:46.819000+00:00", "name": "rowgen_peek.isx", "project_id": "{project_id}", "project_name": "dstage", "url": "{url}/data_intg/v3/migration/isx_imports/395d1b77-60eb-4f8f-81bd-643c20f99bfb?project_id={project_id}" } }
{ "entity": { "conflict_resolution": "rename", "end_time": "2021-04-08 17:28:46.819000+00:00", "import_data_flows": [ { "conflict_resolution_status": "import_flow_renamed", "end_time": "2021-04-08 17:28:46.811000+00:00", "id": "3fe0af3b-20a8-4bbe-86a8-6675c0b0d300", "job_id": "a7c0110f-920c-4a3a-aa3b-2b55ee4bdad7", "job_name": "rowgen_peek_Import_1617902925763.DataStage job", "job_type": "px_job", "name": "rowgen_peek_Import_1617902921560", "original_name": "rowgen_peek", "status": "completed", "type": "px_job" } ], "name": "Import_1617902920158", "notifications": [ { "created_at": "2021-04-08 17:28:46.819000+00:00", "id": "752eaf78-8689-41e4-8f7d-9b2286682f6f", "status": "completed" }, { "created_at": "2021-04-08 17:28:40.160000+00:00", "id": "38b13187-e18f-4905-a357-33d9981b06aa", "status": "queued" } ], "on_failure": "continue", "remaining_time": 0, "start_time": "2021-04-08 17:28:41.082000+00:00", "status": "completed", "tally": { "connections_total": 0, "deprecated": 0, "failed": 0, "imported": 1, "parameter_sets_total": 0, "pending": 0, "px_containers_total": 0, "renamed": 1, "replaced": 0, "sequence_jobs_total": 0, "skipped": 0, "table_definitions_total": 0, "total": 1, "unsupported": 0 } }, "metadata": { "created_at": "2021-04-08 17:28:40.158000+00:00", "created_by": "{ibm_id}", "id": "395d1b77-60eb-4f8f-81bd-643c20f99bfb", "modified_at": "2021-04-08 17:28:46.819000+00:00", "name": "rowgen_peek.isx", "project_id": "{project_id}", "project_name": "dstage", "url": "{url}/data_intg/v3/migration/isx_imports/395d1b77-60eb-4f8f-81bd-643c20f99bfb?project_id={project_id}" } }
Cancel a previous import request
Cancel a previous import request. Use GET /v3/migration/isx_imports/{import_id} to obtain the current status of the import, including whether it has been cancelled.
DELETE /v3/migration/isx_imports/{import_id}
ServiceCall<Void> deleteMigration(DeleteMigrationOptions deleteMigrationOptions)
deleteMigration(params)
delete_migration(self,
import_id: str,
*,
catalog_id: str = None,
project_id: str = None,
**kwargs
) -> DetailedResponse
Request
Use the DeleteMigrationOptions.Builder to create a DeleteMigrationOptions object that contains the parameter values for the deleteMigration method.
Path Parameters
Unique ID of the import request.
Example: cc6dbbfd-810d-4f0e-b0a9-228c328aff29
Query Parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
curl -X DELETE --location --header "Authorization: Bearer {iam_token}" "{base_url}/v3/migration/isx_imports/{import_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
DeleteMigrationOptions deleteMigrationOptions = new DeleteMigrationOptions.Builder()
  .importId(importID)
  .projectId(projectID)
  .build();

datastageService.deleteMigration(deleteMigrationOptions).execute();
const params = {
  importId: importID,
  projectId: projectID,
};

const res = await datastageService.deleteMigration(params);
response = datastage_service.delete_migration(
    import_id=importId,
    project_id=config['PROJECT_ID']
)
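Because cancellation is processed asynchronously, the status may not change immediately. A minimal sketch that requests cancellation and then checks the current status, assuming the Python client and importId from the example above:

# Request cancellation, then confirm the current status via the status API.
datastage_service.delete_migration(
    import_id=importId,
    project_id=config['PROJECT_ID']
)

entity = datastage_service.get_migration(
    import_id=importId,
    project_id=config['PROJECT_ID']
).get_result()['entity']

# 'cancelled_by' is populated only once the status becomes 'cancelled'.
print('Current status:', entity['status'], entity.get('cancelled_by', ''))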
Response
Status Code
The import cancellation request was accepted.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Status of the import request cannot be found. This can occur if the given import_id is not valid, or if the import completed long ago and its status information is no longer available.
An error occurred. See response for more information.
No Sample Response
Get the status of a previous import request
Gets the status of an import request. The status field in the response object indicates if the given import is completed, in progress, or failed. Detailed status information about each imported data flow is also contained in the response object.
GET /v3/migration/isx_imports/{import_id}
ServiceCall<ImportResponse> getMigration(GetMigrationOptions getMigrationOptions)
getMigration(params)
get_migration(self,
import_id: str,
*,
catalog_id: str = None,
project_id: str = None,
**kwargs
) -> DetailedResponse
Request
Use the GetMigrationOptions.Builder to create a GetMigrationOptions object that contains the parameter values for the getMigration method.
Path Parameters
Unique ID of the import request.
Query Parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
curl -X GET --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/migration/isx_imports/{import_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
GetMigrationOptions getMigrationOptions = new GetMigrationOptions.Builder()
  .importId(importID)
  .projectId(projectID)
  .build();

Response<ImportResponse> response = datastageService.getMigration(getMigrationOptions).execute();
ImportResponse importResponse = response.getResult();
System.out.println(importResponse);
const params = {
  importId: importID,
  projectId: projectID,
};

const res = await datastageService.getMigration(params);
import_response = datastage_service.get_migration(
    import_id=importId,
    project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(import_response, indent=2))
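The response carries per-flow detail in import_data_flows, including any errors and warnings. A minimal sketch that walks those results, assuming the Python client and import_response from the example above; the 'name' and 'type' keys mirror the field descriptions in the schema below but are assumptions about the exact JSON key names:

# Report the outcome of each data flow in the import, with errors and warnings.
for flow in import_response['entity'].get('import_data_flows', []):
    print(flow.get('name'), '->', flow.get('status'))
    for error in flow.get('errors', []):
        print('  error:', error.get('type'), error.get('name'))
    for warning in flow.get('warnings', []):
        print('  warning:', warning.get('type'), warning.get('name'))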
Response
Response object of an import request.
The import response entity.
- entity
Account ID of the user who cancelled the import request. This field is required only when the status field is "cancelled".
Example: user1@company1.com
The conflict_resolution option used for the import.
The timestamp when the import operation completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
All data flows imported or to be imported. Each ImportFlow object contains status for the individual data flow import operation.
- import_data_flows
Conflict resolution status.
Possible values: [flow_replacement_succeeded, flow_replacement_failed, import_flow_renamed, import_flow_skipped, connection_replacement_succeeded, connection_replacement_failed, connection_renamed, connection_skipped, parameter_set_replacement_succeeded, parameter_set_replacement_failed, parameter_set_renamed, parameter_set_skipped, table_definition_replacement_succeeded, table_definition_replacement_failed, table_definition_renamed, table_definition_skipped, sequence_job_replacement_succeeded, sequence_job_replacement_failed, sequence_job_renamed, sequence_job_skipped, subflow_replacement_succeeded, subflow_replacement_failed, subflow_renamed, subflow_skipped]
Example: import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array reports all the problems preventing the data flow from being successfully imported.
- errors
Additional error text.
Error object name.
Error stage type.
Error type.
Possible values: [unsupported_stage_type, unsupported_feature, empty_json, isx_conversion_error, model_conversion_error, invalid_input_type, invalid_json_format, json_conversion_error, flow_deletion_error, flow_creation_error, flow_response_parsing_error, auth_token_error, flow_compilation_error, empty_stage_list, empty_stage_node, missing_stage_type_class_name, dummy_stage, missing_stage_type, missing_repos_id, stage_conversion_error, unimplemented_stage_type, job_creation_error, job_run_error, flow_search_error, unsupported_job_type, internal_error, connection_creation_error, flow_rename_error, duplicate_job_error, parameter_set_creation_error, distributed_lock_error, duplicate_object_error, unbound_object_reference, table_def_creation_error, connection_creation_api_error, connection_patch_api_error, connection_deletion_api_error, sequence_job_creation_error, unsupported_stage_type_in_subflow]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Example: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Unique id of the job. This field is returned only if the corresponding job object has been successfully created.
Example: ccfaaafd-810d-4f0e-b0a9-228c328a0136
Job name. This field is returned only if the corresponding job object has been successfully created.
Example: Aggregator12_DataStage_1
(Deprecated) Original type of the job or data flow in the import file.
Possible values: [px_job, server_job, connection, table_def]
Example: px_job
Name of the imported data flow.
Example: cancel-reservation-job
Name of the data flow to be imported.
Example: cancel-reservation-job
The ID of an existing asset this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Example: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Data import status.
Possible values: [completed, in_progress, failed, skipped, deprecated, unsupported, flow_conversion_failed, flow_creation_failed, flow_compilation_failed, job_creation_failed, job_run_failed, connection_conversion_failed, connection_creation_failed, parameter_set_conversion_failed, parameter_set_creation_failed, table_definition_conversion_failed, table_definition_creation_failed]
Example: completed
Type of the job or data connection in the import file.
Possible values: [px_job, server_job, connection, table_def, parameter_set, subflow, sequence_job]
Example: px_job
The warnings array reports all the warnings in the data flow import operation.
- warnings
Additional warning text.
Warning object name.
Warning type.
Possible values: [unreleased_stage_type, unreleased_feature, credentials_file_warning, transformer_trigger_unsupported, transformer_buildtab_unsupported, unsupported_secure_gateway, placeholder_connection_parameters, description_truncated, empty_stage_list, missing_parameter_set]
Name of the import request.
Example: seat-reservation-jobs
Import event notifications.
- notifications
The timestamp when the import notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification id.
Import status associated with the notification.
The on_failure option used for the import.
Estimate of remaining time in seconds.
The timestamp when the import operation started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Import status.
Possible values: [in_progress, cancelled, queued, started, completed]
Example: in_progress
Import statistics. total = imported (including renamed and replaced) + skipped + failed + deprecated + unsupported + pending.
- tally
Total number of data connections.
Total number of deprecated resources in the import file.
Total number of data flows that cannot be imported due to import errors.
Total number of data flows successfully imported.
Total number of parameter sets.
Total number of data flows that have not been processed.
Total number of data flows successfully imported and renamed due to a name conflict. The renamed count is included in the imported count.
Total number of existing data flows replaced by imported flows. The replaced count is included in the imported count.
Total number of sequence jobs.
Total number of data flows skipped due to name conflicts. The skipped count is not included in the failed count or imported count.
Total number of parallel job subflows.
Total number of table definitions.
Total number of data flows to be imported.
Total number of unsupported resources in the import file.
The import response metadata.
- metadata
Catalog id.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Example: user1@company1.com
The unique import id.
The timestamp when the import status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Import file name.
Project id.
Project name.
The URL which can be used to get the status of the import request right after it is submitted.
Response object of an import request.
Import the response entity.
- entity
Account ID of the user who cancelled the import request. This field is required only when the status field is "cancelled".
Examples:user1@company1.com
The conflict_resolution option used for the import.
The timestamp when the import opearton completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
All data flows imported or to be imported. Each ImportFlow object contains status for the individual data flow import operation.
- import_data_flows
conflict resolution status.
Possible values: [
flow_replacement_succeeded
,flow_replacement_failed
,import_flow_renamed
,import_flow_skipped
,connection_replacement_succeeded
,connection_replacement_failed
,connection_renamed
,connection_skipped
,parameter_set_replacement_succeeded
,parameter_set_replacement_failed
,parameter_set_renamed
,parameter_set_skipped
,table_definition_replacement_succeeded
,table_definition_replacement_failed
,table_definition_renamed
,table_definition_skipped
,sequence_job_replacement_succeeded
,sequence_job_replacement_failed
,sequence_job_renamed
,sequence_job_skipped
,subflow_replacement_succeeded
,subflow_replacement_failed
,subflow_renamed
,subflow_skipped
]Examples:import_flow_renamed
The timestamp when the flow import is completed. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
The errors array report all the problems preventing the data flow from being successfully imported.
- errors
additional error text.
error object name.
error stage type.
error type.
Possible values: [
unsupported_stage_type
,unsupported_feature
,empty_json
,isx_conversion_error
,model_conversion_error
,invalid_input_type
,invalid_json_format
,json_conversion_error
,flow_deletion_error
,flow_creation_error
,flow_response_parsing_error
,auth_token_error
,flow_compilation_error
,empty_stage_list
,empty_stage_node
,missing_stage_type_class_name
,dummy_stage
,missing_stage_type
,missing_repos_id
,stage_conversion_error
,unimplemented_stage_type
,job_creation_error
,job_run_error
,flow_search_error
,unsupported_job_type
,internal_error
,connection_creation_error
,flow_rename_error
,duplicate_job_error
,parameter_set_creation_error
,distributed_lock_error
,duplicate_object_error
,unbound_object_reference
,table_def_creation_error
,connection_creation_api_error
,connection_patch_api_error
,connection_deletion_api_error
,sequence_job_creation_error
,unsupported_stage_type_in_subflow
]
Unique id of the data flow. This field is returned only if the underlying data flow has been successfully imported.
Examples:ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Unique id of the job. This field is returned only if the corresponding job object has been successfully created.
Examples:ccfaaafd-810d-4f0e-b0a9-228c328a0136
Job name. This field is returned only if the corresponding job object has been successfully created.
Examples:Aggregator12_DataStage_1
(Deprecated) Original type of the job or data flow in the import file.
Possible values: [px_job, server_job, connection, table_def]
Examples: px_job
Name of the imported data flow.
Examples: cancel-reservation-job
Name of the data flow to be imported.
Examples: cancel-reservation-job
The ID of an existing asset that this object refers to. If ref_asset_id is specified, the id field will be the same as ref_asset_id for backward compatibility.
Examples: ccfdbbfd-810d-4f0e-b0a9-228c328a0136
Data import status.
Possible values: [completed, in_progress, failed, skipped, deprecated, unsupported, flow_conversion_failed, flow_creation_failed, flow_compilation_failed, job_creation_failed, job_run_failed, connection_conversion_failed, connection_creation_failed, parameter_set_conversion_failed, parameter_set_creation_failed, table_definition_conversion_failed, table_definition_creation_failed]
Examples: completed
Type of the job or data connection in the import file.
Possible values: [px_job, server_job, connection, table_def, parameter_set, subflow, sequence_job]
Examples: px_job
The warnings array reports all the warnings from the data flow import operation.
- warnings
Additional warning text.
Warning object name.
Warning type.
Possible values: [unreleased_stage_type, unreleased_feature, credentials_file_warning, transformer_trigger_unsupported, transformer_buildtab_unsupported, unsupported_secure_gateway, placeholder_connection_parameters, description_truncated, empty_stage_list, missing_parameter_set]
Name of the import request.
Examples: seat-reservation-jobs
Import event notifications.
- notifications
The timestamp when the import notification was created. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Notification ID.
Import status associated with the notification.
The on_failure option used for the import.
Estimated remaining time, in seconds.
The timestamp when the import operation started. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Import status.
Possible values: [in_progress, cancelled, queued, started, completed]
Examples: in_progress
Import statistics: total = imported (including renamed and replaced) + skipped + failed + deprecated + unsupported + pending. A sketch that checks this invariant follows the field list below.
- tally
Total number of data connections.
Total number of deprecated resources in the import file.
Total number of data flows that cannot be imported due to import errors.
Total number of data flows successfully imported.
Total number of parameter sets.
Total number of data flows that have not been processed.
Total number of data flows successfully imported and renamed due to a name conflict. The renamed count is included in the imported count.
Total number of existing data flows replaced by imported flows. The replaced count is included in the imported count.
Total number of sequence jobs.
Total number of data flows skipped due to name conflicts. The skipped count is not included in the failed count or imported count.
Total number of parallel job subflows.
Total number of table definitions.
Total number of data flows to be imported.
Total number of unsupported resources in the import file.
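A minimal sketch that verifies the tally invariant stated above; the response parameter is assumed to be the JSON response body already decoded into a Python dict:

def check_tally(response):
    # total = imported (incl. renamed and replaced) + skipped + failed
    #         + deprecated + unsupported + pending
    tally = response["entity"]["tally"]
    accounted = (tally["imported"] + tally["skipped"] + tally["failed"]
                 + tally["deprecated"] + tally["unsupported"] + tally["pending"])
    return tally["total"] == accounted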
Import response metadata.
- metadata
Catalog ID.
The timestamp when the import API was submitted. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Account ID of the user who submitted the import request.
Examples: user1@company1.com
The unique import ID.
The timestamp when the import status was last updated. In format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339.
Import file name.
Project ID.
Project name.
The URL that can be used to get the status of the import request right after it is submitted.
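To make the schema concrete, here is a minimal sketch that walks a decoded import-status response and prints the per-flow outcome. The top-level field names match the schema above; the exact keys inside each errors and warnings entry (name, type) are inferred from the field descriptions and should be verified against a live response:

def summarize_import(response):
    # Print one line per data flow, plus any errors or warnings it reported.
    for flow in response["entity"].get("import_data_flows", []):
        line = "{}: {}".format(flow.get("name"), flow.get("status"))
        if flow.get("conflict_resolution_status"):
            line += " ({})".format(flow["conflict_resolution_status"])
        print(line)
        for err in flow.get("errors", []):
            print("  error: {} on {}".format(err.get("type"), err.get("name")))
        for warn in flow.get("warnings", []):
            print("  warning: {} on {}".format(warn.get("type"), warn.get("name")))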
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The status of the import request cannot be found. This can occur if the given import_id is not valid, or if the import completed long ago and its status information is no longer available.
An error occurred. See response for more information.
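Putting the status codes together, the following is a minimal polling sketch that calls the raw REST endpoint with the requests library. The path comes from the url field in the example response below; the base URL, token, and IDs are placeholders you must supply:

import time
import requests

BASE_URL = "https://api.dataplatform.cloud.ibm.com/data_intg"  # placeholder
TOKEN = "<IAM_BEARER_TOKEN>"                                   # placeholder
IMPORT_ID = "<IMPORT_ID>"                                      # placeholder
PROJECT_ID = "<PROJECT_ID>"                                    # placeholder

def poll_import_status():
    url = "{}/v3/migration/isx_imports/{}".format(BASE_URL, IMPORT_ID)
    while True:
        resp = requests.get(url,
                            params={"project_id": PROJECT_ID},
                            headers={"Authorization": "Bearer " + TOKEN})
        if resp.status_code == 404:
            raise RuntimeError("Import status not found; check import_id.")
        resp.raise_for_status()  # raises on 401, 403, and other errors
        body = resp.json()
        # "completed" and "cancelled" are the terminal import states.
        if body["entity"]["status"] in ("completed", "cancelled"):
            return body
        time.sleep(5)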
{ "entity": { "conflict_resolution": "rename", "end_time": "2021-04-08 17:28:46.819000+00:00", "import_data_flows": [ { "conflict_resolution_status": "import_flow_renamed", "end_time": "2021-04-08 17:28:46.811000+00:00", "id": "3fe0af3b-20a8-4bbe-86a8-6675c0b0d300", "job_id": "a7c0110f-920c-4a3a-aa3b-2b55ee4bdad7", "job_name": "rowgen_peek_Import_1617902925763.DataStage job", "job_type": "px_job", "name": "rowgen_peek_Import_1617902921560", "original_name": "rowgen_peek", "status": "completed", "type": "px_job" } ], "name": "Import_1617902920158", "notifications": [ { "created_at": "2021-04-08 17:28:46.819000+00:00", "id": "752eaf78-8689-41e4-8f7d-9b2286682f6f", "status": "completed" }, { "created_at": "2021-04-08 17:28:40.160000+00:00", "id": "38b13187-e18f-4905-a357-33d9981b06aa", "status": "queued" } ], "on_failure": "continue", "remaining_time": 0, "start_time": "2021-04-08 17:28:41.082000+00:00", "status": "completed", "tally": { "connections_total": 0, "deprecated": 0, "failed": 0, "imported": 1, "parameter_sets_total": 0, "pending": 0, "px_containers_total": 0, "renamed": 1, "replaced": 0, "sequence_jobs_total": 0, "skipped": 0, "table_definitions_total": 0, "total": 1, "unsupported": 0 } }, "metadata": { "created_at": "2021-04-08 17:28:40.158000+00:00", "created_by": "{ibm_id}", "id": "395d1b77-60eb-4f8f-81bd-643c20f99bfb", "modified_at": "2021-04-08 17:28:46.819000+00:00", "name": "rowgen_peek.isx", "project_id": "{project_id}", "project_name": "dstage", "url": "{url}/data_intg/v3/migration/isx_imports/395d1b77-60eb-4f8f-81bd-643c20f99bfb?project_id={project_id}" } }
{ "entity": { "conflict_resolution": "rename", "end_time": "2021-04-08 17:28:46.819000+00:00", "import_data_flows": [ { "conflict_resolution_status": "import_flow_renamed", "end_time": "2021-04-08 17:28:46.811000+00:00", "id": "3fe0af3b-20a8-4bbe-86a8-6675c0b0d300", "job_id": "a7c0110f-920c-4a3a-aa3b-2b55ee4bdad7", "job_name": "rowgen_peek_Import_1617902925763.DataStage job", "job_type": "px_job", "name": "rowgen_peek_Import_1617902921560", "original_name": "rowgen_peek", "status": "completed", "type": "px_job" } ], "name": "Import_1617902920158", "notifications": [ { "created_at": "2021-04-08 17:28:46.819000+00:00", "id": "752eaf78-8689-41e4-8f7d-9b2286682f6f", "status": "completed" }, { "created_at": "2021-04-08 17:28:40.160000+00:00", "id": "38b13187-e18f-4905-a357-33d9981b06aa", "status": "queued" } ], "on_failure": "continue", "remaining_time": 0, "start_time": "2021-04-08 17:28:41.082000+00:00", "status": "completed", "tally": { "connections_total": 0, "deprecated": 0, "failed": 0, "imported": 1, "parameter_sets_total": 0, "pending": 0, "px_containers_total": 0, "renamed": 1, "replaced": 0, "sequence_jobs_total": 0, "skipped": 0, "table_definitions_total": 0, "total": 1, "unsupported": 0 } }, "metadata": { "created_at": "2021-04-08 17:28:40.158000+00:00", "created_by": "{ibm_id}", "id": "395d1b77-60eb-4f8f-81bd-643c20f99bfb", "modified_at": "2021-04-08 17:28:46.819000+00:00", "name": "rowgen_peek.isx", "project_id": "{project_id}", "project_name": "dstage", "url": "{url}/data_intg/v3/migration/isx_imports/395d1b77-60eb-4f8f-81bd-643c20f99bfb?project_id={project_id}" } }