Introduction
You can use a collection of IBM DataStage REST APIs to process, compile, and run flows. DataStage flows are design-time assets that contain data integration logic in JSON-based schemas.
Process flows: Use the processing API to manipulate data that you have read from a data source before writing it to a data target.
Compile flows: Use the compile API to compile flows. All flows must be compiled before you can run them.
Run flows: Use the run API to run flows. When you run a flow, the extract, transform, and load tasks that are defined in the flow design are executed.
You can use the DataStage REST APIs for both DataStage in Cloud Pak for Data as a service and DataStage in Cloud Pak for Data.
For more information on the DataStage service, see the following links:
The code examples on this tab use the client library that is provided for Java.
Maven
<dependency>
<groupId>com.ibm.cloud</groupId>
<artifactId>datastage</artifactId>
<version>0.0.1</version>
</dependency>
Gradle
compile 'com.ibm.cloud:datastage:0.0.1'
GitHub
The code examples on this tab use the client library that is provided for Node.js.
Installation
npm install datastage
GitHub
The code examples on this tab use the client library that is provided for Python.
Installation
pip install --upgrade "datastage>=0.0.1"
GitHub
Authentication
Before you can call an IBM DataStage API, you must first create an IAM bearer token. Tokens support authenticated requests without embedding service credentials in every call. Each token is valid for one hour. After a token expires, you must create a new one if you want to continue using the API. The recommended method to retrieve a token programmatically is to create an API key for your IBM Cloud identity and then use the IAM token API to exchange that key for a token. For more information on authentication, see the following links:
- Cloud Pak for Data as a Service: Authenticating to Watson services
- Cloud Pak for Data (this information is applicable to DataStage even though the topic title refers to Watson Machine Learning):
- If IAM integration was disabled during installation (default setting): Getting a bearer token with IAM integration disabled
- If IAM integration was enabled during installation: Getting a bearer token with IAM integration enabled
Replace {apikey} and {url} with your service credentials.
curl -X {request_method} -u "apikey:{apikey}" "{url}/v4/{method}"
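As a concrete sketch of that token exchange in Python (standard library only; the IAM endpoint and grant type shown are the documented IAM values, but verify the exact request shape against the IAM token API reference):

```python
import json
import urllib.parse
import urllib.request

IAM_URL = "https://iam.cloud.ibm.com/identity/token"

def build_token_request(apikey: str) -> urllib.request.Request:
    """Build the POST request that exchanges an API key for an IAM bearer token."""
    body = urllib.parse.urlencode({
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": apikey,
    }).encode("utf-8")
    return urllib.request.Request(
        IAM_URL,
        data=body,
        headers={
            "Content-Type": "application/x-www-form-urlencoded",
            "Accept": "application/json",
        },
        method="POST",
    )

def extract_token(response_body: str) -> str:
    """The JSON response carries the bearer token in 'access_token'."""
    return json.loads(response_body)["access_token"]
```

Send the request with urllib.request.urlopen(build_token_request(apikey)); the response also includes an expiration field that you can use to refresh the token before its one-hour lifetime elapses.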
Setting client options through external configuration
Example environment variables, where <SERVICE_URL> is the endpoint URL, <API_KEY> is your IAM API key, and <IAM_URL> is your IAM URL endpoint:
DATASTAGE_AUTH_TYPE=iam
DATASTAGE_URL=https://api.dataplatform.cloud.ibm.com/data_intg
DATASTAGE_APIKEY=<API_KEY>
DATASTAGE_AUTH_URL=https://iam.cloud.ibm.com/identity/token
Example of constructing the service client
import com.ibm.cloud.datastage.v3.Datastage;
Datastage service = Datastage.newInstance();
Setting client options through external configuration
Example environment variables, where <SERVICE_URL> is the endpoint URL, <API_KEY> is your IAM API key, and <IAM_URL> is your IAM URL endpoint:
DATASTAGE_AUTH_TYPE=iam
DATASTAGE_URL=https://api.dataplatform.cloud.ibm.com/data_intg
DATASTAGE_APIKEY=<API_KEY>
DATASTAGE_AUTH_URL=https://iam.cloud.ibm.com/identity/token
Example of constructing the service client
const DatastageV3 = require('datastage/datastage/v3');
const datastageService = DatastageV3.newInstance({});
Setting client options through external configuration
To authenticate when using this SDK, an external credentials file (for example, credentials.env) is required. In this file, define the four fields that are required to authenticate your SDK use against IAM.
Example environment variables, where <API_KEY> is your IAM API key:
DATASTAGE_AUTH_TYPE=iam
DATASTAGE_URL=https://api.dataplatform.cloud.ibm.com/data_intg
DATASTAGE_APIKEY=<API_KEY>
DATASTAGE_AUTH_URL=https://iam.cloud.ibm.com/identity/token
Example of constructing the service client
import os
from datastage.datastage_v3 import DatastageV3

# Path to the external credentials file
config_file = 'credentials.env'
# Chosen service name
custom_service_name = 'DATASTAGE'

datastage_service = None
if os.path.exists(config_file):
    # Point the SDK at the credentials file
    os.environ['IBM_CREDENTIALS_FILE'] = config_file
    # Create the DataStage client using the custom service name
    datastage_service = DatastageV3.new_instance(custom_service_name)
IBM Cloud URLs
The base URLs come from the service instance. To find the URL, view the service credentials by clicking the name of the service in the Resource list. Use the value of the URL. Add the method to form the complete API endpoint for your request.
https://api.dataplatform.cloud.ibm.com/data_intg
Example API request
curl --request GET --header "Content-Type: application/json" --header "Accept: application/json" --header "Authorization: Bearer <IAM token>" --url "https://api.dataplatform.cloud.ibm.com/data_intg/v3/data_intg_flows?project_id=<Project ID>&limit=10"
Replace <IAM token> and <Project ID> in this example with the values for your particular API call.
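The same request can be assembled in Python; this is a minimal sketch (standard library only) mirroring the curl example, with placeholder values for the token and project ID:

```python
import urllib.parse
import urllib.request

BASE_URL = "https://api.dataplatform.cloud.ibm.com/data_intg"

def build_list_flows_request(iam_token: str, project_id: str,
                             limit: int = 10) -> urllib.request.Request:
    """Compose base URL + method path + query string for GET /v3/data_intg_flows."""
    query = urllib.parse.urlencode({"project_id": project_id, "limit": limit})
    return urllib.request.Request(
        f"{BASE_URL}/v3/data_intg_flows?{query}",
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "Authorization": f"Bearer {iam_token}",
        },
    )
```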
Error handling
DataStage uses standard HTTP response codes to indicate whether a method completed successfully. HTTP response codes in the 2xx range indicate success. A response code in the 4xx range indicates a failure with the request, such as invalid input or missing authorization, and a response code in the 5xx range usually indicates an internal system error that cannot be resolved by the user. Response codes are listed with each method.
ErrorResponse
| Name | Description |
|---|---|
| error (string) | Description of the problem. |
| code (integer) | HTTP response code. |
| code_description (string) | Response message. |
| warnings (string) | Warnings associated with the error. |
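A client might deserialize that model as follows. This is a minimal Python sketch, assuming the fields arrive as top-level keys of a JSON body (the field names come from the table above; the wire format is an assumption):

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class ErrorResponse:
    """Mirror of the ErrorResponse fields listed above."""
    error: str
    code: int
    code_description: str
    warnings: Optional[str] = None

def parse_error(body: str) -> ErrorResponse:
    # Tolerate missing fields rather than raising KeyError
    data = json.loads(body)
    return ErrorResponse(
        error=data.get("error", ""),
        code=data.get("code", 0),
        code_description=data.get("code_description", ""),
        warnings=data.get("warnings"),
    )
```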
Methods
Delete DataStage flows
Deletes the specified data flows in a project or catalog (either project_id or catalog_id must be set).
If deleting the data flows and their runs takes some time to finish, a 202 response is returned and the deletion continues asynchronously.
All data flow runs associated with the data flows are also deleted. If a data flow is still running, it is not deleted unless the force parameter is set to true. If a data flow is still running and the force parameter is set to true, the call returns immediately with a 202 response, and the related data flows are deleted after the data flow runs are stopped.
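Given those semantics, a caller should treat a 202 differently from other success codes. A minimal sketch (the helper name and return values are illustrative, not part of the API):

```python
def classify_delete_response(status_code: int) -> str:
    """Map the documented status semantics for the delete call."""
    if status_code == 202:
        # Deletion was accepted and continues asynchronously;
        # running flows are deleted after their runs are stopped.
        return "accepted"
    if 200 <= status_code < 300:
        # Deletion completed synchronously.
        return "completed"
    return "failed"
```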
DELETE /v3/data_intg_flows
ServiceCall<Void> deleteDatastageFlows(DeleteDatastageFlowsOptions deleteDatastageFlowsOptions)
deleteDatastageFlows(params)
delete_datastage_flows(self,
id: List[str],
*,
catalog_id: str = None,
project_id: str = None,
force: bool = None,
**kwargs
) -> DetailedResponse
Request
Use the DeleteDatastageFlowsOptions.Builder to create a DeleteDatastageFlowsOptions object that contains the parameter values for the deleteDatastageFlows method.
Query Parameters
The list of DataStage flow IDs to delete.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
Whether to stop all running data flows. Running DataStage flows must be stopped before the DataStage flows can be deleted.
curl -X DELETE --location --header "Authorization: Bearer {iam_token}" "{base_url}/v3/data_intg_flows?id=[]&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
String[] ids = new String[] { flowID, cloneFlowID };
DeleteDatastageFlowsOptions deleteDatastageFlowsOptions = new DeleteDatastageFlowsOptions.Builder()
  .id(Arrays.asList(ids))
  .projectId(projectID)
  .build();
datastageService.deleteDatastageFlows(deleteDatastageFlowsOptions).execute();
const params = {
  id: [flowID, cloneFlowID],
  projectId: projectID,
};
const res = await datastageService.deleteDatastageFlows(params);
response = datastage_service.delete_datastage_flows(
    id=[createdFlowId],
    project_id=config['PROJECT_ID']
)
Response
Status Code
The requested operation is in progress.
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
Get metadata and lock information for DataStage flows
Lists the metadata, entity and lock information for DataStage flows that are contained in the specified project.
Use the following parameters to filter the results:
| Field | Match type | Example |
| --- | --- | --- |
| entity.name | Equals | entity.name=MyDataStageFlow |
| entity.name | Starts with | entity.name=starts:MyData |
| entity.description | Equals | entity.description=movement |
| entity.description | Starts with | entity.description=starts:data |
To sort the results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time (that is, the most recently created data flows are returned first).
| Field | Example |
| --- | --- |
| sort | sort=+entity.name (sort by ascending name) |
| sort | sort=-metadata.create_time (sort by descending creation time) |
Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name, use sort=-metadata.create_time,+entity.name.
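The filter and sort conventions above can be combined into a single query string. A Python sketch (standard library only; the parameter names are as documented, the helper itself is illustrative):

```python
import urllib.parse

def build_filter_query(project_id: str,
                       name_starts_with: str = None,
                       sort_keys: list = None) -> str:
    """Build the query string for GET /v3/data_intg_flows."""
    params = {"project_id": project_id}
    if name_starts_with:
        # The 'starts:' prefix selects starts-with matching
        params["entity.name"] = f"starts:{name_starts_with}"
    if sort_keys:
        # Multiple sort keys are comma-delimited
        params["sort"] = ",".join(sort_keys)
    return urllib.parse.urlencode(params)
```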
GET /v3/data_intg_flows
ServiceCall<DataFlowPagedCollection> listDatastageFlows(ListDatastageFlowsOptions listDatastageFlowsOptions)
listDatastageFlows(params)
list_datastage_flows(self,
*,
catalog_id: str = None,
project_id: str = None,
sort: str = None,
start: str = None,
limit: int = None,
entity_name: str = None,
entity_description: str = None,
**kwargs
) -> DetailedResponse
Request
Use the ListDatastageFlowsOptions.Builder to create a ListDatastageFlowsOptions object that contains the parameter values for the listDatastageFlows method.
Query Parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
The page token indicating where to start paging from.
The limit of the number of items to return, for example, limit=50. If not specified, a default of 100 is used.
Possible values: value ≥ 1
Example: 100
Filter results based on the specified name.
Example: MyDataStageFlow
Filter results based on the specified description.
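Because results are paged (the start token in the request and the next link in the response), listing every flow means following the next-page token until it is absent. A sketch, where fetch_page stands in for the actual list call:

```python
from urllib.parse import urlparse, parse_qs

def iter_all_flows(fetch_page):
    """Yield every flow by following the 'next' link's start token.

    fetch_page(start) is a stand-in for the list call with the given page
    token; it must return a dict shaped like DataFlowPagedCollection.
    """
    start = None
    while True:
        page = fetch_page(start)
        yield from page.get("data_flows", [])
        next_href = page.get("next", {}).get("href")
        if not next_href:
            return  # no further pages
        # Extract the start token from the next-page URI
        start = parse_qs(urlparse(next_href).query)["start"][0]
```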
curl -X GET --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23&limit=100"
ListDatastageFlowsOptions listDatastageFlowsOptions = new ListDatastageFlowsOptions.Builder()
  .projectId(projectID)
  .limit(Long.valueOf("100"))
  .build();
Response<DataFlowPagedCollection> response = datastageService.listDatastageFlows(listDatastageFlowsOptions).execute();
DataFlowPagedCollection dataFlowPagedCollection = response.getResult();
System.out.println(dataFlowPagedCollection);
const params = {
  projectId: projectID,
  sort: 'name',
  limit: 100,
};
const res = await datastageService.listDatastageFlows(params);
data_flow_paged_collection = datastage_service.list_datastage_flows(
    project_id=config['PROJECT_ID'],
    limit=100
).get_result()
print(json.dumps(data_flow_paged_collection, indent=2))
Response
A page from a collection of DataStage flows.
A page from a collection of DataStage flows.
URI of a resource.
URI of a resource.
The number of data flows requested to be returned.
URI of a resource.
URI of a resource.
The total number of DataStage flows available.
A page from a collection of DataStage flows.
A page from a collection of DataStage flows.
Metadata information for datastage flow.
The underlying DataStage flow definition.
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
Entity information for a DataStage lock object.
DataStage flow ID that is locked.
Requester of the lock.
entity
Metadata information for a DataStage lock object.
Lock status.
metadata
lock
The name of the DataStage flow.
The rules of visibility for an asset.
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
rov
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
entity
System metadata about an asset.
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
catalog_id
orproject_id
is required.The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The ID of the project which contains the asset.
catalog_id
orproject_id
is required.This is a unique string that uniquely identifies an asset.
size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
usage
metadata
dataFlows
URI of a resource.
URI of a resource.
first
URI of a resource.
URI of a resource.
last
The number of data flows requested to be returned.
URI of a resource.
URI of a resource.
next
URI of a resource.
URI of a resource.
prev
The total number of DataStage flows available.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{
  "data_flows": [
    {
      "entity": {
        "data_intg_flow": {
          "mime_type": "application/json",
          "dataset": false
        }
      },
      "metadata": {
        "asset_id": "{asset_id}",
        "asset_type": "data_intg_flow",
        "create_time": "2021-04-03 15:32:55+00:00",
        "creator_id": "IBMid-xxxxxxxxx",
        "description": " ",
        "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?project_id={project_id}",
        "name": "{job_name}",
        "project_id": "{project_id}",
        "resource_key": "{project_id}/data_intg_flow/{job_name}",
        "size": 5780,
        "usage": {
          "access_count": 0,
          "last_access_time": "2021-04-03 15:33:01.320000+00:00",
          "last_accessor_id": "IBMid-xxxxxxxxx",
          "last_modification_time": "2021-04-03 15:33:01.320000+00:00",
          "last_modifier_id": "IBMid-xxxxxxxxx"
        }
      }
    }
  ],
  "first": {
    "href": "{url}/data_intg/v3/data_intg_flows?project_id={project_id}&limit=2"
  },
  "next": {
    "href": "{url}/data_intg/v3/data_intg_flows?project_id={project_id}&limit=2&start=g1AAAADOeJzLYWBgYMpgTmHQSklKzi9KdUhJMjTUS8rVTU7WLS3WLc4vLcnQNbLQS87JL01JzCvRy0styQHpyWMBkgwNQOr____9WWCxXCAhYmRgZKhrYKJrYBxiaGplbGRlahqVaJCFZocB8XYcgNhxHrcdhlamhlGJ-llZAD4lOMI"
  },
  "total_count": 135
}
Create DataStage flow
Creates a DataStage flow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID that the data flow was created in.
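The pipeline_flows body follows the pipeline-flow document structure described in the request fields below. As a hedged sketch, the skeleton might look like this in Python; the key names follow the fields listed for the request (doc_type, version, json_schema, id, primary_pipeline, pipelines, schemas, app_data), while the specific values, including the schema URL and the runtime reference, are placeholders to verify against a flow exported from DataStage:

```python
def minimal_pipeline_flow(flow_id: str, pipeline_id: str) -> dict:
    """Skeleton of a pipeline-flow document (placeholder values)."""
    return {
        "doc_type": "pipeline",
        "version": "3.0",                 # pipeline flow version
        "json_schema": "https://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json",
        "id": flow_id,                    # document identifier, GUID recommended
        "primary_pipeline": pipeline_id,  # reference to the main pipeline
        "pipelines": [
            {
                "id": pipeline_id,
                "nodes": [],              # array of pipeline nodes
                "runtime_ref": "pxOsh",   # reference to the runtime type (placeholder)
            }
        ],
        "schemas": [],                    # data record schemas used in the pipeline
        "app_data": {},                   # app-specific data
    }
```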
POST /v3/data_intg_flows
ServiceCall<DataIntgFlow> createDatastageFlows(CreateDatastageFlowsOptions createDatastageFlowsOptions)
createDatastageFlows(params)
create_datastage_flows(self,
data_intg_flow_name: str,
*,
pipeline_flows: 'PipelineJson' = None,
catalog_id: str = None,
project_id: str = None,
asset_category: str = None,
**kwargs
) -> DetailedResponse
Request
Use the CreateDatastageFlowsOptions.Builder to create a CreateDatastageFlowsOptions object that contains the parameter values for the createDatastageFlows method.
Query Parameters
The data flow name.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.
Allowable values: [system, user]
Pipeline JSON to be attached.
Pipeline flow to be stored.
The createDatastageFlows options.
The data flow name.
Pipeline flow to be stored.
Object containing app-specific data.
The document type.
Array of parameter set references.
Document identifier, GUID recommended.
Refers to the JSON schema used to validate documents of this type.
Parameters for the flow document.
Object containing app-specific data.
A brief description of the DataStage flow.
Unique identifier.
Name of the pipeline.
Array of pipeline nodes.
Reference to the runtime type.
pipelines
Reference to the primary (main) pipeline flow within the document.
Runtime information for pipeline flow.
Array of data record schemas used in the pipeline.
Pipeline flow version.
pipelineFlows
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.
Allowable values: [system, user]
parameters
The data flow name.
Pipeline flow to be stored.
Object containing app-specific data.
The document type.
Array of parameter set references.
Document identifier, GUID recommended.
Refers to the JSON schema used to validate documents of this type.
Parameters for the flow document.
Object containing app-specific data.
A brief description of the DataStage flow.
Unique identifier.
Name of the pipeline.
Array of pipeline nodes.
Reference to the runtime type.
pipelines
Reference to the primary (main) pipeline flow within the document.
Runtime information for pipeline flow.
Array of data record schemas used in the pipeline.
Pipeline flow version.
pipelineFlows
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.
Allowable values: [system, user]
parameters
The data flow name.
Pipeline flow to be stored.
Object containing app-specific data.
The document type.
Array of parameter set references.
Document identifier, GUID recommended.
Refers to the JSON schema used to validate documents of this type.
Parameters for the flow document.
Object containing app-specific data.
A brief description of the DataStage flow.
Unique identifier.
Name of the pipeline.
Array of pipeline nodes.
Reference to the runtime type.
pipelines
Reference to the primary (main) pipeline flow within the document.
Runtime information for pipeline flow.
Array of data record schemas used in the pipeline.
Pipeline flow version.
pipeline_flows
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.
Allowable values: [system, user]
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" --header "Content-Type: application/json;charset=utf-8" --data '{}' "{base_url}/v3/data_intg_flows?data_intg_flow_name={data_intg_flow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
PipelineJson exampleFlow = PipelineFlowHelper.buildPipelineFlow(flowJson);

CreateDatastageFlowsOptions createDatastageFlowsOptions = new CreateDatastageFlowsOptions.Builder()
  .dataIntgFlowName(flowName)
  .pipelineFlows(exampleFlow)
  .projectId(projectID)
  .build();

Response<DataIntgFlow> response = datastageService.createDatastageFlows(createDatastageFlowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();
System.out.println(dataIntgFlow);
const pipelineJsonFromFile = JSON.parse(fs.readFileSync('testInput/rowgen_peek.json', 'utf-8'));

const params = {
  dataIntgFlowName,
  pipelineFlows: pipelineJsonFromFile,
  projectId: projectID,
  assetCategory: 'system',
};

const res = await datastageService.createDatastageFlows(params);
data_intg_flow = datastage_service.create_datastage_flows(
    data_intg_flow_name='testFlowJob1',
    pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleFlow.json'),
    project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(data_intg_flow, indent=2))
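If you call the REST endpoint directly rather than through an SDK, the request shown in the curl example can be assembled in a few lines. The helper below is a hypothetical sketch (not part of any IBM client library); it only builds the endpoint URL and query parameters and enforces the documented rule that either project_id or catalog_id must be supplied:

```python
def build_create_flow_request(base_url, flow_name, project_id=None, catalog_id=None):
    """Assemble URL and query parameters for POST /v3/data_intg_flows.

    Hypothetical helper: either project_id or catalog_id must be set,
    per the API contract described above.
    """
    if not (project_id or catalog_id):
        raise ValueError("Either project_id or catalog_id is required")
    params = {"data_intg_flow_name": flow_name}
    if project_id:
        params["project_id"] = project_id
    else:
        params["catalog_id"] = catalog_id
    return f"{base_url}/v3/data_intg_flows", params
```

The returned URL and params dict can then be passed to any HTTP client along with the IAM bearer token header.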
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for datastage flow.
The underlying DataStage flow definition.
System metadata about an asset.
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for datastage flow.
The underlying DataStage flow definition.
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
Entity information for a DataStage lock object.
DataStage flow ID that is locked.
Requester of the lock.
entity
Metadata information for a DataStage lock object.
Lock status.
metadata
lock
The name of the DataStage flow.
The rules of visibility for an asset.
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
rov
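The documented mode values can be decoded with a one-line mapping. This is a hypothetical convenience helper, not part of the API:

```python
def visibility(mode):
    # Documented ROV modes: 0 = public, 8 = private, 16 = hidden.
    return {0: "public", 8: "private", 16: "hidden"}.get(mode, "unknown")
```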
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
entity
System metadata about an asset.
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The ID of the project which contains the asset.
Either catalog_id or project_id is required.
This is a unique string that uniquely identifies an asset.
size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
usage
metadata
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for datastage flow.
The underlying DataStage flow definition.
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
Entity information for a DataStage lock object.
DataStage flow ID that is locked.
Requester of the lock.
entity
Metadata information for a DataStage lock object.
Lock status.
metadata
lock
The name of the DataStage flow.
The rules of visibility for an asset.
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
rov
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
entity
System metadata about an asset.
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The ID of the project which contains the asset.
Either catalog_id or project_id is required.
This is a unique string that uniquely identifies an asset.
size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
usage
metadata
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for datastage flow.
The underlying DataStage flow definition.
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
Entity information for a DataStage lock object.
DataStage flow ID that is locked.
Requester of the lock.
entity
Metadata information for a DataStage lock object.
Lock status.
metadata
lock
The name of the DataStage flow.
The rules of visibility for an asset.
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
rov
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
entity
System metadata about an asset.
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The ID of the project which contains the asset.
Either catalog_id or project_id is required.
This is a unique string that uniquely identifies an asset.
size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
usage
metadata
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}", "name": "{job_name}", "origin_country": "US", "resource_key": "{project_id}/data_intg_flow/{job_name}" } }
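Subsequent calls (get, update, delete) need identifiers from the create response. A minimal sketch, assuming a response shaped like the example above and already parsed from JSON:

```python
def flow_summary(response):
    # Extract the identifiers that later calls on this flow require.
    meta = response["metadata"]
    return {"asset_id": meta["asset_id"], "name": meta["name"], "href": meta["href"]}

# Minimal stand-in for a parsed create response (placeholder values).
example = {
    "entity": {"data_intg_flow": {"dataset": False, "mime_type": "application/json"}},
    "metadata": {
        "asset_id": "asset-123",
        "asset_type": "data_intg_flow",
        "href": "https://example.com/data_intg/v3/data_intg_flows/asset-123",
        "name": "testFlowJob1",
        "origin_country": "US",
    },
}
```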
Get DataStage flow
Lists the DataStage flow that is contained in the specified project. Attachments, metadata, and a limited number of attributes from the entity of each DataStage flow are returned.
GET /v3/data_intg_flows/{data_intg_flow_id}
ServiceCall<DataIntgFlowJson> getDatastageFlows(GetDatastageFlowsOptions getDatastageFlowsOptions)
getDatastageFlows(params)
get_datastage_flows(self,
data_intg_flow_id: str,
*,
catalog_id: str = None,
project_id: str = None,
**kwargs
) -> DetailedResponse
Request
Use the GetDatastageFlowsOptions.Builder
to create a GetDatastageFlowsOptions
object that contains the parameter values for the getDatastageFlows
method.
Path Parameters
The DataStage flow ID to use.
Query Parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The getDatastageFlows options.
The DataStage flow ID to use.
The ID of the catalog to use.
Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
parameters
The DataStage flow ID to use.
The ID of the catalog to use.
Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
parameters
The DataStage flow ID to use.
The ID of the catalog to use.
Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
curl -X GET --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows/{data_intg_flow_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
GetDatastageFlowsOptions getDatastageFlowsOptions = new GetDatastageFlowsOptions.Builder()
  .dataIntgFlowId(flowID)
  .projectId(projectID)
  .build();

Response<DataIntgFlowJson> response = datastageService.getDatastageFlows(getDatastageFlowsOptions).execute();
DataIntgFlowJson dataIntgFlowJson = response.getResult();
System.out.println(dataIntgFlowJson);
const params = {
  dataIntgFlowId: assetID,
  projectId: projectID,
};

const res = await datastageService.getDatastageFlows(params);
data_intg_flow_json = datastage_service.get_datastage_flows(
    data_intg_flow_id=createdFlowId,
    project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(data_intg_flow_json, indent=2))
Response
A pipeline JSON containing operations to apply to source(s).
Pipeline flow to be stored.
The underlying DataStage flow definition.
System metadata about an asset.
A pipeline JSON containing operations to apply to source(s).
Pipeline flow to be stored.
Object containing app-specific data.
The document type.
Array of parameter set references.
Document identifier, GUID recommended.
Refers to the JSON schema used to validate documents of this type.
Parameters for the flow document.
Object containing app-specific data.
A brief description of the DataStage flow.
Unique identifier.
Name of the pipeline.
Array of pipeline nodes.
Reference to the runtime type.
pipelines
Reference to the primary (main) pipeline flow within the document.
Runtime information for pipeline flow.
Array of data record schemas used in the pipeline.
Pipeline flow version.
attachments
The underlying DataStage flow definition.
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
Entity information for a DataStage lock object.
DataStage flow ID that is locked.
Requester of the lock.
entity
Metadata information for a DataStage lock object.
Lock status.
metadata
lock
The name of the DataStage flow.
The rules of visibility for an asset.
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
rov
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
entity
System metadata about an asset.
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The ID of the project which contains the asset.
Either catalog_id or project_id is required.
This is a unique string that uniquely identifies an asset.
size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
usage
metadata
A pipeline JSON containing operations to apply to source(s).
Pipeline flow to be stored.
Object containing app-specific data.
The document type.
Array of parameter set references.
Document identifier, GUID recommended.
Refers to the JSON schema used to validate documents of this type.
Parameters for the flow document.
Object containing app-specific data.
A brief description of the DataStage flow.
Unique identifier.
Name of the pipeline.
Array of pipeline nodes.
Reference to the runtime type.
pipelines
Reference to the primary (main) pipeline flow within the document.
Runtime information for pipeline flow.
Array of data record schemas used in the pipeline.
Pipeline flow version.
attachments
The underlying DataStage flow definition.
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
Entity information for a DataStage lock object.
DataStage flow ID that is locked.
Requester of the lock.
entity
Metadata information for a DataStage lock object.
Lock status.
metadata
lock
The name of the DataStage flow.
The rules of visibility for an asset.
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
rov
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
entity
System metadata about an asset.
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The ID of the project which contains the asset.
Either catalog_id or project_id is required.
This is a unique string that uniquely identifies an asset.
size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
usage
metadata
A pipeline JSON containing operations to apply to source(s).
Pipeline flow to be stored.
Object containing app-specific data.
The document type.
Array of parameter set references.
Document identifier, GUID recommended.
Refers to the JSON schema used to validate documents of this type.
Parameters for the flow document.
Object containing app-specific data.
A brief description of the DataStage flow.
Unique identifier.
Name of the pipeline.
Array of pipeline nodes.
Reference to the runtime type.
pipelines
Reference to the primary (main) pipeline flow within the document.
Runtime information for pipeline flow.
Array of data record schemas used in the pipeline.
Pipeline flow version.
attachments
The underlying DataStage flow definition.
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
Entity information for a DataStage lock object.
DataStage flow ID that is locked.
Requester of the lock.
entity
Metadata information for a DataStage lock object.
Lock status.
metadata
lock
The name of the DataStage flow.
The rules of visibility for an asset.
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
rov
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
entity
System metadata about an asset.
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset.
Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
name of the asset.
origin of the asset.
The ID of the project which contains the asset.
Either catalog_id or project_id is required.
This is a unique string that uniquely identifies an asset.
size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
usage
metadata
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
Unexpected error.
{ "attachments": { "app_data": { "datastage": { "external_parameters": [] } }, "doc_type": "pipeline", "id": "98cc1fa0-0fd8-4d55-9b27-d477096b4b37", "json_schema": "{url}/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json", "pipelines": [ { "app_data": { "datastage": { "runtime_column_propagation": "false" }, "ui_data": { "comments": [] } }, "id": "287b2b30-95ff-4cc8-b18f-92e23c464134", "nodes": [ { "app_data": { "datastage": { "outputs_order": "46e18367-1820-4fe8-8c7c-d8badbc76aa3" }, "ui_data": { "image": "../graphics/palette/PxRowGenerator.svg", "label": "RowGen_1", "x_pos": 239, "y_pos": 236 } }, "id": "77e6d535-8312-4692-8850-c129dcf921ed", "op": "PxRowGenerator", "outputs": [ { "app_data": { "datastage": { "is_source_of_link": "55b884a7-9cfb-4e02-802b-82444ee95bb5" }, "ui_data": { "label": "outPort" } }, "id": "46e18367-1820-4fe8-8c7c-d8badbc76aa3", "parameters": { "buf_free_run": 50, "disk_write_inc": 1048576, "max_mem_buf_size": 3145728, "queue_upper_size": 0, "records": 10 }, "schema_ref": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "parameters": { "input_count": 0, "output_count": 1 }, "type": "binding" }, { "app_data": { "datastage": { "inputs_order": "9e842525-7bbf-4a42-ae95-49ae325e0c87" }, "ui_data": { "image": "../graphics/palette/informix.svg", "label": "informixTgt", "x_pos": 690, "y_pos": 229 } }, "connection": { "project_ref": "{project_id}", "properties": { "create_statement": "CREATE TABLE custid(customer_num int)", "table_action": "append", "table_name": "custid", "write_mode": "insert" }, "ref": "85193161-aa63-4cc5-80e7-7bfcdd59c438" }, "id": "8b4933d9-32c0-4c40-9c47-d8791ab12baf", "inputs": [ { "app_data": { "datastage": {}, "ui_data": { "label": "inPort" } }, "id": "9e842525-7bbf-4a42-ae95-49ae325e0c87", "links": [ { "app_data": { "datastage": {}, "ui_data": { "decorations": [ { "class_name": "", "hotspot": false, "id": "Link_3", "label": "Link_3", "outline": true, "path": "", "position": "middle" } ] } }, "id": 
"55b884a7-9cfb-4e02-802b-82444ee95bb5", "link_name": "Link_3", "node_id_ref": "77e6d535-8312-4692-8850-c129dcf921ed", "port_id_ref": "46e18367-1820-4fe8-8c7c-d8badbc76aa3", "type_attr": "PRIMARY" } ], "parameters": { "part_coll": "part_type" }, "schema_ref": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "op": "informix", "parameters": { "input_count": 1, "output_count": 0 }, "type": "binding" } ], "runtime_ref": "pxOsh" } ], "primary_pipeline": "287b2b30-95ff-4cc8-b18f-92e23c464134", "schemas": [ { "fields": [ { "app_data": { "column_reference": "customer_num", "is_unicode_string": false, "odbc_type": "INTEGER", "table_def": "Saved\\\\Link_3\\\\ifx_customer", "type_code": "INT32" }, "metadata": { "decimal_precision": 0, "decimal_scale": 0, "is_key": false, "is_signed": true, "item_index": 0, "max_length": 0, "min_length": 0 }, "name": "customer_num", "nullable": false, "type": "integer" } ], "id": "07fed318-4370-4c95-bbbc-16d4a91421bb" } ], "version": "3.0" }, "entity": { "data_intg_flow": { "mime_type": "application/json", "dataset": false } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "create_time": "2021-04-08 17:14:08+00:00", "creator_id": "IBMid-xxxxxxxxxx", "description": "", "name": "{job_name}", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 2712, "usage": { "access_count": 0, "last_access_time": "2021-04-08 17:14:10.193000+00:00", "last_accessor_id": "IBMid-xxxxxxxxxx", "last_modification_time": "2021-04-08 17:14:10.193000+00:00", "last_modifier_id": "IBMid-xxxxxxxxxx" } } }
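The pipeline JSON returned by this endpoint can be traversed to inventory the stages in a flow. A sketch with a hypothetical helper, run over a minimal flow shaped like the example response:

```python
def list_stage_ops(flow_json):
    # Collect the operator ("op") of every node across all pipelines
    # in the "attachments" pipeline document.
    ops = []
    for pipeline in flow_json.get("attachments", {}).get("pipelines", []):
        for node in pipeline.get("nodes", []):
            if "op" in node:
                ops.append(node["op"])
    return ops

# Minimal stand-in mirroring the example response's structure.
minimal_flow = {
    "attachments": {
        "doc_type": "pipeline",
        "pipelines": [
            {"id": "p1", "nodes": [
                {"id": "n1", "op": "PxRowGenerator", "type": "binding"},
                {"id": "n2", "op": "informix", "type": "binding"},
            ]}
        ],
    }
}
```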
Update DataStage flow
Modifies a data flow in the specified project or catalog (either project_id
or catalog_id
must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.
PUT /v3/data_intg_flows/{data_intg_flow_id}
ServiceCall<DataIntgFlow> updateDatastageFlows(UpdateDatastageFlowsOptions updateDatastageFlowsOptions)
updateDatastageFlows(params)
update_datastage_flows(self,
data_intg_flow_id: str,
data_intg_flow_name: str,
*,
pipeline_flows: 'PipelineJson' = None,
catalog_id: str = None,
project_id: str = None,
**kwargs
) -> DetailedResponse
Request
Use the UpdateDatastageFlowsOptions.Builder to create a UpdateDatastageFlowsOptions object that contains the parameter values for the updateDatastageFlows method.
Path Parameters
The DataStage flow ID to use.
Query Parameters
The data flow name.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
Pipeline json to be attached.
Pipeline flow to be stored.
The updateDatastageFlows options.
The DataStage flow ID to use.
The data flow name.
Pipeline flow to be stored.
Object containing app-specific data.
The document type.
Array of parameter set references.
Document identifier, GUID recommended.
Refers to the JSON schema used to validate documents of this type.
Parameters for the flow document.
Object containing app-specific data.
A brief description of the DataStage flow.
Unique identifier.
Name of the pipeline.
Array of pipeline nodes.
Reference to the runtime type.
pipelines
Reference to the primary (main) pipeline flow within the document.
Runtime information for pipeline flow.
Array of data record schemas used in the pipeline.
Pipeline flow version.
pipelineFlows
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
curl -X PUT --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" --header "Content-Type: application/json;charset=utf-8" --data '{}' "{base_url}/v3/data_intg_flows/{data_intg_flow_id}?data_intg_flow_name={data_intg_flow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
PipelineJson exampleFlowUpdated = PipelineFlowHelper.buildPipelineFlow(updatedFlowJson);

UpdateDatastageFlowsOptions updateDatastageFlowsOptions = new UpdateDatastageFlowsOptions.Builder()
  .dataIntgFlowId(flowID)
  .dataIntgFlowName(flowName)
  .pipelineFlows(exampleFlowUpdated)
  .projectId(projectID)
  .build();

Response<DataIntgFlow> response = datastageService.updateDatastageFlows(updateDatastageFlowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();
System.out.println(dataIntgFlow);
const params = {
  dataIntgFlowId: assetID,
  dataIntgFlowName,
  pipelineFlows: pipelineJsonFromFile,
  projectId: projectID,
  assetCategory: 'system',
};

const res = await datastageService.updateDatastageFlows(params);
data_intg_flow = datastage_service.update_datastage_flows(
  data_intg_flow_id=createdFlowId,
  data_intg_flow_name='testFlowJob1Updated',
  pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleFlowUpdated.json'),
  project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(data_intg_flow, indent=2))
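For clients that call the REST endpoint directly rather than through an SDK, the request URL can be assembled from the path and query parameters described above. A minimal sketch in Python; the helper name and validation are illustrative, not part of the API:

```python
from urllib.parse import urlencode

# Illustrative helper: build the PUT /v3/data_intg_flows/{id} URL shown in the
# curl example. The API requires that catalog_id or project_id be supplied.
def update_flow_url(base_url, flow_id, flow_name, project_id=None, catalog_id=None):
    if project_id is None and catalog_id is None:
        raise ValueError("Either project_id or catalog_id is required")
    params = {"data_intg_flow_name": flow_name}
    if project_id is not None:
        params["project_id"] = project_id
    if catalog_id is not None:
        params["catalog_id"] = catalog_id
    return f"{base_url}/v3/data_intg_flows/{flow_id}?{urlencode(params)}"

print(update_flow_url("https://example.com", "abc123", "myFlow",
                      project_id="bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"))
```

The resulting URL is then issued with the Authorization, Accept, and Content-Type headers shown in the curl example.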
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
System metadata about an asset.
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
Entity information for a DataStage lock object.
DataStage flow ID that is locked.
Requester of the lock.
entity
Metadata information for a DataStage lock object.
Lock status.
metadata
lock
The name of the DataStage flow.
The rules of visibility for an asset.
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
rov
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
entity
System metadata about an asset.
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset. Either catalog_id or project_id is required.
This is a unique string that uniquely identifies an asset.
Size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
usage
metadata
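The rules-of-visibility mode values described above (0, 8, and 16) can be decoded with a small helper. A hedged sketch; the helper name is illustrative and only the numeric values come from the response model:

```python
# Decode the ROV "mode" field documented above:
# 0 = public (searchable and viewable by all),
# 8 = private (searchable by all, viewable only with view permission),
# 16 = hidden (searchable only by users with view permission).
VISIBILITY_MODES = {0: "public", 8: "private", 16: "hidden"}

def visibility(mode: int) -> str:
    return VISIBILITY_MODES.get(mode, f"unknown ({mode})")

print(visibility(8))  # prints "private"
```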
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "attachments": [ { "asset_type": "data_intg_flow", "attachment_id": "9081dd6b-0ab7-47b5-8233-c10c6e64509d", "href": "{url}/v2/assets/{asset_id}/attachments/9081dd6b-0ab7-47b5-8233-c10c6e64509d?project_id={project_id}", "mime": "application/json", "name": "data_intg_flows", "object_key": "data_intg_flow/{project_id}{asset_id}", "object_key_is_read_only": false, "private_url": false } ], "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "catalog_id": "{catalog_id}", "create_time": "2021-04-08 17:14:08+00:00", "creator_id": "IBMid-xxxxxxxxxx", "description": "", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?catalog_id={catalog_id}", "name": "{job_name}", "origin_country": "us", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_flow/{job_name}", "size": 2712, "tags": [], "usage": { "access_count": 0, "last_access_time": "2021-04-08 17:21:33.936000+00:00", "last_accessor_id": "IBMid-xxxxxxxxxx" } } }
Clone DataStage flow
Create a DataStage flow in the specified project or catalog based on an existing DataStage flow in the same project or catalog.
POST /v3/data_intg_flows/{data_intg_flow_id}/clone
ServiceCall<DataIntgFlow> cloneDatastageFlows(CloneDatastageFlowsOptions cloneDatastageFlowsOptions)
cloneDatastageFlows(params)
clone_datastage_flows(self,
data_intg_flow_id: str,
*,
catalog_id: str = None,
project_id: str = None,
**kwargs
) -> DetailedResponse
Request
Use the CloneDatastageFlowsOptions.Builder to create a CloneDatastageFlowsOptions object that contains the parameter values for the cloneDatastageFlows method.
Path Parameters
The DataStage flow ID to use.
Query Parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The cloneDatastageFlows options.
The DataStage flow ID to use.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows/{data_intg_flow_id}/clone?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
CloneDatastageFlowsOptions cloneDatastageFlowsOptions = new CloneDatastageFlowsOptions.Builder()
  .dataIntgFlowId(flowID)
  .projectId(projectID)
  .build();

Response<DataIntgFlow> response = datastageService.cloneDatastageFlows(cloneDatastageFlowsOptions).execute();
DataIntgFlow dataIntgFlow = response.getResult();
System.out.println(dataIntgFlow);
const params = {
  dataIntgFlowId: assetID,
  projectId: projectID,
};

const res = await datastageService.cloneDatastageFlows(params);
data_intg_flow = datastage_service.clone_datastage_flows(
  data_intg_flow_id=createdFlowId,
  project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(data_intg_flow, indent=2))
Response
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
System metadata about an asset.
A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).
Metadata information for the DataStage flow.
The underlying DataStage flow definition.
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
Entity information for a DataStage lock object.
DataStage flow ID that is locked.
Requester of the lock.
entity
Metadata information for a DataStage lock object.
Lock status.
metadata
lock
The name of the DataStage flow.
The rules of visibility for an asset.
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
rov
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
entity
System metadata about an asset.
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset. Either catalog_id or project_id is required.
This is a unique string that uniquely identifies an asset.
Size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
usage
metadata
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "entity": { "data_intg_flow": { "dataset": false, "mime_type": "application/json" } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_flow", "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}", "name": "{job_name_copy}", "origin_country": "US", "resource_key": "{project_id}/data_intg_flow/{job_name_copy}" } }
Compile DataStage flow to generate runtime assets
Generate the runtime assets for a DataStage flow in the specified project or catalog for a specified runtime type. Either project_id or catalog_id must be specified.
POST /v3/ds_codegen/compile/{data_intg_flow_id}
ServiceCall<FlowCompileResponse> compileDatastageFlows(CompileDatastageFlowsOptions compileDatastageFlowsOptions)
compileDatastageFlows(params)
compile_datastage_flows(self,
data_intg_flow_id: str,
*,
catalog_id: str = None,
project_id: str = None,
runtime_type: str = None,
**kwargs
) -> DetailedResponse
Request
Use the CompileDatastageFlowsOptions.Builder to create a CompileDatastageFlowsOptions object that contains the parameter values for the compileDatastageFlows method.
Path Parameters
The DataStage flow ID to use.
Query Parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The type of the runtime to use, for example dspxosh or Spark. If not provided, the runtime type is taken from the pipeline flow if available; otherwise the default of dspxosh is used.
The compileDatastageFlows options.
The DataStage flow ID to use.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
The type of the runtime to use, for example dspxosh or Spark. If not provided, the runtime type is taken from the pipeline flow if available; otherwise the default of dspxosh is used.
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/ds_codegen/compile/{data_intg_flow_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
CompileDatastageFlowsOptions compileDatastageFlowsOptions = new CompileDatastageFlowsOptions.Builder()
  .dataIntgFlowId(flowID)
  .projectId(projectID)
  .build();

Response<FlowCompileResponse> response = datastageService.compileDatastageFlows(compileDatastageFlowsOptions).execute();
FlowCompileResponse flowCompileResponse = response.getResult();
System.out.println(flowCompileResponse);
const params = {
  dataIntgFlowId: assetID,
  projectId: projectID,
};

const res = await datastageService.compileDatastageFlows(params);
flow_compile_response = datastage_service.compile_datastage_flows(
  data_intg_flow_id=createdFlowId,
  project_id=config['PROJECT_ID']
).get_result()

print(json.dumps(flow_compile_response, indent=2))
Response
Describes the compile response model.
Compile result for DataStage flow.
message
Compile response type. For example ok or error.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
The request object contains invalid information. The server is not able to process the request.
Unexpected error.
{ "message": { "flowName": "{job_name}", "flow_name": "{job_name}", "result": "success", "runtime_code": "{compiled_OSH}", "runtime_type": "dspxosh" }, "type": "ok" }
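Per the response model, the type field distinguishes a successful compile ("ok") from a failed one ("error"), with the per-flow result carried in message. A hedged sketch of interpreting the response; the helper name and sample values are illustrative:

```python
# Illustrative helper: check the compile response "type" field described above.
def compile_succeeded(response: dict) -> bool:
    """Return True when the compile response reports success ("ok")."""
    return response.get("type") == "ok"

# A response shaped like the example above (values abbreviated).
sample = {
    "message": {"flow_name": "myFlow", "result": "success", "runtime_type": "dspxosh"},
    "type": "ok",
}
print(compile_succeeded(sample))  # True
```

On failure ("error"), the message object carries the compile diagnostics instead of the generated runtime code.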
Delete DataStage subflows
Deletes the specified data subflows in a project or catalog (either project_id or catalog_id must be set). If the deletion takes some time to finish, a 202 response is returned and the deletion continues asynchronously.
Deletes the specified data subflows in a project or catalog (either project_id
or catalog_id
must be set).
If the deletion of the data subflows will take some time to finish, then a 202 response will be returned and the deletion will continue asynchronously.
Deletes the specified data subflows in a project or catalog (either project_id
or catalog_id
must be set).
If the deletion of the data subflows will take some time to finish, then a 202 response will be returned and the deletion will continue asynchronously.
Deletes the specified data subflows in a project or catalog (either project_id or catalog_id must be set). If the deletion takes some time to finish, a 202 response is returned and the deletion continues asynchronously.
DELETE /v3/data_intg_flows/subflows
ServiceCall<Void> deleteDatastageSubflows(DeleteDatastageSubflowsOptions deleteDatastageSubflowsOptions)
deleteDatastageSubflows(params)
delete_datastage_subflows(self,
id: List[str],
*,
catalog_id: str = None,
project_id: str = None,
**kwargs
) -> DetailedResponse
Request
Use the DeleteDatastageSubflowsOptions.Builder
to create a DeleteDatastageSubflowsOptions
object that contains the parameter values for the deleteDatastageSubflows
method.
Query Parameters
The list of DataStage subflow IDs to delete.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The deleteDatastageSubflows options.
The list of DataStage subflow IDs to delete.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
parameters
The list of DataStage subflow IDs to delete.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
parameters
The list of DataStage subflow IDs to delete.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
curl -X DELETE --location --header "Authorization: Bearer {iam_token}" "{base_url}/v3/data_intg_flows/subflows?id=[]&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
String[] ids = new String[] {subflowID, cloneSubflowID}; DeleteDatastageSubflowsOptions deleteDatastageSubflowsOptions = new DeleteDatastageSubflowsOptions.Builder() .id(Arrays.asList(ids)) .projectId(projectID) .build(); datastageService.deleteDatastageSubflows(deleteDatastageSubflowsOptions).execute();
const params = { id: [assetID, cloneID], projectId: projectID, }; const res = await datastageService.deleteDatastageSubflows(params);
response = datastage_service.delete_datastage_subflows( id=createdSubflowId, project_id=config['PROJECT_ID'] )
Response
Status Code
The requested operation is in progress.
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
No Sample Response
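Because deletion can continue asynchronously (202) or complete immediately (the success status above), callers should treat both as success but only poll after a 202. A small illustrative helper in plain Python (the function name is ours; with the Python SDK the status code comes from DetailedResponse.get_status_code()):

```python
def interpret_delete_status(status_code: int) -> str:
    """Map the delete-subflows HTTP status to a coarse outcome."""
    if status_code == 202:
        return "in_progress"  # deletion accepted, continuing asynchronously
    if status_code == 200:
        return "done"         # deletion completed synchronously
    return "error"            # 401/403/400/5xx: inspect the response body

print(interpret_delete_status(202))  # in_progress
```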
Get metadata and lock information for DataStage subflows
Lists the metadata, entity and lock information for DataStage subflows that are contained in the specified project.
Lists the metadata, entity and lock information for DataStage subflows that are contained in the specified project.
Use the following parameters to filter the results:

| Field              | Match type  | Example                         |
| ------------------ | ----------- | ------------------------------- |
| entity.name        | Equals      | entity.name=MyDataStageSubFlow  |
| entity.name        | Starts with | entity.name=starts:MyData       |
| entity.description | Equals      | entity.description=movement     |
| entity.description | Starts with | entity.description=starts:data  |

To sort the results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time (that is, the most recently created data flows are returned first).

| Field | Example                                                       |
| ----- | ------------------------------------------------------------- |
| sort  | sort=+entity.name (sort by ascending name)                    |
| sort  | sort=-metadata.create_time (sort by descending creation time) |

Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name, use: sort=-metadata.create_time,+entity.name.
Lists the metadata, entity and lock information for DataStage subflows that are contained in the specified project.
Use the following parameters to filter the results:

| Field              | Match type  | Example                         |
| ------------------ | ----------- | ------------------------------- |
| entity.name        | Equals      | entity.name=MyDataStageSubFlow  |
| entity.name        | Starts with | entity.name=starts:MyData       |
| entity.description | Equals      | entity.description=movement     |
| entity.description | Starts with | entity.description=starts:data  |

To sort the results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time (that is, the most recently created data flows are returned first).

| Field | Example                                                       |
| ----- | ------------------------------------------------------------- |
| sort  | sort=+entity.name (sort by ascending name)                    |
| sort  | sort=-metadata.create_time (sort by descending creation time) |

Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name, use: sort=-metadata.create_time,+entity.name.
Lists the metadata, entity and lock information for DataStage subflows that are contained in the specified project.
Use the following parameters to filter the results:

| Field              | Match type  | Example                         |
| ------------------ | ----------- | ------------------------------- |
| entity.name        | Equals      | entity.name=MyDataStageSubFlow  |
| entity.name        | Starts with | entity.name=starts:MyData       |
| entity.description | Equals      | entity.description=movement     |
| entity.description | Starts with | entity.description=starts:data  |

To sort the results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time (that is, the most recently created data flows are returned first).

| Field | Example                                                       |
| ----- | ------------------------------------------------------------- |
| sort  | sort=+entity.name (sort by ascending name)                    |
| sort  | sort=-metadata.create_time (sort by descending creation time) |

Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name, use: sort=-metadata.create_time,+entity.name.
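Filter and sort keys are ordinary query parameters. The following sketch assembles such a query string with only the Python standard library (the project ID is the example value used throughout this reference; note that + and , in the sort value are percent-encoded so the + is not misread as a space):

```python
from urllib.parse import urlencode

# Example query: names starting with "MyData", newest first, then ascending name.
params = {
    "project_id": "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23",
    "entity.name": "starts:MyData",                # "starts with" filter
    "sort": "-metadata.create_time,+entity.name",  # comma-delimited sort keys
}
query = urlencode(params)  # percent-encodes ":", "+" and "," in the values
url = "{base_url}/v3/data_intg_flows/subflows?" + query
print(url)
```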
GET /v3/data_intg_flows/subflows
ServiceCall<DataFlowPagedCollection> listDatastageSubflows(ListDatastageSubflowsOptions listDatastageSubflowsOptions)
listDatastageSubflows(params)
list_datastage_subflows(self,
*,
catalog_id: str = None,
project_id: str = None,
sort: str = None,
start: str = None,
limit: int = None,
entity_name: str = None,
entity_description: str = None,
**kwargs
) -> DetailedResponse
Request
Use the ListDatastageSubflowsOptions.Builder
to create a ListDatastageSubflowsOptions
object that contains the parameter values for the listDatastageSubflows
method.
Query Parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
The page token indicating where to start paging from.
The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used.
Possible values: value ≥ 1
Example:
100
Filter results based on the specified name.
Example:
MyDataStageSubFlow
Filter results based on the specified description.
The listDatastageSubflows options.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
The page token indicating where to start paging from.
The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used.
Possible values: value ≥ 1
Example: 100
Filter results based on the specified name. Example: MyDataStageSubFlow
Filter results based on the specified description.
parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
The page token indicating where to start paging from.
The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used.
Possible values: value ≥ 1
Example: 100
Filter results based on the specified name. Example: MyDataStageSubFlow
Filter results based on the specified description.
parameters
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.
The page token indicating where to start paging from.
The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used.
Possible values: value ≥ 1
Example: 100
Filter results based on the specified name. Example: MyDataStageSubFlow
Filter results based on the specified description.
curl -X GET --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" "{base_url}/v3/data_intg_flows/subflows?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23&limit=100"
ListDatastageSubflowsOptions listDatastageSubflowsOptions = new ListDatastageSubflowsOptions.Builder() .projectId(projectID) .limit(Long.valueOf("100")) .build(); Response<DataFlowPagedCollection> response = datastageService.listDatastageSubflows(listDatastageSubflowsOptions).execute(); DataFlowPagedCollection dataFlowPagedCollection = response.getResult(); System.out.println(dataFlowPagedCollection);
const params = { projectId: projectID, sort: 'name', limit: 100, }; const res = await datastageService.listDatastageSubflows(params);
data_flow_paged_collection = datastage_service.list_datastage_subflows( project_id=config['PROJECT_ID'], limit=100 ).get_result() print(json.dumps(data_flow_paged_collection, indent=2))
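When more results exist than limit allows, the paged collection's next object carries an href whose start query parameter is the token for the following page. A plain-Python sketch of extracting that token (the token value below is an illustrative placeholder, not a real token):

```python
from urllib.parse import urlparse, parse_qs

def next_start_token(paged_collection: dict):
    """Return the `start` page token from a paged collection's next.href, or None."""
    href = paged_collection.get("next", {}).get("href")
    if not href:
        return None  # no further pages
    return parse_qs(urlparse(href).query).get("start", [None])[0]

page = {"next": {"href": "{url}/v3/data_intg_flows/subflows?project_id=abc&limit=100&start=tok123"}}
print(next_start_token(page))  # tok123
```

With the SDK, the extracted token would then be passed as the start parameter of the next list call.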
Response
A page from a collection of DataStage flows.
A page from a collection of DataStage flows.
URI of a resource.
URI of a resource.
The number of data flows requested to be returned.
URI of a resource.
URI of a resource.
The total number of DataStage flows available.
A page from a collection of DataStage flows.
A page from a collection of DataStage flows.
Metadata information for a DataStage flow.
The underlying DataStage flow definition.
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
Entity information for a DataStage lock object.
DataStage flow ID that is locked.
Requester of the lock.
entity
Metadata information for a DataStage lock object.
Lock status.
metadata
lock
The name of the DataStage flow.
The rules of visibility for an asset.
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
rov
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
entity
System metadata about an asset.
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset. Either catalog_id or project_id is required.
A unique string that identifies the asset.
Size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
usage
metadata
dataFlows
URI of a resource.
URI of a resource.
first
URI of a resource.
URI of a resource.
last
The number of data flows requested to be returned.
URI of a resource.
URI of a resource.
next
URI of a resource.
URI of a resource.
prev
The total number of DataStage flows available.
A page from a collection of DataStage flows.
A page from a collection of DataStage flows.
Metadata information for a DataStage flow.
The underlying DataStage flow definition.
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
Entity information for a DataStage lock object.
DataStage flow ID that is locked.
Requester of the lock.
entity
Metadata information for a DataStage lock object.
Lock status.
metadata
lock
The name of the DataStage flow.
The rules of visibility for an asset.
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
rov
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
entity
System metadata about an asset.
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset. Either catalog_id or project_id is required.
A unique string that identifies the asset.
Size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
usage
metadata
data_flows
URI of a resource.
URI of a resource.
first
URI of a resource.
URI of a resource.
last
The number of data flows requested to be returned.
URI of a resource.
URI of a resource.
next
URI of a resource.
URI of a resource.
prev
The total number of DataStage flows available.
A page from a collection of DataStage flows.
A page from a collection of DataStage flows.
Metadata information for a DataStage flow.
The underlying DataStage flow definition.
Asset type object.
Asset type object.
The description of the DataStage flow.
Lock information for a DataStage flow asset.
Entity information for a DataStage lock object.
DataStage flow ID that is locked.
Requester of the lock.
entity
Metadata information for a DataStage lock object.
Lock status.
metadata
lock
The name of the DataStage flow.
The rules of visibility for an asset.
An array of members belonging to AssetEntityROV.
The values for mode are 0 (public, searchable and viewable by all), 8 (private, searchable by all, but not viewable unless view permission given) or 16 (hidden, only searchable by users with view permissions).
rov
A read-only field that can be used to distinguish between different types of data flow based on the service that created it.
entity
System metadata about an asset.
The ID of the asset.
The type of the asset.
The ID of the catalog which contains the asset. Either catalog_id or project_id is required.
The timestamp when the asset was created (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that created the asset.
The description of the asset.
URL that can be used to get the asset.
Name of the asset.
Origin of the asset.
The ID of the project which contains the asset. Either catalog_id or project_id is required.
A unique string that identifies the asset.
Size of the asset.
Custom data to be associated with a given object.
A list of tags that can be used to identify different types of data flow.
Metadata usage information about an asset.
Number of times this asset has been accessed.
The timestamp when the asset was last accessed (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last accessed the asset.
The timestamp when the asset was last modified (in format YYYY-MM-DDTHH:mm:ssZ or YYYY-MM-DDTHH:mm:ss.sssZ, matching the date-time format as specified by RFC 3339).
The IAM ID of the user that last modified the asset.
usage
metadata
data_flows
URI of a resource.
URI of a resource.
first
URI of a resource.
URI of a resource.
last
The number of data flows requested to be returned.
URI of a resource.
URI of a resource.
next
URI of a resource.
URI of a resource.
prev
The total number of DataStage flows available.
Status Code
The requested operation completed successfully.
You are not authorized to access the service. See response for more information.
You are not permitted to perform this action. See response for more information.
An error occurred. See response for more information.
{ "data_flows": [ { "entity": { "data_intg_subflow": { "mime_type": "application/json", "dataset": false } }, "metadata": { "asset_id": "{asset_id}", "asset_type": "data_intg_subflow", "create_time": "2021-04-03 15:32:55+00:00", "creator_id": "IBMid-xxxxxxxxx", "description": " ", "href": "{url}/data_intg/v3/data_intg_flows/subflows/{asset_id}?project_id={project_id}", "name": "{job_name}", "project_id": "{project_id}", "resource_key": "{project_id}/data_intg_subflow/{job_name}", "size": 5780, "usage": { "access_count": 0, "last_access_time": "2021-04-03 15:33:01.320000+00:00", "last_accessor_id": "IBMid-xxxxxxxxx", "last_modification_time": "2021-04-03 15:33:01.320000+00:00", "last_modifier_id": "IBMid-xxxxxxxxx" } } } ], "first": { "href": "{url}/data_intg/v3/data_intg_flows/subflows?project_id={project_id}&limit=2" }, "next": { "href": "{url}/data_intg/v3/data_intg_flows/subflows?project_id={project_id}&limit=2&start=g1AAAADOeJzLYWBgYMpgTmHQSklKzi9KdUhJMjTUS8rVTU7WLS3WLc4vLcnQNbLQS87JL01JzCvRy0styQHpyWMBkgwNQOr____9WWCxXCAhYmRgZKhrYKJrYBxiaGplbGRlahqVaJCFZocB8XYcgNhxHrcdhlamhlGJ-llZAD4lOMI" }, "total_count": 1 }
Create DataStage subflow
Creates a DataStage subflow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.
Creates a DataStage subflow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.
Creates a DataStage subflow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.
Creates a DataStage subflow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.
POST /v3/data_intg_flows/subflows
ServiceCall<DataIntgFlow> createDatastageSubflows(CreateDatastageSubflowsOptions createDatastageSubflowsOptions)
createDatastageSubflows(params)
create_datastage_subflows(self,
data_intg_subflow_name: str,
*,
pipeline_flows: 'PipelineJson' = None,
catalog_id: str = None,
project_id: str = None,
asset_category: str = None,
**kwargs
) -> DetailedResponse
Request
Use the CreateDatastageSubflowsOptions.Builder
to create a CreateDatastageSubflowsOptions
object that contains the parameter values for the createDatastageSubflows
method.
Query Parameters
The DataStage subflow name.
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.
Allowable values: [system, user]
Pipeline JSON to be attached.
Pipeline flow to be stored.
The createDatastageSubflows options.
The DataStage subflow name.
Pipeline flow to be stored.
Object containing app-specific data.
The document type.
Array of parameter set references.
Document identifier, GUID recommended.
Refers to the JSON schema used to validate documents of this type.
Parameters for the flow document.
Object containing app-specific data.
A brief description of the DataStage flow.
Unique identifier.
Name of the pipeline.
Array of pipeline nodes.
Reference to the runtime type.
pipelines
Reference to the primary (main) pipeline flow within the document.
Runtime information for pipeline flow.
Array of data record schemas used in the pipeline.
Pipeline flow version.
pipelineFlows
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.
Allowable values: [system, user]
parameters
The DataStage subflow name.
Pipeline flow to be stored.
Object containing app-specific data.
The document type.
Array of parameter set references.
Document identifier, GUID recommended.
Refers to the JSON schema used to validate documents of this type.
Parameters for the flow document.
Object containing app-specific data.
A brief description of the DataStage flow.
Unique identifier.
Name of the pipeline.
Array of pipeline nodes.
Reference to the runtime type.
pipelines
Reference to the primary (main) pipeline flow within the document.
Runtime information for pipeline flow.
Array of data record schemas used in the pipeline.
Pipeline flow version.
pipelineFlows
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.
Allowable values: [system, user]
parameters
The DataStage subflow name.
Pipeline flow to be stored.
Object containing app-specific data.
The document type.
Array of parameter set references.
Document identifier, GUID recommended.
Refers to the JSON schema used to validate documents of this type.
Parameters for the flow document.
Object containing app-specific data.
A brief description of the DataStage flow.
Unique identifier.
Name of the pipeline.
Array of pipeline nodes.
Reference to the runtime type.
pipelines
Reference to the primary (main) pipeline flow within the document.
Runtime information for pipeline flow.
Array of data record schemas used in the pipeline.
Pipeline flow version.
pipeline_flows
The ID of the catalog to use. Either catalog_id or project_id is required.
The ID of the project to use. Either catalog_id or project_id is required.
Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23
The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.
Allowable values: [system, user]
curl -X POST --location --header "Authorization: Bearer {iam_token}" --header "Accept: application/json;charset=utf-8" --header "Content-Type: application/json;charset=utf-8" --data '{}' "{base_url}/v3/data_intg_flows/subflows?data_intg_subflow_name={data_intg_subflow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
PipelineJson exampleSubFlow = PipelineFlowHelper.buildPipelineFlow(subFlowJson); CreateDatastageSubflowsOptions createDatastageSubflowsOptions = new CreateDatastageSubflowsOptions.Builder() .dataIntgSubflowName(subflowName) .pipelineFlows(exampleSubFlow) .projectId(projectID) .build(); Response<DataIntgFlow> response = datastageService.createDatastageSubflows(createDatastageSubflowsOptions).execute(); DataIntgFlow dataIntgFlow = response.getResult(); System.out.println(dataIntgFlow);
const params = { dataIntgSubflowName: dataIntgSubFlowName, pipelineFlows: pipelineJsonFromFile, projectId: projectID, assetCategory: 'system', }; const res = await datastageService.createDatastageSubflows(params);
data_intg_flow = datastage_service.create_datastage_subflows( data_intg_subflow_name='testSubflow1', pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleSubflow.json'), project_id=config['PROJECT_ID'] ).get_result() print(json.dumps(data_intg_flow, indent=2))
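The pipeline_flows argument expects a pipeline-flow document with the fields listed above. The skeleton below is only a rough sketch of that shape (the doc_type, version, and runtime values are illustrative assumptions; in practice the document is usually exported from the DataStage design canvas rather than written by hand):

```python
import json

# Minimal pipeline-flow skeleton; stage nodes and record schemas omitted.
subflow_doc = {
    "doc_type": "pipeline",          # document type (assumed value)
    "version": "3.0",                # pipeline flow version (assumed value)
    "id": "00000000-0000-0000-0000-000000000000",  # GUID recommended
    "primary_pipeline": "main",      # must reference a pipeline id below
    "pipelines": [{
        "id": "main",
        "name": "exampleSubflow",
        "runtime_ref": "pxOsh",      # must reference a runtime id below
        "nodes": [],                 # stage/link definitions go here
    }],
    "runtimes": [{"id": "pxOsh", "name": "pxOsh"}],
    "schemas": [],                   # data record schemas used in the pipeline
    "app_data": {},                  # app-specific data
}
print(len(json.dumps(subflow_doc)) > 0)
```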