IBM Cloud API Docs

Introduction

You can use a collection of IBM DataStage REST APIs to process, compile, and run flows. DataStage flows are design-time assets that contain data integration logic in JSON-based schemas.

Process flows
Use the processing API to manipulate data that you have read from a data source before writing it to a data target.

Compile flows
Use the compile API to compile flows. All flows must be compiled before you run them.

Run flows
Use the run API to run flows. When you run a flow, the extraction, loading, and transforming tasks that were built into the flow design are executed.

You can use the DataStage REST APIs for both DataStage in Cloud Pak for Data as a service and DataStage in Cloud Pak for Data.

For more information on the DataStage service, see the DataStage documentation.

The code examples on this tab use the client library that is provided for Java.

Maven

<dependency>
<groupId>com.ibm.cloud</groupId>
<artifactId>datastage</artifactId>
<version>0.0.1</version>
</dependency>

Gradle

compile 'com.ibm.cloud:datastage:0.0.1'


The code examples on this tab use the client library that is provided for Node.js.

Installation

npm install datastage


The code examples on this tab use the client library that is provided for Python.

Installation

pip install --upgrade "datastage>=0.0.1"


Authentication

Before you can call an IBM DataStage API, you must first create an IAM bearer token. Tokens support authenticated requests without embedding service credentials in every call. Each token is valid for one hour; after a token expires, you must create a new one to continue using the API. The recommended way to retrieve a token programmatically is to create an API key for your IBM Cloud identity and then use the IAM token API to exchange that key for a token. For more information about authentication, see the IBM Cloud IAM documentation.

Replace {apikey} and {url} with your service credentials.

curl -X {request_method} -u "apikey:{apikey}" "{url}/v3/{method}"
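
For example, a minimal Python sketch of the same token exchange (it assumes the third-party requests package; the endpoint and grant type are the documented IAM token API values):

import requests

IAM_URL = 'https://iam.cloud.ibm.com/identity/token'

def get_iam_token(api_key: str) -> str:
    # Exchange an IBM Cloud API key for a short-lived IAM bearer token.
    response = requests.post(
        IAM_URL,
        headers={'Content-Type': 'application/x-www-form-urlencoded'},
        data={
            'grant_type': 'urn:ibm:params:oauth:grant-type:apikey',
            'apikey': api_key,
        },
    )
    response.raise_for_status()
    return response.json()['access_token']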

Setting client options through external configuration

Example environment variables, where <API_KEY> is your IAM API key:

DATASTAGE_AUTH_TYPE=iam
DATASTAGE_URL=https://api.dataplatform.cloud.ibm.com/data_intg
DATASTAGE_APIKEY=<API_KEY>
DATASTAGE_AUTH_URL=https://iam.cloud.ibm.com/identity/token

Example of constructing the service client

import com.ibm.cloud.datastage.v3.Datastage;
Datastage service = Datastage.newInstance();

Setting client options through external configuration

Example environment variables, where <API_KEY> is your IAM API key:

DATASTAGE_AUTH_TYPE=iam
DATASTAGE_URL=https://api.dataplatform.cloud.ibm.com/data_intg
DATASTAGE_APIKEY=<API_KEY>
DATASTAGE_AUTH_URL=https://iam.cloud.ibm.com/identity/token

Example of constructing the service client

const DatastageV3 = require('datastage/datastage/v3');
const datastageService = DatastageV3.newInstance({});

Setting client options through external configuration

To authenticate when using this SDK, an external credentials file (for example, credentials.env) is required. In this file, define the four fields that are required to authenticate your SDK use against IAM.

Example environment variables, where <API_KEY> is your IAM API key

DATASTAGE_AUTH_TYPE=iam
DATASTAGE_URL=https://api.dataplatform.cloud.ibm.com/data_intg
DATASTAGE_APIKEY=<API_KEY>
DATASTAGE_AUTH_URL=https://iam.cloud.ibm.com/identity/token

Example of constructing the service client

import os
from datastage.datastage_v3 import DatastageV3

# define path to external credentials file
config_file = 'credentials.env'
# define a chosen service name
custom_service_name = 'DATASTAGE'

datastage_service = None

if os.path.exists(config_file):
    # set environment variable to point towards credentials file path
    os.environ['IBM_CREDENTIALS_FILE'] = config_file

    # create datastage instance using custom service name
    datastage_service = DatastageV3.new_instance(custom_service_name)
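
Alternatively, the client can be constructed programmatically. A minimal sketch, assuming the ibm-cloud-sdk-core package that the SDK builds on (replace <API_KEY> with your IAM API key):

from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from datastage.datastage_v3 import DatastageV3

# authenticate directly with an IAM API key instead of a credentials file
authenticator = IAMAuthenticator('<API_KEY>')
datastage_service = DatastageV3(authenticator=authenticator)
datastage_service.set_service_url('https://api.dataplatform.cloud.ibm.com/data_intg')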

Endpoint URLs

Identify the base URL for your service instance.

IBM Cloud URLs

The base URLs come from the service instance. To find the URL, view the service credentials by clicking the name of the service in the Resource list. Use the value of the URL. Add the method to form the complete API endpoint for your request.

https://api.dataplatform.cloud.ibm.com/data_intg

Example API request

curl --request GET --header "Content-Type: application/json" --header "Accept: application/json" --header "Authorization: Bearer <IAM token>" --url "https://api.dataplatform.cloud.ibm.com/data_intg/v3/data_intg_flows?project_id=<Project ID>&limit=10"

Replace <IAM token> and <Project ID> in this example with the values for your particular API call.
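
The same request expressed as a Python sketch, assuming the third-party requests package and a token obtained as described in the Authentication section:

import requests

BASE_URL = 'https://api.dataplatform.cloud.ibm.com/data_intg'

# list up to 10 flows in the project
response = requests.get(
    f'{BASE_URL}/v3/data_intg_flows',
    headers={
        'Accept': 'application/json',
        'Authorization': 'Bearer <IAM token>',
    },
    params={'project_id': '<Project ID>', 'limit': 10},
)
print(response.json())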

Auditing

You can monitor API activity within your account by using the Activity Tracker service. Whenever an API method is called, an event is generated that you can then track and audit from within Activity Tracker. The specific event type is listed for each individual method.

Error handling

DataStage uses standard HTTP response codes to indicate whether a method completed successfully. A response code in the 2xx range indicates success. A code in the 4xx range indicates a failure with the request, and a code in the 5xx range usually indicates an internal system error that the user cannot resolve. Response codes are listed with each method.

ErrorResponse

| Name | Type | Description |
| ---------------- | ------- | ------------------------------------ |
| error | string | Description of the problem. |
| code | integer | HTTP response code. |
| code_description | string | Response message. |
| warnings | string | Warnings associated with the error. |
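
When you call the API through one of the SDKs, these fields surface through the SDK's exception type. A minimal Python sketch, assuming the ApiException class from ibm-cloud-sdk-core, which the SDK clients raise on non-2xx responses:

from ibm_cloud_sdk_core import ApiException

try:
    response = datastage_service.list_datastage_flows(project_id='<project_id>')
except ApiException as e:
    # code and message correspond to the HTTP response code and error description
    print('Method failed with status code ' + str(e.code) + ': ' + e.message)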

Methods

Delete DataStage flows

Deletes the specified data flows in a project or catalog (either project_id or catalog_id must be set).

If the deletion of the data flows and their runs takes some time to finish, a 202 response is returned and the deletion continues asynchronously.

All the data flow runs associated with the data flows are also deleted. If a data flow is still running, it is not deleted unless the force parameter is set to true. In that case the call returns immediately with a 202 response, and the data flows are deleted after their runs are stopped.


DELETE /v3/data_intg_flows
ServiceCall<Void> deleteDatastageFlows(DeleteDatastageFlowsOptions deleteDatastageFlowsOptions)
deleteDatastageFlows(params)
delete_datastage_flows(
        self,
        id: List[str],
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        force: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the DeleteDatastageFlowsOptions.Builder to create a DeleteDatastageFlowsOptions object that contains the parameter values for the deleteDatastageFlows method.

Query Parameters

  • id (required): The list of DataStage flow IDs to delete.

  • catalog_id: The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.

  • project_id: The ID of the project to use. Either catalog_id, project_id, or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • space_id: The ID of the space to use. Either catalog_id, project_id, or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • force: Whether to stop all running data flows. Running DataStage flows must be stopped before the DataStage flows can be deleted; see the sketch after this list.
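
For example, a hedged Python sketch that force-deletes flows that might still be running and checks for the asynchronous 202 response (the flow and project IDs are placeholders):

response = datastage_service.delete_datastage_flows(
    id=['<flow_id_1>', '<flow_id_2>'],
    project_id='<project_id>',
    force=True,
)

# 202 means the runs are still being stopped and the flows will be
# deleted asynchronously; any other 2xx code means the deletion
# already completed.
if response.get_status_code() == 202:
    print('Deletion is in progress.')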


  • curl -X DELETE --location --header "Authorization: Bearer {iam_token}"   "{base_url}/v3/data_intg_flows?id=[]&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
  • String[] ids = new String[] {flowID, cloneFlowID};
    DeleteDatastageFlowsOptions deleteDatastageFlowsOptions = new DeleteDatastageFlowsOptions.Builder()
      .id(Arrays.asList(ids))
      .projectId(projectID)
      .build();
    
    datastageService.deleteDatastageFlows(deleteDatastageFlowsOptions).execute();
  •      const params = {
           id: [flowID, cloneFlowID],
           projectId: projectID,
         };
         const res = await datastageService.deleteDatastageFlows(params);
  • response = datastage_service.delete_datastage_flows(
      id=[createdFlowId],
      project_id=config['PROJECT_ID']
    )

Response

Status Code

  • The requested operation is in progress.

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Get metadata for DataStage flows

Lists the metadata and entity for DataStage flows that are contained in the specified project.

Use the following parameters to filter the results:

| Field | Match type | Example |
| ------------------ | ----------- | ------------------------------- |
| entity.name | Equals | entity.name=MyDataStageFlow |
| entity.name | Starts with | entity.name=starts:MyData |
| entity.description | Equals | entity.description=movement |
| entity.description | Starts with | entity.description=starts:data |

To sort the results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time (i.e. returning the most recently created data flows first).

| Field | Example |
| ----- | -------------------------------------------------------------- |
| sort | sort=+entity.name (sort by ascending name) |
| sort | sort=-metadata.create_time (sort by descending creation time) |

Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name, use: sort=-metadata.create_time,+entity.name.
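
As an illustration, a Python sketch that combines a name filter with the compound sort key above (parameter names follow the SDK signature shown with this method; values are placeholders):

# flows whose name starts with "MyData", newest first, then by name
data_flows = datastage_service.list_datastage_flows(
    project_id='<project_id>',
    entity_name='starts:MyData',
    sort='-metadata.create_time,+entity.name',
).get_result()

for flow in data_flows.get('data_flows', []):
    print(flow['metadata']['name'])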


GET /v3/data_intg_flows
ServiceCall<DataFlowPagedCollection> listDatastageFlows(ListDatastageFlowsOptions listDatastageFlowsOptions)
listDatastageFlows(params)
list_datastage_flows(
        self,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        sort: Optional[str] = None,
        start: Optional[str] = None,
        limit: Optional[int] = None,
        entity_name: Optional[str] = None,
        entity_description: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the ListDatastageFlowsOptions.Builder to create a ListDatastageFlowsOptions object that contains the parameter values for the listDatastageFlows method.

Query Parameters

  • catalog_id: The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.

  • project_id: The ID of the project to use. Either catalog_id, project_id, or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • space_id: The ID of the space to use. Either catalog_id, project_id, or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • sort: The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.

  • start: The page token indicating where to start paging from; see the paging sketch after this list.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e2

  • limit: The limit of the number of items to return for each page, for example limit=50. If not specified, a default of 100 is used. The maximum value of limit is 200.

    Example: 100

  • entity.name: Filter results based on the specified name.

  • entity.description: Filter results based on the specified description.
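
A hedged Python sketch that pages through all flows by following the start token in each response's next link (it assumes the next.href format shown in the example response for this method; urllib is from the standard library):

from urllib.parse import parse_qs, urlparse

start = None
while True:
    page = datastage_service.list_datastage_flows(
        project_id='<project_id>',
        limit=100,
        start=start,
    ).get_result()
    for flow in page.get('data_flows', []):
        print(flow['metadata']['name'])
    next_link = page.get('next')
    if not next_link:
        break  # no further pages
    # pull the start token for the next page out of the href
    start = parse_qs(urlparse(next_link['href']).query)['start'][0]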


  • curl -X GET --location --header "Authorization: Bearer {iam_token}"   --header "Accept: application/json;charset=utf-8"   "{base_url}/v3/data_intg_flows?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23&limit=100"
  • ListDatastageFlowsOptions listDatastageFlowsOptions = new ListDatastageFlowsOptions.Builder()
      .projectId(projectID)
      .limit(Long.valueOf("100"))
      .build();
    
    Response<DataFlowPagedCollection> response = datastageService.listDatastageFlows(listDatastageFlowsOptions).execute();
    DataFlowPagedCollection dataFlowPagedCollection = response.getResult();
    
    System.out.println(dataFlowPagedCollection);
  •      const params = {
           projectId: projectID,
           sort: 'name',
           limit: 100,
         };
         const res = await datastageService.listDatastageFlows(params);
  • data_flow_paged_collection = datastage_service.list_datastage_flows(
      project_id=config['PROJECT_ID'],
      limit=100
    ).get_result()
    
    print(json.dumps(data_flow_paged_collection, indent=2))

Response

A page from a collection of DataStage flows.


Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

Example responses
  • {
      "data_flows": [
        {
          "entity": {
            "description": " ",
            "name": "{job_name}",
            "rov": {
              "mode": 0
            }
          },
          "metadata": {
            "asset_id": "{asset_id}",
            "asset_type": "data_intg_flow",
            "create_time": "2021-04-03 15:32:55+00:00",
            "creator_id": "IBMid-xxxxxxxxx",
            "description": " ",
            "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?project_id={project_id}",
            "name": "{job_name}",
            "project_id": "{project_id}",
            "resource_key": "{project_id}/data_intg_flow/{job_name}",
            "size": 5780,
            "usage": {
              "access_count": 0,
              "last_access_time": "2021-04-03 15:33:01.320000+00:00",
              "last_accessor_id": "IBMid-xxxxxxxxx",
              "last_modification_time": "2021-04-03 15:33:01.320000+00:00",
              "last_modifier_id": "IBMid-xxxxxxxxx"
            }
          }
        }
      ],
      "first": {
        "href": "{url}/data_intg/v3/data_intg_flows?project_id={project_id}&limit=2"
      },
      "next": {
        "href": "{url}/data_intg/v3/data_intg_flows?project_id={project_id}&limit=2&start=g1AAAADOeJzLYWBgYMpgTmHQSklKzi9KdUhJMjTUS8rVTU7WLS3WLc4vLcnQNbLQS87JL01JzCvRy0styQHpyWMBkgwNQOr____9WWCxXCAhYmRgZKhrYKJrYBxiaGplbGRlahqVaJCFZocB8XYcgNhxHrcdhlamhlGJ-llZAD4lOMI"
      },
      "total_count": 135
    }

Create DataStage flow

Creates a DataStage flow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.


POST /v3/data_intg_flows
ServiceCall<DataIntgFlow> createDatastageFlows(CreateDatastageFlowsOptions createDatastageFlowsOptions)
createDatastageFlows(params)
create_datastage_flows(
        self,
        data_intg_flow_name: str,
        *,
        pipeline_flows: Optional['PipelineJson'] = None,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CreateDatastageFlowsOptions.Builder to create a CreateDatastageFlowsOptions object that contains the parameter values for the createDatastageFlows method.

Query Parameters

  • data_intg_flow_name (required): The data flow name.

  • catalog_id: The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.

  • project_id: The ID of the project to use. Either catalog_id, project_id, or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • space_id: The ID of the space to use. Either catalog_id, project_id, or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • directory_asset_id: The directory asset ID.

The pipeline JSON to be attached (request body).
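
The pipeline document follows the pipeline-flow v3 schema; the Get DataStage flow example response later in this document shows a complete instance. As a rough sketch of its top-level shape only (a hypothetical skeleton, not a runnable flow; a real pipeline also needs nodes, links, and schemas):

pipeline_flows = {
    'doc_type': 'pipeline',
    'version': '3.0',
    'json_schema': '{url}/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json',
    'primary_pipeline': '<pipeline_id>',
    'pipelines': [
        {
            'id': '<pipeline_id>',
            'nodes': [],          # binding and operator nodes go here
            'runtime_ref': 'pxOsh',
        }
    ],
    'schemas': [],                # column definitions referenced by node ports
}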


  • curl -X POST --location --header "Authorization: Bearer {iam_token}"   --header "Accept: application/json;charset=utf-8"   --header "Content-Type: application/json;charset=utf-8"   --data '{}'   "{base_url}/v3/data_intg_flows?data_intg_flow_name={data_intg_flow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
  • PipelineJson exampleFlow = PipelineFlowHelper.buildPipelineFlow(flowJson);
    CreateDatastageFlowsOptions createDatastageFlowsOptions = new CreateDatastageFlowsOptions.Builder()
      .dataIntgFlowName(flowName)
      .pipelineFlows(exampleFlow)
      .projectId(projectID)
      .build();
    
    Response<DataIntgFlow> response = datastageService.createDatastageFlows(createDatastageFlowsOptions).execute();
    DataIntgFlow dataIntgFlow = response.getResult();
    System.out.println(dataIntgFlow);
  •      const pipelineJsonFromFile = JSON.parse(fs.readFileSync('testInput/rowgen_peek.json', 'utf-8'));
         const params = {
           dataIntgFlowName,
           pipelineFlows: pipelineJsonFromFile,
           projectId: projectID,
           assetCategory: 'system',
         };
         const res = await datastageService.createDatastageFlows(params);
  • data_intg_flow = datastage_service.create_datastage_flows(
      data_intg_flow_name='testFlowJob1',
      pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleFlow.json'),
      project_id=config['PROJECT_ID']
    ).get_result()
    
    print(json.dumps(data_intg_flow, indent=2))

Response

A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).


Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

Example responses
  • {
      "entity": {
        "data_intg_flow": {
          "dataset": false,
          "mime_type": "application/json"
        }
      },
      "metadata": {
        "asset_id": "{asset_id}",
        "asset_type": "data_intg_flow",
        "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}",
        "name": "{job_name}",
        "origin_country": "US",
        "resource_key": "{project_id}/data_intg_flow/{job_name}"
      }
    }

Get DataStage flow

Gets the DataStage flow that is contained in the specified project. Attachments, metadata, and a limited number of attributes from the entity of the DataStage flow are returned.


GET /v3/data_intg_flows/{data_intg_flow_id}
ServiceCall<DataIntgFlowJson> getDatastageFlows(GetDatastageFlowsOptions getDatastageFlowsOptions)
getDatastageFlows(params)
get_datastage_flows(
        self,
        data_intg_flow_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GetDatastageFlowsOptions.Builder to create a GetDatastageFlowsOptions object that contains the parameter values for the getDatastageFlows method.

Path Parameters

  • data_intg_flow_id (required): The DataStage flow ID to use.

Query Parameters

  • catalog_id: The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.

  • project_id: The ID of the project to use. Either catalog_id, project_id, or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • space_id: The ID of the space to use. Either catalog_id, project_id, or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • curl -X GET --location --header "Authorization: Bearer {iam_token}"   --header "Accept: application/json;charset=utf-8"   "{base_url}/v3/data_intg_flows/{data_intg_flow_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
  • GetDatastageFlowsOptions getDatastageFlowsOptions = new GetDatastageFlowsOptions.Builder()
      .dataIntgFlowId(flowID)
      .projectId(projectID)
      .build();
    
    Response<DataIntgFlowJson> response = datastageService.getDatastageFlows(getDatastageFlowsOptions).execute();
    DataIntgFlowJson dataIntgFlowJson = response.getResult();
    
    System.out.println(dataIntgFlowJson);
  •      const params = {
           dataIntgFlowId: assetID,
           projectId: projectID,
         };
         const res = await datastageService.getDatastageFlows(params);
  • data_intg_flow_json = datastage_service.get_datastage_flows(
      data_intg_flow_id=createdFlowId,
      project_id=config['PROJECT_ID']
    ).get_result()
    
    print(json.dumps(data_intg_flow_json, indent=2))

Response

A pipeline JSON containing operations to apply to source(s).


Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • Unexpected error.

Example responses
  • {
      "attachments": {
        "app_data": {
          "datastage": {
            "external_parameters": []
          }
        },
        "doc_type": "pipeline",
        "id": "98cc1fa0-0fd8-4d55-9b27-d477096b4b37",
        "json_schema": "{url}/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json",
        "pipelines": [
          {
            "app_data": {
              "datastage": {
                "runtime_column_propagation": "false"
              },
              "ui_data": {
                "comments": []
              }
            },
            "id": "287b2b30-95ff-4cc8-b18f-92e23c464134",
            "nodes": [
              {
                "app_data": {
                  "datastage": {
                    "outputs_order": "46e18367-1820-4fe8-8c7c-d8badbc76aa3"
                  },
                  "ui_data": {
                    "image": "../graphics/palette/PxRowGenerator.svg",
                    "label": "RowGen_1",
                    "x_pos": 239,
                    "y_pos": 236
                  }
                },
                "id": "77e6d535-8312-4692-8850-c129dcf921ed",
                "op": "PxRowGenerator",
                "outputs": [
                  {
                    "app_data": {
                      "datastage": {
                        "is_source_of_link": "55b884a7-9cfb-4e02-802b-82444ee95bb5"
                      },
                      "ui_data": {
                        "label": "outPort"
                      }
                    },
                    "id": "46e18367-1820-4fe8-8c7c-d8badbc76aa3",
                    "parameters": {
                      "buf_free_run": 50,
                      "disk_write_inc": 1048576,
                      "max_mem_buf_size": 3145728,
                      "queue_upper_size": 0,
                      "records": 10
                    },
                    "schema_ref": "07fed318-4370-4c95-bbbc-16d4a91421bb"
                  }
                ],
                "parameters": {
                  "input_count": 0,
                  "output_count": 1
                },
                "type": "binding"
              },
              {
                "app_data": {
                  "datastage": {
                    "inputs_order": "9e842525-7bbf-4a42-ae95-49ae325e0c87"
                  },
                  "ui_data": {
                    "image": "../graphics/palette/informix.svg",
                    "label": "informixTgt",
                    "x_pos": 690,
                    "y_pos": 229
                  }
                },
                "connection": {
                  "project_ref": "{project_id}",
                  "properties": {
                    "create_statement": "CREATE TABLE custid(customer_num int)",
                    "table_action": "append",
                    "table_name": "custid",
                    "write_mode": "insert"
                  },
                  "ref": "85193161-aa63-4cc5-80e7-7bfcdd59c438"
                },
                "id": "8b4933d9-32c0-4c40-9c47-d8791ab12baf",
                "inputs": [
                  {
                    "app_data": {
                      "datastage": {},
                      "ui_data": {
                        "label": "inPort"
                      }
                    },
                    "id": "9e842525-7bbf-4a42-ae95-49ae325e0c87",
                    "links": [
                      {
                        "app_data": {
                          "datastage": {},
                          "ui_data": {
                            "decorations": [
                              {
                                "class_name": "",
                                "hotspot": false,
                                "id": "Link_3",
                                "label": "Link_3",
                                "outline": true,
                                "path": "",
                                "position": "middle"
                              }
                            ]
                          }
                        },
                        "id": "55b884a7-9cfb-4e02-802b-82444ee95bb5",
                        "link_name": "Link_3",
                        "node_id_ref": "77e6d535-8312-4692-8850-c129dcf921ed",
                        "port_id_ref": "46e18367-1820-4fe8-8c7c-d8badbc76aa3",
                        "type_attr": "PRIMARY"
                      }
                    ],
                    "parameters": {
                      "part_coll": "part_type"
                    },
                    "schema_ref": "07fed318-4370-4c95-bbbc-16d4a91421bb"
                  }
                ],
                "op": "informix",
                "parameters": {
                  "input_count": 1,
                  "output_count": 0
                },
                "type": "binding"
              }
            ],
            "runtime_ref": "pxOsh"
          }
        ],
        "primary_pipeline": "287b2b30-95ff-4cc8-b18f-92e23c464134",
        "schemas": [
          {
            "fields": [
              {
                "app_data": {
                  "column_reference": "customer_num",
                  "is_unicode_string": false,
                  "odbc_type": "INTEGER",
                  "table_def": "Saved\\\\Link_3\\\\ifx_customer",
                  "type_code": "INT32"
                },
                "metadata": {
                  "decimal_precision": 0,
                  "decimal_scale": 0,
                  "is_key": false,
                  "is_signed": true,
                  "item_index": 0,
                  "max_length": 0,
                  "min_length": 0
                },
                "name": "customer_num",
                "nullable": false,
                "type": "integer"
              }
            ],
            "id": "07fed318-4370-4c95-bbbc-16d4a91421bb"
          }
        ],
        "version": "3.0"
      },
      "entity": {
        "description": "",
        "name": "{job_name}",
        "rov": {
          "mode": 0
        }
      },
      "metadata": {
        "asset_id": "{asset_id}",
        "asset_type": "data_intg_flow",
        "create_time": "2021-04-08 17:14:08+00:00",
        "creator_id": "IBMid-xxxxxxxxxx",
        "description": "",
        "name": "{job_name}",
        "project_id": "{project_id}",
        "resource_key": "{project_id}/data_intg_flow/{job_name}",
        "size": 2712,
        "usage": {
          "access_count": 0,
          "last_access_time": "2021-04-08 17:14:10.193000+00:00",
          "last_accessor_id": "IBMid-xxxxxxxxxx",
          "last_modification_time": "2021-04-08 17:14:10.193000+00:00",
          "last_modifier_id": "IBMid-xxxxxxxxxx"
        }
      }
    }

Update DataStage flow

Modifies a data flow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.


PUT /v3/data_intg_flows/{data_intg_flow_id}
ServiceCall<DataIntgFlow> updateDatastageFlows(UpdateDatastageFlowsOptions updateDatastageFlowsOptions)
updateDatastageFlows(params)
update_datastage_flows(
        self,
        data_intg_flow_id: str,
        data_intg_flow_name: str,
        *,
        pipeline_flows: Optional['PipelineJson'] = None,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the UpdateDatastageFlowsOptions.Builder to create a UpdateDatastageFlowsOptions object that contains the parameter values for the updateDatastageFlows method.

Path Parameters

  • data_intg_flow_id (required): The DataStage flow ID to use.

Query Parameters

  • data_intg_flow_name (required): The data flow name.

  • catalog_id: The ID of the catalog to use. Either catalog_id, project_id, or space_id is required.

  • project_id: The ID of the project to use. Either catalog_id, project_id, or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • space_id: The ID of the space to use. Either catalog_id, project_id, or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • directory_asset_id: The directory asset ID.

The pipeline JSON to be attached (request body).
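
A common pattern is read-modify-write: fetch the current flow, adjust its pipeline, and send it back under the same name. A hedged Python sketch whose field accesses follow the example responses in this document:

# fetch the current flow, including its pipeline attachment
flow_json = datastage_service.get_datastage_flows(
    data_intg_flow_id='<flow_id>',
    project_id='<project_id>',
).get_result()

pipeline = flow_json['attachments']
# ... modify the pipeline here ...

# write the modified pipeline back under the existing name
data_intg_flow = datastage_service.update_datastage_flows(
    data_intg_flow_id='<flow_id>',
    data_intg_flow_name=flow_json['entity']['name'],
    pipeline_flows=pipeline,
    project_id='<project_id>',
).get_result()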


  • curl -X PUT --location --header "Authorization: Bearer {iam_token}"   --header "Accept: application/json;charset=utf-8"   --header "Content-Type: application/json;charset=utf-8"   --data '{}'   "{base_url}/v3/data_intg_flows/{data_intg_flow_id}?data_intg_flow_name={data_intg_flow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
  • PipelineJson exampleFlowUpdated = PipelineFlowHelper.buildPipelineFlow(updatedFlowJson);
    UpdateDatastageFlowsOptions updateDatastageFlowsOptions = new UpdateDatastageFlowsOptions.Builder()
      .dataIntgFlowId(flowID)
      .dataIntgFlowName(flowName)
      .pipelineFlows(exampleFlowUpdated)
      .projectId(projectID)
      .build();
    
    Response<DataIntgFlow> response = datastageService.updateDatastageFlows(updateDatastageFlowsOptions).execute();
    DataIntgFlow dataIntgFlow = response.getResult();
    
    System.out.println(dataIntgFlow);
  •      const params = {
           dataIntgFlowId: assetID,
           dataIntgFlowName,
           pipelineFlows: pipelineJsonFromFile,
           projectId: projectID,
           assetCategory: 'system',
         };
         const res = await datastageService.updateDatastageFlows(params);
  • data_intg_flow = datastage_service.update_datastage_flows(
      data_intg_flow_id=createdFlowId,
      data_intg_flow_name='testFlowJob1Updated',
      pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleFlowUpdated.json'),
      project_id=config['PROJECT_ID']
    ).get_result()
    
    print(json.dumps(data_intg_flow, indent=2))

Response

A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).


Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

Example responses
  • {
      "attachments": [
        {
          "asset_type": "data_intg_flow",
          "attachment_id": "9081dd6b-0ab7-47b5-8233-c10c6e64509d",
          "href": "{url}/v2/assets/{asset_id}/attachments/9081dd6b-0ab7-47b5-8233-c10c6e64509d?project_id={project_id}",
          "mime": "application/json",
          "name": "data_intg_flows",
          "object_key": "data_intg_flow/{project_id}{asset_id}",
          "object_key_is_read_only": false,
          "private_url": false
        }
      ],
      "entity": {
        "data_intg_flow": {
          "dataset": false,
          "mime_type": "application/json"
        }
      },
      "metadata": {
        "asset_id": "{asset_id}",
        "asset_type": "data_intg_flow",
        "catalog_id": "{catalog_id}",
        "create_time": "2021-04-08 17:14:08+00:00",
        "creator_id": "IBMid-xxxxxxxxxx",
        "description": "",
        "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?catalog_id={catalog_id}",
        "name": "{job_name}",
        "origin_country": "us",
        "project_id": "{project_id}",
        "resource_key": "{project_id}/data_intg_flow/{job_name}",
        "size": 2712,
        "tags": [],
        "usage": {
          "access_count": 0,
          "last_access_time": "2021-04-08 17:21:33.936000+00:00",
          "last_accessor_id": "IBMid-xxxxxxxxxx"
        }
      }
    }

Modify attributes of a DataStage flow

Modifies attributes of a DataStage flow in the specified project or catalog (either project_id or catalog_id must be set).


PUT /v3/data_intg_flows/{data_intg_flow_id}/attributes
ServiceCall<DataIntgFlow> patchAttributesDatastageFlow(PatchAttributesDatastageFlowOptions patchAttributesDatastageFlowOptions)
patchAttributesDatastageFlow(params)
patch_attributes_datastage_flow(
        self,
        data_intg_flow_id: str,
        *,
        description: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        name: Optional[str] = None,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the PatchAttributesDatastageFlowOptions.Builder to create a PatchAttributesDatastageFlowOptions object that contains the parameter values for the patchAttributesDatastageFlow method.

Path Parameters

  • The DataStage flow ID to use.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

Attributes of the flow to modify.

The patchAttributesDatastageFlow options.

parameters

  • The DataStage flow ID to use.

  • Description of the asset.

  • The directory asset ID of the asset.

  • Name of the asset.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

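A minimal Python sketch of this method based on the signature above; the flow ID and the new name and description values are illustrative placeholders.

  • data_intg_flow = datastage_service.patch_attributes_datastage_flow(
      data_intg_flow_id=createdFlowId,  # ID of an existing flow (placeholder)
      name='updatedFlowName',  # illustrative new name
      description='Updated flow description',  # illustrative new description
      project_id=config['PROJECT_ID']
    ).get_result()
    
    print(json.dumps(data_intg_flow, indent=2))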

Response

A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).


Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

Example responses
  • {
      "attachments": [
        {
          "asset_type": "data_intg_flow",
          "attachment_id": "9081dd6b-0ab7-47b5-8233-c10c6e64509d",
          "href": "{url}/v2/assets/{asset_id}/attachments/9081dd6b-0ab7-47b5-8233-c10c6e64509d?project_id={project_id}",
          "mime": "application/json",
          "name": "data_intg_flows",
          "object_key": "data_intg_flow/{project_id}{asset_id}",
          "object_key_is_read_only": false,
          "private_url": false
        }
      ],
      "entity": {
        "data_intg_flow": {
          "dataset": false,
          "mime_type": "application/json"
        }
      },
      "metadata": {
        "asset_id": "{asset_id}",
        "asset_type": "data_intg_flow",
        "catalog_id": "{catalog_id}",
        "create_time": "2021-04-08 17:14:08+00:00",
        "creator_id": "IBMid-xxxxxxxxxx",
        "description": "",
        "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}?catalog_id={catalog_id}",
        "name": "{job_name}",
        "origin_country": "us",
        "project_id": "{project_id}",
        "resource_key": "{project_id}/data_intg_flow/{job_name}",
        "size": 2712,
        "tags": [],
        "usage": {
          "access_count": 0,
          "last_access_time": "2021-04-08 17:21:33.936000+00:00",
          "last_accessor_id": "IBMid-xxxxxxxxxx"
        }
      }
    }

Clone DataStage flow

Create a DataStage flow in the specified project or catalog or space based on an existing DataStage flow in the same project or catalog or space.


POST /v3/data_intg_flows/{data_intg_flow_id}/clone
ServiceCall<DataIntgFlow> cloneDatastageFlows(CloneDatastageFlowsOptions cloneDatastageFlowsOptions)
cloneDatastageFlows(params)
clone_datastage_flows(
        self,
        data_intg_flow_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        data_intg_flow_name: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CloneDatastageFlowsOptions.Builder to create a CloneDatastageFlowsOptions object that contains the parameter values for the cloneDatastageFlows method.

Path Parameters

  • The DataStage flow ID to use.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The directory asset ID.

  • The data flow name.

The cloneDatastageFlows options.

parameters

  • The DataStage flow ID to use.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The directory asset ID.

  • The data flow name.


  • curl -X POST --location --header "Authorization: Bearer {iam_token}"   --header "Accept: application/json;charset=utf-8"   "{base_url}/v3/data_intg_flows/{data_intg_flow_id}/clone?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
  • CloneDatastageFlowsOptions cloneDatastageFlowsOptions = new CloneDatastageFlowsOptions.Builder()
      .dataIntgFlowId(flowID)
      .projectId(projectID)
      .build();
    
    Response<DataIntgFlow> response = datastageService.cloneDatastageFlows(cloneDatastageFlowsOptions).execute();
    DataIntgFlow dataIntgFlow = response.getResult();
    
    System.out.println(dataIntgFlow);
  •      const params = {
           dataIntgFlowId: assetID,
           projectId: projectID,
         };
         const res = await datastageService.cloneDatastageFlows(params);
  • data_intg_flow = datastage_service.clone_datastage_flows(
      data_intg_flow_id=createdFlowId,
      project_id=config['PROJECT_ID']
    ).get_result()
    
    print(json.dumps(data_intg_flow, indent=2))
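The examples above let the service name the clone; a minimal Python sketch that also supplies the optional data_intg_flow_name query parameter for the copy (the name value is illustrative):

  • data_intg_flow = datastage_service.clone_datastage_flows(
      data_intg_flow_id=createdFlowId,
      project_id=config['PROJECT_ID'],
      data_intg_flow_name='myFlow_copy'  # illustrative name for the cloned flow
    ).get_result()
    
    print(json.dumps(data_intg_flow, indent=2))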

Response

A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).


Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

Example responses
  • {
      "entity": {
        "data_intg_flow": {
          "dataset": false,
          "mime_type": "application/json"
        }
      },
      "metadata": {
        "asset_id": "{asset_id}",
        "asset_type": "data_intg_flow",
        "href": "{url}/data_intg/v3/data_intg_flows/{asset_id}",
        "name": "{job_name_copy}",
        "origin_country": "US",
        "resource_key": "{project_id}/data_intg_flow/{job_name_copy}"
      }
    }

Compile DataStage flow to generate runtime assets

Generate the runtime assets for a DataStage flow in the specified project or catalog for a specified runtime type. Either project_id or catalog_id must be specified.


POST /v3/ds_codegen/compile/{data_intg_flow_id}
ServiceCall<FlowCompileResponse> compileDatastageFlows(CompileDatastageFlowsOptions compileDatastageFlowsOptions)
compileDatastageFlows(params)
compile_datastage_flows(
        self,
        data_intg_flow_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        runtime_type: Optional[str] = None,
        enable_sql_pushdown: Optional[bool] = None,
        enable_async_compile: Optional[bool] = None,
        enable_pushdown_source: Optional[bool] = None,
        enable_push_processing_to_source: Optional[bool] = None,
        enable_push_join_to_source: Optional[bool] = None,
        enable_pushdown_target: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CompileDatastageFlowsOptions.Builder to create a CompileDatastageFlowsOptions object that contains the parameter values for the compileDatastageFlows method.

Path Parameters

  • The DataStage flow ID to use.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The type of the runtime to use, for example dspxosh or Spark. If not provided, the runtime type is queried from within the pipeline flow if available; otherwise the default of dspxosh is used.

  • Whether to enable SQL pushdown code generation. When this flag is set to true and enable_pushdown_source is not specified, enable_pushdown_source will be set to true. When this flag is set to true and enable_pushdown_target is not specified, enable_pushdown_target will be set to true.

  • Whether to compile the flow asynchronously. When set to true, the compile request is queued and then compiled, and the response is returned immediately as "Compiling". For the compile status, call the get-compile-status API.

  • Whether to enable pushing SQL to source connectors. Setting this flag to true will automatically set enable_sql_pushdown to true if the latter is not specified or is explicitly set to false. When this flag is set to true and enable_push_processing_to_source is not specified, enable_push_processing_to_source will be automatically set to true as well. When this flag is set to true and enable_push_join_to_source is not specified, enable_push_join_to_source will be automatically set to true as well.

  • Whether to enable pushing processing stages to source connectors. Setting this flag to true will automatically set enable_pushdown_source to true if the latter is not specified or is explicitly set to false.

  • Whether to enable pushing join/lookup stages to source connectors. Setting this flag to true will automatically set enable_pushdown_source to true if the latter is not specified or is explicitly set to false.

  • Whether to enable pushing SQL to target connectors. Setting this flag to true will automatically set enable_sql_pushdown to true if the latter is not specified or is explicitly set to false.

The compileDatastageFlows options.

parameters

  • The DataStage flow ID to use.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The type of the runtime to use, for example dspxosh or Spark. If not provided, the runtime type is queried from within the pipeline flow if available; otherwise the default of dspxosh is used.

  • Whether to enable SQL pushdown code generation. When this flag is set to true and enable_pushdown_source is not specified, enable_pushdown_source will be set to true. When this flag is set to true and enable_pushdown_target is not specified, enable_pushdown_target will be set to true.

  • Whether to compile the flow asynchronously. When set to true, the compile request is queued and then compiled, and the response is returned immediately as "Compiling". For the compile status, call the get-compile-status API.

  • Whether to enable pushing SQL to source connectors. Setting this flag to true will automatically set enable_sql_pushdown to true if the latter is not specified or is explicitly set to false. When this flag is set to true and enable_push_processing_to_source is not specified, enable_push_processing_to_source will be automatically set to true as well. When this flag is set to true and enable_push_join_to_source is not specified, enable_push_join_to_source will be automatically set to true as well.

  • Whether to enable pushing processing stages to source connectors. Setting this flag to true will automatically set enable_pushdown_source to true if the latter is not specified or is explicitly set to false.

  • Whether to enable pushing join/lookup stages to source connectors. Setting this flag to true will automatically set enable_pushdown_source to true if the latter is not specified or is explicitly set to false.

  • Whether to enable pushing SQL to target connectors. Setting this flag to true will automatically set enable_sql_pushdown to true if the latter is not specified or is explicitly set to false.


  • curl -X POST --location --header "Authorization: Bearer {iam_token}"   --header "Accept: application/json;charset=utf-8"   "{base_url}/v3/ds_codegen/compile/{data_intg_flow_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
  • CompileDatastageFlowsOptions compileDatastageFlowsOptions = new CompileDatastageFlowsOptions.Builder()
      .dataIntgFlowId(flowID)
      .projectId(projectID)
      .build();
    
    Response<FlowCompileResponse> response = datastageService.compileDatastageFlows(compileDatastageFlowsOptions).execute();
    FlowCompileResponse flowCompileResponse = response.getResult();
    
    System.out.println(flowCompileResponse);
  •      const params = {
           dataIntgFlowId: assetID,
           projectId: projectID,
         };
         const res = await datastageService.compileDatastageFlows(params);
  • flow_compile_response = datastage_service.compile_datastage_flows(
      data_intg_flow_id=createdFlowId,
      project_id=config['PROJECT_ID']
    ).get_result()
    
    print(json.dumps(flow_compile_response, indent=2))
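A minimal Python sketch combining the documented compile options on one call; the flag values are illustrative:

  • flow_compile_response = datastage_service.compile_datastage_flows(
      data_intg_flow_id=createdFlowId,
      project_id=config['PROJECT_ID'],
      runtime_type='dspxosh',  # the default runtime type
      enable_sql_pushdown=True,  # also implies enable_pushdown_source/target when unset
      enable_async_compile=True  # response returns immediately as "Compiling"
    ).get_result()
    
    print(json.dumps(flow_compile_response, indent=2))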

Response

Describes the compile response model.


Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The request object contains invalid information. The server is not able to process the request.

  • Unexpected error.


Delete DataStage subflows

Deletes the specified data subflows in a project or catalog (either project_id or catalog_id must be set).

If the deletion of the data subflows takes some time to finish, a 202 response is returned and the deletion continues asynchronously.


DELETE /v3/data_intg_flows/subflows
ServiceCall<Void> deleteDatastageSubflows(DeleteDatastageSubflowsOptions deleteDatastageSubflowsOptions)
deleteDatastageSubflows(params)
delete_datastage_subflows(
        self,
        id: List[str],
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the DeleteDatastageSubflowsOptions.Builder to create a DeleteDatastageSubflowsOptions object that contains the parameter values for the deleteDatastageSubflows method.

Query Parameters

  • The list of DataStage subflow IDs to delete.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

The deleteDatastageSubflows options.

parameters

  • The list of DataStage subflow IDs to delete.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • curl -X DELETE --location --header "Authorization: Bearer {iam_token}"   "{base_url}/v3/data_intg_flows/subflows?id=[]&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
  • String[] ids = new String[] {subflowID, cloneSubflowID};
    DeleteDatastageSubflowsOptions deleteDatastageSubflowsOptions = new DeleteDatastageSubflowsOptions.Builder()
      .id(Arrays.asList(ids))
      .projectId(projectID)
      .build();
    
    datastageService.deleteDatastageSubflows(deleteDatastageSubflowsOptions).execute();
  •      const params = {
           id: [assetID, cloneID],
           projectId: projectID,
         };
         const res = await datastageService.deleteDatastageSubflows(params);
  • response = datastage_service.delete_datastage_subflows(
      id=[createdSubflowId],
      project_id=config['PROJECT_ID']
    )
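A 202 status indicates that the deletion continues asynchronously; a minimal Python sketch that distinguishes the two outcomes by inspecting the SDK's DetailedResponse from the call above:

  • # response comes from the delete_datastage_subflows call above
    if response.get_status_code() == 202:
        print('Deletion accepted; continuing asynchronously')
    else:
        print('Deletion completed')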

Response

Status Code

  • The requested operation is in progress.

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Get metadata for DataStage subflows

Lists the metadata and entity for DataStage subflows that are contained in the specified project.

Use the following parameters to filter the results:

| Field              | Match type  | Example                        |
| ------------------ | ----------- | ------------------------------ |
| entity.name        | Equals      | entity.name=MyDataStageSubFlow |
| entity.name        | Starts with | entity.name=starts:MyData      |
| entity.description | Equals      | entity.description=movement    |
| entity.description | Starts with | entity.description=starts:data |

To sort the results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time (i.e. returning the most recently created data flows first).

| Field | Example                                                       |
| ----- | ------------------------------------------------------------- |
| sort  | sort=+entity.name (sort by ascending name)                     |
| sort  | sort=-metadata.create_time (sort by descending creation time)  |

Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name use: sort=-metadata.create_time,+entity.name.


GET /v3/data_intg_flows/subflows
ServiceCall<DataFlowPagedCollection> listDatastageSubflows(ListDatastageSubflowsOptions listDatastageSubflowsOptions)
listDatastageSubflows(params)
list_datastage_subflows(
        self,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        sort: Optional[str] = None,
        start: Optional[str] = None,
        limit: Optional[int] = None,
        entity_name: Optional[str] = None,
        entity_description: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the ListDatastageSubflowsOptions.Builder to create a ListDatastageSubflowsOptions object that contains the parameter values for the listDatastageSubflows method.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.

  • The page token indicating where to start paging from.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e2

  • The limit of the number of items to return for each page, for example limit=50. If not specified, a default of 100 is used. The maximum value of limit is 200.

    Example: 100

  • Filter results based on the specified name.

  • Filter results based on the specified description.

The listDatastageSubflows options.

parameters

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.

  • The page token indicating where to start paging from.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e2

  • The limit of the number of items to return for each page, for example limit=50. If not specified, a default of 100 is used. The maximum value of limit is 200.

    Example: 100

  • Filter results based on the specified name.

  • Filter results based on the specified description.


  • curl -X GET --location --header "Authorization: Bearer {iam_token}"   --header "Accept: application/json;charset=utf-8"   "{base_url}/v3/data_intg_flows/subflows?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23&limit=100"
  • ListDatastageSubflowsOptions listDatastageSubflowsOptions = new ListDatastageSubflowsOptions.Builder()
      .projectId(projectID)
      .limit(Long.valueOf("100"))
      .build();
    
    Response<DataFlowPagedCollection> response = datastageService.listDatastageSubflows(listDatastageSubflowsOptions).execute();
    DataFlowPagedCollection dataFlowPagedCollection = response.getResult();
    
    System.out.println(dataFlowPagedCollection);
  •      const params = {
           projectId: projectID,
           sort: 'name',
           limit: 100,
         };
         const res = await datastageService.listDatastageSubflows(params);
  • data_flow_paged_collection = datastage_service.list_datastage_subflows(
      project_id=config['PROJECT_ID'],
      limit=100
    ).get_result()
    
    print(json.dumps(data_flow_paged_collection, indent=2))
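When more results remain, the response includes a next.href carrying the start token for the following page, as shown in the example responses; a minimal Python sketch of paging on that assumption (the token parsing is illustrative):

  • from urllib.parse import urlparse, parse_qs
    
    start = None
    while True:
        page = datastage_service.list_datastage_subflows(
            project_id=config['PROJECT_ID'],
            limit=100,
            start=start
        ).get_result()
        for flow in page.get('data_flows', []):
            print(flow['metadata']['name'])
        next_href = page.get('next', {}).get('href')
        if not next_href:  # no further pages
            break
        start = parse_qs(urlparse(next_href).query)['start'][0]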

Response

A page from a collection of DataStage flows.


Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

Example responses
  • {
      "data_flows": [
        {
          "entity": {
            "data_intg_subflow": {
              "dataset": false,
              "mime_type": "application/json"
            }
          },
          "metadata": {
            "asset_id": "{asset_id}",
            "asset_type": "data_intg_subflow",
            "create_time": "2021-04-03 15:32:55+00:00",
            "creator_id": "IBMid-xxxxxxxxx",
            "description": " ",
            "href": "{url}/data_intg/v3/data_intg_flows/subflows/{asset_id}?project_id={project_id}",
            "name": "{job_name}",
            "project_id": "{project_id}",
            "resource_key": "{project_id}/data_intg_subflow/{job_name}",
            "size": 5780,
            "usage": {
              "access_count": 0,
              "last_access_time": "2021-04-03 15:33:01.320000+00:00",
              "last_accessor_id": "IBMid-xxxxxxxxx",
              "last_modification_time": "2021-04-03 15:33:01.320000+00:00",
              "last_modifier_id": "IBMid-xxxxxxxxx"
            }
          }
        }
      ],
      "first": {
        "href": "{url}/data_intg/v3/data_intg_flows/subflows?project_id={project_id}&limit=2"
      },
      "next": {
        "href": "{url}/data_intg/v3/data_intg_flows/subflows?project_id={project_id}&limit=2&start=g1AAAADOeJzLYWBgYMpgTmHQSklKzi9KdUhJMjTUS8rVTU7WLS3WLc4vLcnQNbLQS87JL01JzCvRy0styQHpyWMBkgwNQOr____9WWCxXCAhYmRgZKhrYKJrYBxiaGplbGRlahqVaJCFZocB8XYcgNhxHrcdhlamhlGJ-llZAD4lOMI"
      },
      "total_count": 1
    }

Create DataStage subflow

Creates a DataStage subflow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls to use the data flow must specify the project or catalog ID the data flow was created in.


POST /v3/data_intg_flows/subflows
ServiceCall<DataIntgFlow> createDatastageSubflows(CreateDatastageSubflowsOptions createDatastageSubflowsOptions)
createDatastageSubflows(params)
create_datastage_subflows(
        self,
        data_intg_subflow_name: str,
        *,
        entity: Optional['SubFlowEntityJson'] = None,
        pipeline_flows: Optional['PipelineJson'] = None,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CreateDatastageSubflowsOptions.Builder to create a CreateDatastageSubflowsOptions object that contains the parameter values for the createDatastageSubflows method.

Query Parameters

  • The DataStage subflow name.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The directory asset ID.

Pipeline JSON to be attached.

The createDatastageSubflows options.

parameters

  • The DataStage subflow name.

  • Pipeline flow to be stored.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The directory asset ID.


  • curl -X POST --location --header "Authorization: Bearer {iam_token}"   --header "Accept: application/json;charset=utf-8"   --header "Content-Type: application/json;charset=utf-8"   --data '{}'   "{base_url}/v3/data_intg_flows/subflows?data_intg_subflow_name={data_intg_subflow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
  • PipelineJson exampleSubFlow = PipelineFlowHelper.buildPipelineFlow(subFlowJson);
    CreateDatastageSubflowsOptions createDatastageSubflowsOptions = new CreateDatastageSubflowsOptions.Builder()
      .dataIntgSubflowName(subflowName)
      .pipelineFlows(exampleSubFlow)
      .projectId(projectID)
      .build();
    
    Response<DataIntgFlow> response = datastageService.createDatastageSubflows(createDatastageSubflowsOptions).execute();
    DataIntgFlow dataIntgFlow = response.getResult();
    
    System.out.println(dataIntgFlow);
  •      const params = {
           dataIntgSubflowName: dataIntgSubFlowName,
           pipelineFlows: pipelineJsonFromFile,
           projectId: projectID,
           assetCategory: 'system',
         };
         const res = await datastageService.createDatastageSubflows(params);
  • data_intg_flow = datastage_service.create_datastage_subflows(
      data_intg_subflow_name='testSubflow1',
      pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleSubflow.json'),
      project_id=config['PROJECT_ID']
    ).get_result()
    
    print(json.dumps(data_intg_flow, indent=2))

Response

A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).


Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Get DataStage subflow

Lists the DataStage subflow that is contained in the specified project. Attachments, metadata, and a limited number of attributes from the entity of each DataStage flow are returned.


GET /v3/data_intg_flows/subflows/{data_intg_subflow_id}
ServiceCall<DataIntgFlowJson> getDatastageSubflows(GetDatastageSubflowsOptions getDatastageSubflowsOptions)
getDatastageSubflows(params)
get_datastage_subflows(
        self,
        data_intg_subflow_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GetDatastageSubflowsOptions.Builder to create a GetDatastageSubflowsOptions object that contains the parameter values for the getDatastageSubflows method.

Path Parameters

  • The DataStage subflow ID to use.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

The getDatastageSubflows options.

parameters

  • The DataStage subflow ID to use.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • curl -X GET --location --header "Authorization: Bearer {iam_token}"   --header "Accept: application/json;charset=utf-8"   "{base_url}/v3/data_intg_flows/subflows/{data_intg_subflow_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
  • GetDatastageSubflowsOptions getDatastageSubflowsOptions = new GetDatastageSubflowsOptions.Builder()
      .dataIntgSubflowId(subflowID)
      .projectId(projectID)
      .build();
    
    Response<DataIntgFlowJson> response = datastageService.getDatastageSubflows(getDatastageSubflowsOptions).execute();
    DataIntgFlowJson dataIntgFlowJson = response.getResult();
    
    System.out.println(dataIntgFlowJson);
  •      const params = {
           dataIntgSubflowId: subflow_assetID,
           projectId: projectID,
         };
         const res = await datastageService.getDatastageSubflows(params);
  • data_intg_flow_json = datastage_service.get_datastage_subflows(
      data_intg_subflow_id=createdSubflowId,
      project_id=config['PROJECT_ID']
    ).get_result()
    
    print(json.dumps(data_intg_flow_json, indent=2))

Response

A pipeline JSON containing operations to apply to source(s).


Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • Unexpected error.

Example responses
  • {
      "attachments": {
        "app_data": {
          "datastage": {
            "version": "3.0.2"
          }
        },
        "doc_type": "pipeline",
        "id": "913abf38-fac2-4c56-815b-f6f21e140fa3",
        "json_schema": "https://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json",
        "pipelines": [
          {
            "app_data": {
              "datastage": {
                "runtimecolumnpropagation": "true"
              },
              "ui_data": {
                "comments": []
              }
            },
            "id": "abd53940-0ab2-4559-978e-864800ee875a",
            "nodes": [
              {
                "app_data": {
                  "datastage": {
                    "outputs_order": "5e514391-fc64-4ad9-b7ef-d164783d1484"
                  },
                  "ui_data": {
                    "image": "",
                    "label": "Entry node 1",
                    "x_pos": 48,
                    "y_pos": 48
                  }
                },
                "id": "602a1843-4cb2-4a28-93f3-f6d08e9910b6",
                "outputs": [
                  {
                    "app_data": {
                      "datastage": {
                        "is_source_of_link": "aaac7610-cf58-4b7c-9431-643afe952621"
                      },
                      "ui_data": {
                        "label": "outPort"
                      }
                    },
                    "id": "5e514391-fc64-4ad9-b7ef-d164783d1484",
                    "schema_ref": "a479344e-7835-42b8-a5f5-7d88bc490dfe"
                  }
                ],
                "type": "binding"
              },
              {
                "app_data": {
                  "datastage": {
                    "inputs_order": "fb38f373-7b2c-4e70-8629-c4e5e05a7cff",
                    "outputs_order": "c539d891-84a8-481e-82fa-a6c90e588e1d"
                  },
                  "ui_data": {
                    "expanded_height": 200,
                    "expanded_width": 300,
                    "image": "../graphics/palette/Standardize.svg",
                    "is_expanded": false,
                    "label": "ContainerC3",
                    "x_pos": 192,
                    "y_pos": 48
                  }
                },
                "id": "a2fb41ad-5088-4849-a3cc-453a6416492c",
                "inputs": [
                  {
                    "app_data": {
                      "datastage": {},
                      "ui_data": {
                        "label": "inPort"
                      }
                    },
                    "id": "fb38f373-7b2c-4e70-8629-c4e5e05a7cff",
                    "links": [
                      {
                        "app_data": {
                          "datastage": {},
                          "ui_data": {
                            "decorations": [
                              {
                                "class_name": "",
                                "hotspot": false,
                                "id": "DSLink1E",
                                "label": "DSLink1E",
                                "outline": true,
                                "path": "",
                                "position": "middle"
                              }
                            ]
                          }
                        },
                        "id": "aaac7610-cf58-4b7c-9431-643afe952621",
                        "link_name": "DSLink1E",
                        "node_id_ref": "602a1843-4cb2-4a28-93f3-f6d08e9910b6",
                        "port_id_ref": "5e514391-fc64-4ad9-b7ef-d164783d1484",
                        "type_attr": "PRIMARY"
                      }
                    ],
                    "parameters": {
                      "runtime_column_propagation": 0
                    },
                    "schema_ref": "a479344e-7835-42b8-a5f5-7d88bc490dfe"
                  }
                ],
                "outputs": [
                  {
                    "app_data": {
                      "datastage": {
                        "is_source_of_link": "78c4cdcd-2f6b-474a-805f-f15d00b7cac2"
                      },
                      "ui_data": {
                        "label": "outPort"
                      }
                    },
                    "id": "c539d891-84a8-481e-82fa-a6c90e588e1d",
                    "schema_ref": "d4ba6846-debd-47c5-90ec-dda663728a36"
                  }
                ],
                "parameters": {
                  "input_count": 1,
                  "output_count": 1
                },
                "subflow_ref": {
                  "pipeline_id_ref": "default_pipeline_id",
                  "url": "app_defined"
                },
                "type": "super_node"
              },
              {
                "app_data": {
                  "datastage": {
                    "inputs_order": "2a52a02d-113c-4d9b-8f36-609414be8bf5"
                  },
                  "ui_data": {
                    "image": "",
                    "label": "Exit node 1",
                    "x_pos": 384,
                    "y_pos": 48
                  }
                },
                "id": "547dcda4-a052-432d-ae4b-06df14e8e5b3",
                "inputs": [
                  {
                    "app_data": {
                      "datastage": {},
                      "ui_data": {
                        "label": "inPort"
                      }
                    },
                    "id": "2a52a02d-113c-4d9b-8f36-609414be8bf5",
                    "links": [
                      {
                        "app_data": {
                          "datastage": {},
                          "ui_data": {
                            "decorations": [
                              {
                                "class_name": "",
                                "hotspot": false,
                                "id": "DSLink2E",
                                "label": "DSLink2E",
                                "outline": true,
                                "path": "",
                                "position": "middle"
                              }
                            ]
                          }
                        },
                        "id": "78c4cdcd-2f6b-474a-805f-f15d00b7cac2",
                        "link_name": "DSLink2E",
                        "node_id_ref": "a2fb41ad-5088-4849-a3cc-453a6416492c",
                        "port_id_ref": "c539d891-84a8-481e-82fa-a6c90e588e1d",
                        "type_attr": "PRIMARY"
                      }
                    ],
                    "schema_ref": "d4ba6846-debd-47c5-90ec-dda663728a36"
                  }
                ],
                "type": "binding"
              }
            ],
            "runtime_ref": "pxOsh"
          }
        ],
        "primary_pipeline": "abd53940-0ab2-4559-978e-864800ee875a",
        "runtimes": [
          {
            "id": "pxOsh",
            "name": "pxOsh"
          }
        ],
        "schemas": [
          {
            "fields": [
              {
                "app_data": {
                  "column_reference": "col1",
                  "is_unicode_string": false,
                  "odbc_type": "INTEGER",
                  "table_def": "Basic3\\\\Basic3\\\\Basic3",
                  "type_code": "INT32"
                },
                "metadata": {
                  "decimal_precision": 0,
                  "decimal_scale": 0,
                  "is_key": true,
                  "is_signed": true,
                  "item_index": 0,
                  "max_length": 0,
                  "min_length": 0
                },
                "name": "col1",
                "nullable": false,
                "type": "integer"
              },
              {
                "app_data": {
                  "column_reference": "col2",
                  "is_unicode_string": false,
                  "odbc_type": "CHAR",
                  "table_def": "Basic3\\\\Basic3\\\\Basic3",
                  "type_code": "STRING"
                },
                "metadata": {
                  "decimal_precision": 0,
                  "decimal_scale": 0,
                  "is_key": false,
                  "is_signed": false,
                  "item_index": 0,
                  "max_length": 5,
                  "min_length": 5
                },
                "name": "col2",
                "nullable": false,
                "type": "string"
              },
              {
                "app_data": {
                  "column_reference": "col3",
                  "is_unicode_string": false,
                  "odbc_type": "VARCHAR",
                  "table_def": "Basic3\\\\Basic3\\\\Basic3",
                  "type_code": "STRING"
                },
                "metadata": {
                  "decimal_precision": 0,
                  "decimal_scale": 0,
                  "is_key": false,
                  "is_signed": false,
                  "item_index": 0,
                  "max_length": 10,
                  "min_length": 0
                },
                "name": "col3",
                "nullable": false,
                "type": "string"
              }
            ],
            "id": "a479344e-7835-42b8-a5f5-7d88bc490dfe"
          },
          {
            "fields": [
              {
                "app_data": {
                  "column_reference": "col1",
                  "is_unicode_string": false,
                  "odbc_type": "INTEGER",
                  "table_def": "Basic3\\\\Basic3\\\\Basic3",
                  "type_code": "INT32"
                },
                "metadata": {
                  "decimal_precision": 0,
                  "decimal_scale": 0,
                  "is_key": true,
                  "is_signed": true,
                  "item_index": 0,
                  "max_length": 0,
                  "min_length": 0
                },
                "name": "col1",
                "nullable": false,
                "type": "integer"
              },
              {
                "app_data": {
                  "column_reference": "col2",
                  "is_unicode_string": false,
                  "odbc_type": "CHAR",
                  "table_def": "Basic3\\\\Basic3\\\\Basic3",
                  "type_code": "STRING"
                },
                "metadata": {
                  "decimal_precision": 0,
                  "decimal_scale": 0,
                  "is_key": false,
                  "is_signed": false,
                  "item_index": 0,
                  "max_length": 5,
                  "min_length": 5
                },
                "name": "col2",
                "nullable": false,
                "type": "string"
              },
              {
                "app_data": {
                  "column_reference": "col3",
                  "is_unicode_string": false,
                  "odbc_type": "VARCHAR",
                  "table_def": "Basic3\\\\Basic3\\\\Basic3",
                  "type_code": "STRING"
                },
                "metadata": {
                  "decimal_precision": 0,
                  "decimal_scale": 0,
                  "is_key": false,
                  "is_signed": false,
                  "item_index": 0,
                  "max_length": 10,
                  "min_length": 0
                },
                "name": "col3",
                "nullable": false,
                "type": "string"
              }
            ],
            "id": "d4ba6846-debd-47c5-90ec-dda663728a36"
          }
        ],
        "version": "3.0"
      },
      "entity": {
        "data_intg_subflow": {
          "dataset": false,
          "mime_type": "application/json"
        }
      },
      "metadata": {
        "asset_id": "7ad1e03c-5380-4bfa-8317-3604b95954c1",
        "asset_type": "data_intg_subflow",
        "catalog_id": "e35806c5-5314-4677-bb8a-416d3c628d41",
        "create_time": "2021-05-10 19:11:04+00:00",
        "creator_id": "IBMid-xxxxxxxxx",
        "description": "",
        "name": "NSC2_Subflow",
        "origin_country": "us",
        "project_id": "{project_id}",
        "resource_key": "baa8b445-9bea-4c7b-9930-233f57f8c629/data_intg_subflow/NSC2_Subflow",
        "size": 5117,
        "tags": [],
        "usage": {
          "access_count": 0,
          "last_access_time": "2021-05-10 19:11:05.474000+00:00",
          "last_accessor_id": "IBMid-xxxxxxxxx"
        }
      }
    }

Update DataStage subflow

Modifies a data subflow in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls that use the subflow must specify the project or catalog ID in which the subflow was created.

PUT /v3/data_intg_flows/subflows/{data_intg_subflow_id}
ServiceCall<DataIntgFlow> updateDatastageSubflows(UpdateDatastageSubflowsOptions updateDatastageSubflowsOptions)
updateDatastageSubflows(params)
update_datastage_subflows(
        self,
        data_intg_subflow_id: str,
        data_intg_subflow_name: str,
        *,
        entity: Optional['SubFlowEntityJson'] = None,
        pipeline_flows: Optional['PipelineJson'] = None,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the UpdateDatastageSubflowsOptions.Builder to create a UpdateDatastageSubflowsOptions object that contains the parameter values for the updateDatastageSubflows method.

Path Parameters

  • The DataStage subflow ID to use.

Query Parameters

  • The DataStage subflow name.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The directory asset ID.

Pipeline JSON to be attached.

The updateDatastageSubflows options.

parameters

  • The DataStage subflow ID to use.

  • The DataStage subflow name.

  • Pipeline flow to be stored.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The directory asset ID.

  • curl -X PUT --location \
      --header "Authorization: Bearer {iam_token}" \
      --header "Accept: application/json;charset=utf-8" \
      --header "Content-Type: application/json;charset=utf-8" \
      --data '{}' \
      "{base_url}/v3/data_intg_flows/subflows/{data_intg_subflow_id}?data_intg_subflow_name={data_intg_subflow_name}&project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
  • PipelineJson exampleSubFlowUpdated = PipelineFlowHelper.buildPipelineFlow(updatedSubFlowJson);
    UpdateDatastageSubflowsOptions updateDatastageSubflowsOptions = new UpdateDatastageSubflowsOptions.Builder()
      .dataIntgSubflowId(subflowID)
      .dataIntgSubflowName(subflowName)
      .pipelineFlows(exampleSubFlowUpdated)
      .projectId(projectID)
      .build();
    
    Response<DataIntgFlow> response = datastageService.updateDatastageSubflows(updateDatastageSubflowsOptions).execute();
    DataIntgFlow dataIntgFlow = response.getResult();
    
    System.out.println(dataIntgFlow);
  •      const params = {
           dataIntgSubflowId: subflow_assetID,
           dataIntgSubflowName: dataIntgSubFlowName,
           pipelineFlows: pipelineJsonFromFile,
           projectId: projectID,
           assetCategory: 'system',
         };
         const res = await datastageService.updateDatastageSubflows(params);
  • data_intg_flow = datastage_service.update_datastage_subflows(
      data_intg_subflow_id=createdSubflowId,
      data_intg_subflow_name='testSubflow1Updated',
      pipeline_flows=UtilHelper.readJsonFileToDict('inputFiles/exampleSubflowUpdated.json'),
      project_id=config['PROJECT_ID']
    ).get_result()
    
    print(json.dumps(data_intg_flow, indent=2))

Response

A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Modify DataStage subflow attributes

Modifies attributes of a data subflow in the specified project or catalog (either project_id or catalog_id must be set).

PUT /v3/data_intg_flows/subflows/{data_intg_subflow_id}/attributes
ServiceCall<DataIntgFlow> patchAttributesDatastageSubflow(PatchAttributesDatastageSubflowOptions patchAttributesDatastageSubflowOptions)
patchAttributesDatastageSubflow(params)
patch_attributes_datastage_subflow(
        self,
        data_intg_subflow_id: str,
        *,
        description: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        name: Optional[str] = None,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the PatchAttributesDatastageSubflowOptions.Builder to create a PatchAttributesDatastageSubflowOptions object that contains the parameter values for the patchAttributesDatastageSubflow method.

Path Parameters

  • The DataStage subflow ID to use.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

attributes of subflows to modify.

The patchAttributesDatastageSubflow options.

parameters

  • The DataStage subflow ID to use.

  • Description of the asset.

  • The directory asset ID of the asset.

  • Name of the asset.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f
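
No request example is provided for this method. The following is a minimal Python sketch based on the patch_attributes_datastage_subflow signature above; datastage_service and config follow the earlier Python examples, and the subflow ID and attribute values are illustrative.

    data_intg_flow = datastage_service.patch_attributes_datastage_subflow(
        data_intg_subflow_id=createdSubflowId,       # existing subflow ID (illustrative)
        name='testSubflow1Renamed',                  # new asset name (illustrative)
        description='Renamed via the attributes API',
        project_id=config['PROJECT_ID']
    ).get_result()

    print(json.dumps(data_intg_flow, indent=2))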

Response

A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Clone DataStage subflow

Create a DataStage subflow in the specified project or catalog based on an existing DataStage subflow in the same project or catalog.

POST /v3/data_intg_flows/subflows/{data_intg_subflow_id}/clone
ServiceCall<DataIntgFlow> cloneDatastageSubflows(CloneDatastageSubflowsOptions cloneDatastageSubflowsOptions)
cloneDatastageSubflows(params)
clone_datastage_subflows(
        self,
        data_intg_subflow_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        data_intg_subflow_name: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CloneDatastageSubflowsOptions.Builder to create a CloneDatastageSubflowsOptions object that contains the parameter values for the cloneDatastageSubflows method.

Path Parameters

  • The DataStage subflow ID to use.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The directory asset ID.

  • The data subflow name.

The cloneDatastageSubflows options.

parameters

  • The DataStage subflow ID to use.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The directory asset ID.

  • The data subflow name.

  • curl -X POST --location \
      --header "Authorization: Bearer {iam_token}" \
      --header "Accept: application/json;charset=utf-8" \
      "{base_url}/v3/data_intg_flows/subflows/{data_intg_subflow_id}/clone?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
  • CloneDatastageSubflowsOptions cloneDatastageSubflowsOptions = new CloneDatastageSubflowsOptions.Builder()
      .dataIntgSubflowId(subflowID)
      .projectId(projectID)
      .build();
    
    Response<DataIntgFlow> response = datastageService.cloneDatastageSubflows(cloneDatastageSubflowsOptions).execute();
    DataIntgFlow dataIntgFlow = response.getResult();
    
    System.out.println(dataIntgFlow);
  •      const params = {
           dataIntgSubflowId: subflow_assetID,
           projectId: projectID,
         };
         const res = await datastageService.cloneDatastageSubflows(params);
  • data_intg_flow = datastage_service.clone_datastage_subflows(
      data_intg_subflow_id=createdSubflowId,
      project_id=config['PROJECT_ID']
    ).get_result()
    
    print(json.dumps(data_intg_flow, indent=2))

Response

A DataStage flow model that defines physical source(s), physical target(s) and an optional pipeline containing operations to apply to source(s).

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Delete table definitions

Delete the specified table definitions from a project or catalog (either project_id or catalog_id must be set).

DELETE /v3/table_definitions
ServiceCall<Void> deleteTableDefinitions(DeleteTableDefinitionsOptions deleteTableDefinitionsOptions)
deleteTableDefinitions(params)
delete_table_definitions(
        self,
        id: List[str],
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the DeleteTableDefinitionsOptions.Builder to create a DeleteTableDefinitionsOptions object that contains the parameter values for the deleteTableDefinitions method.

Query Parameters

  • The list of table definition IDs to delete.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

The deleteTableDefinitions options.

parameters

  • The list of table definition IDs to delete.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.
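
No request example is provided for this method. A minimal Python sketch based on the delete_table_definitions signature above; the table definition IDs are illustrative, and datastage_service and config follow the earlier Python examples.

    response = datastage_service.delete_table_definitions(
        id=[tableDefinitionId1, tableDefinitionId2],  # IDs of the table definitions to delete (illustrative)
        project_id=config['PROJECT_ID']
    )

    print(response.get_status_code())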

Response

Status Code

  • The requested operation completed successfully.

  • The requested operation is in progress.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

List table definitions

Lists the table definitions that are contained in the specified project or catalog (either project_id or catalog_id must be set).

Use the following parameters to filter the results:

| Field             | Match type  | Example                             |
| ----------------- | ----------- | ----------------------------------- |
| asset.name        | Starts with | ?asset.name=starts:MyTable          |
| asset.description | Equals      | ?asset.description=starts:profiling |

To sort the returned results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time, returning the most recently created table definitions first.

| Field                | Example                     |
| -------------------- | --------------------------- |
| asset.name           | ?sort=+asset.name           |
| metadata.create_time | ?sort=-metadata.create_time |

Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name, use: ?sort=-metadata.create_time,+asset.name

GET /v3/table_definitions
ServiceCall<TableDefinitionPagedCollection> getTableDefinitions(GetTableDefinitionsOptions getTableDefinitionsOptions)
getTableDefinitions(params)
get_table_definitions(
        self,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        sort: Optional[str] = None,
        start: Optional[str] = None,
        limit: Optional[int] = None,
        asset_name: Optional[str] = None,
        asset_description: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GetTableDefinitionsOptions.Builder to create a GetTableDefinitionsOptions object that contains the parameter values for the getTableDefinitions method.

Query Parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.

  • The page token indicating where to start paging from.

  • The limit of the number of items to return, for example, limit=50. If not specified, a default of 100 is used.

    Possible values: value ≥ 1

    Example: 100

  • Filter results based on the specified name.

  • Filter results based on the specified description.

The getTableDefinitions options.

parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.

  • The page token indicating where to start paging from.

  • The limit of the number of items to return, for example, limit=50. If not specified, a default of 100 is used.

    Possible values: value ≥ 1

    Example: 100

  • Filter results based on the specified name.

  • Filter results based on the specified description.
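
No request example is provided for this method. A minimal Python sketch based on the get_table_definitions signature above, combining a sort key and a page size; datastage_service and config follow the earlier Python examples.

    table_definitions = datastage_service.get_table_definitions(
        project_id=config['PROJECT_ID'],
        sort='-metadata.create_time',  # most recently created first
        limit=50                       # page size; the default is 100
    ).get_result()

    print(json.dumps(table_definitions, indent=2))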

Response

A page from a collection of table definitions.

Status Code

  • The requested operation completed successfully.

  • You are not permitted to perform this action. See response for more information.

  • Not authorized.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Create table definition

Creates a table definition in the specified project or catalog (either project_id or catalog_id must be set). All subsequent calls that use the table definition must specify the project or catalog ID in which it was created.

POST /v3/table_definitions
ServiceCall<TableDefinition> createTableDefinition(CreateTableDefinitionOptions createTableDefinitionOptions)
createTableDefinition(params)
create_table_definition(
        self,
        entity: 'TableDefinitionEntity',
        metadata: 'TableDefinitionMetadata',
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        asset_category: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CreateTableDefinitionOptions.Builder to create a CreateTableDefinitionOptions object that contains the parameter values for the createTableDefinition method.

Query Parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset ID to create the asset in or move it to.

  • The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.

    Allowable values: [SYSTEM,USER]

The table definition to be created.

The createTableDefinition options.

parameters

  • The underlying table definition.

  • System metadata about a table definition.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset ID to create the asset in or move it to.

  • The category of the asset. Must be either SYSTEM or USER. Only a registered service can use this parameter.

    Allowable values: [SYSTEM,USER]
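
No request example is provided for this method. The sketch below shows only the call shape in Python; the entity and metadata payloads must conform to the TableDefinitionEntity and TableDefinitionMetadata schemas, which are not reproduced in this section, so the empty dictionaries are placeholders rather than valid payloads.

    table_definition = datastage_service.create_table_definition(
        entity={},    # fill in per the TableDefinitionEntity schema (placeholder)
        metadata={},  # fill in per the TableDefinitionMetadata schema (placeholder)
        project_id=config['PROJECT_ID']
    ).get_result()

    print(json.dumps(table_definition, indent=2))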

Response

A table definition model that defines a set of parameters that can be referenced at runtime.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Delete a table definition

Delete the specified table definition from a project or catalog (either project_id or catalog_id must be set).

DELETE /v3/table_definitions/{table_id}

Request

Path Parameters

  • Table definition ID

Query Parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.
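
No client-library method is shown for this endpoint. A minimal Python sketch that calls the REST endpoint directly with the requests package; base_url, iam_token, and table_id are illustrative and must be supplied by the caller.

    import requests

    response = requests.delete(
        f"{base_url}/v3/table_definitions/{table_id}",
        headers={"Authorization": f"Bearer {iam_token}"},
        params={"project_id": "bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"},
    )

    print(response.status_code)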

Response

Status Code

  • The requested operation completed successfully.

  • Bad request. See response for more information.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • Gone. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Get table definition

Get the specified table definition from a project or catalog (either project_id or catalog_id must be set).

GET /v3/table_definitions/{table_id}
ServiceCall<TableDefinition> getTableDefinition(GetTableDefinitionOptions getTableDefinitionOptions)
getTableDefinition(params)
get_table_definition(
        self,
        table_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GetTableDefinitionOptions.Builder to create a GetTableDefinitionOptions object that contains the parameter values for the getTableDefinition method.

Path Parameters

  • Table definition ID

Query Parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

The getTableDefinition options.

parameters

  • Table definition ID.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.
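
No request example is provided for this method. A minimal Python sketch based on the get_table_definition signature above; the table definition ID is illustrative.

    table_definition = datastage_service.get_table_definition(
        table_id=tableDefinitionId,  # existing table definition ID (illustrative)
        project_id=config['PROJECT_ID']
    ).get_result()

    print(json.dumps(table_definition, indent=2))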

Response

A table definition model that defines a set of parameters that can be referenced at runtime.

Status Code

  • Table definition found.

  • Bad request. See response for more information.

  • You are not authorized to retrieve the table definition.

  • You are not permitted to perform this action.

  • The data source type details cannot be found.

  • The service is currently receiving more requests than it can process in a timely fashion. Please retry submitting your request later.

  • An error occurred. No table definitions were retrieved.

  • A timeout occurred when processing your request. Please retry later.

No Sample Response

This method does not specify any sample responses.

Patch a table definition

Patch a table definition in the specified project or catalog (either project_id or catalog_id must be set).

PATCH /v3/table_definitions/{table_id}
ServiceCall<TableDefinition> patchTableDefinition(PatchTableDefinitionOptions patchTableDefinitionOptions)
patchTableDefinition(params)
patch_table_definition(
        self,
        table_id: str,
        json_patch: List['PatchDocument'],
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the PatchTableDefinitionOptions.Builder to create a PatchTableDefinitionOptions object that contains the parameter values for the patchTableDefinition method.

Path Parameters

  • Table definition ID

Query Parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

The patch operations to apply.

The patchTableDefinition options.

parameters

  • Table definition ID.

  • The patch operations to apply.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.
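
No request example is provided for this method. A minimal Python sketch based on the patch_table_definition signature above, assuming json_patch accepts standard JSON Patch (RFC 6902) operations; the patch path and value are illustrative.

    table_definition = datastage_service.patch_table_definition(
        table_id=tableDefinitionId,  # existing table definition ID (illustrative)
        json_patch=[{
            'op': 'replace',                  # RFC 6902 operation
            'path': '/metadata/description',  # illustrative path
            'value': 'Updated description'
        }],
        project_id=config['PROJECT_ID']
    ).get_result()

    print(json.dumps(table_definition, indent=2))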

Response

A table definition model that defines a set of parameters that can be referenced at runtime.

Status Code

  • The requested operation completed successfully.

  • Bad request. See response for more information.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Update a table definition with a replacement

Replace the contents of a table definition in the specified project or catalog (either project_id or catalog_id must be set).

PUT /v3/table_definitions/{table_id}
ServiceCall<TableDefinition> updateTableDefinition(UpdateTableDefinitionOptions updateTableDefinitionOptions)
updateTableDefinition(params)
update_table_definition(
        self,
        table_id: str,
        entity: 'TableDefinitionEntity',
        metadata: 'TableDefinitionMetadata',
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the UpdateTableDefinitionOptions.Builder to create a UpdateTableDefinitionOptions object that contains the parameter values for the updateTableDefinition method.

Path Parameters

  • Table definition ID

Query Parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset ID to create the asset in or move it to.

The table definition to be updated.

The updateTableDefinition options.

parameters

  • Table definition ID.

  • The underlying table definition.

  • System metadata about a table definition.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset ID to create the asset in or move it to.
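
No request example is provided for this method. As with create_table_definition, the sketch below shows only the call shape in Python; the replacement entity and metadata payloads must conform to the TableDefinitionEntity and TableDefinitionMetadata schemas, so the empty dictionaries are placeholders.

    table_definition = datastage_service.update_table_definition(
        table_id=tableDefinitionId,  # existing table definition ID (illustrative)
        entity={},                   # replacement payload per the TableDefinitionEntity schema (placeholder)
        metadata={},                 # replacement payload per the TableDefinitionMetadata schema (placeholder)
        project_id=config['PROJECT_ID']
    ).get_result()

    print(json.dumps(table_definition, indent=2))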

Response

A table definition model that defines a set of parameters that can be referenced at runtime.

Status Code

  • The requested operation completed successfully.

  • Bad request. See response for more information.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Clone table definition

Create a table definition in the specified project or catalog based on an existing table definition in the same project or catalog.

POST /v3/table_definitions/{table_id}/clone
ServiceCall<TableDefinition> cloneTableDefinition(CloneTableDefinitionOptions cloneTableDefinitionOptions)
cloneTableDefinition(params)
clone_table_definition(
        self,
        table_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        name: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CloneTableDefinitionOptions.Builder to create a CloneTableDefinitionOptions object that contains the parameter values for the cloneTableDefinition method.

Path Parameters

  • Table definition ID

Query Parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset id to create the asset in or move to

  • The new name.

The cloneTableDefinition options.

parameters

  • Table definition ID.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset ID to create the asset in or move it to.

  • The new name.

Response

A table definition model that defines a set of parameters that can be referenced at runtime.

A table definition model that defines a set of parameters that can be referenced at runtime.

A table definition model that defines a set of parameters that can be referenced at runtime.

A table definition model that defines a set of parameters that can be referenced at runtime.

Status Code

  • The requested operation completed successfully.

  • Bad request. See response for more information.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • Internal server error. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
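
The following Python sketch shows one way to call this method with the client library. It assumes the client is configured through the external configuration described in the Authentication section, that the import path matches your installed SDK version, and that the table definition ID is a placeholder you replace with your own.

from datastage.datastage_v3 import DatastageV3

# Construct the client from external configuration (for example, credentials.env).
service = DatastageV3.new_instance()

# Clone a table definition into the same project under a new name.
response = service.clone_table_definition(
    table_id='your-table-definition-id',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    name='my_table_definition_copy',
)
print(response.get_result())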

Delete assets and their attachments

Delete assets and their attachments

Delete assets and their attachments.

Delete assets and their attachments.

Delete assets and their attachments.

DELETE /v3/assets
ServiceCall<Void> deleteAssets(DeleteAssetsOptions deleteAssetsOptions)
deleteAssets(params)
delete_assets(
        self,
        asset_ids: List[str],
        *,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        asset_type: Optional[str] = None,
        purge_test_data: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the DeleteAssetsOptions.Builder to create a DeleteAssetsOptions object that contains the parameter values for the deleteAssets method.

Query Parameters

  • A list of asset IDs

  • The ID of the project to use. space_id or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. space_id or project_id is required.

  • The type of asset

  • Only takes effect when deleting a data_intg_test_case asset. By default, it is false, which means data files are kept.

The deleteAssets options.

parameters

  • A list of asset IDs.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • The type of asset.

  • Only takes effect when deleting a data_intg_test_case asset. By default, it is false, which means data files are kept.

parameters

  • A list of asset IDs.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • The type of asset.

  • Only takes effect when deleting a data_intg_test_case asset. By default, it is false, which means data files are kept.

Response

Status Code

  • The requested operation completed successfully.

  • The requested operation is in progress.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
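
As a minimal sketch, reusing the service client constructed in the earlier example, the following deletes two assets and their attachments; the asset IDs are placeholders.

# Delete the listed assets and their attachments from the project.
service.delete_assets(
    asset_ids=['asset-id-1', 'asset-id-2'],
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)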

List assets

Lists the assets that are contained in the specified project.

Use the following parameters to filter the results:

| Field             | Match type  | Example                             |
| ----------------- | ----------- | ----------------------------------- |
| asset.name        | Starts with | ?asset.name=starts:assetName        |
| asset.description | Equals      | ?asset.description=starts:assetDesc |

To sort the returned results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time, returning the most recently created data flows first.

| Field                | Example                     |
| -------------------- | --------------------------- |
| asset.name           | ?sort=+asset.name           |
| metadata.create_time | ?sort=-metadata.create_time |

Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name use: ?sort=-metadata.create_time,+asset.name.

Lists the assets that are contained in the specified project.

Use the following parameters to filter the results:

| Field             | Match type  | Example                             |
| ----------------- | ----------- | ----------------------------------- |
| asset.name        | Starts with | ?asset.name=starts:assetName        |
| asset.description | Equals      | ?asset.description=starts:assetDesc |

To sort the returned results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time, returning the most recently created data flows first.

| Field                | Example                     |
| -------------------- | --------------------------- |
| asset.name           | ?sort=+asset.name           |
| metadata.create_time | ?sort=-metadata.create_time |

Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name use: ?sort=-metadata.create_time,+asset.name.

Lists the assets that are contained in the specified project.

Use the following parameters to filter the results:

| Field             | Match type  | Example                             |
| ----------------- | ----------- | ----------------------------------- |
| asset.name        | Starts with | ?asset.name=starts:assetName        |
| asset.description | Equals      | ?asset.description=starts:assetDesc |

To sort the returned results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time, returning the most recently created data flows first.

| Field                | Example                     |
| -------------------- | --------------------------- |
| asset.name           | ?sort=+asset.name           |
| metadata.create_time | ?sort=-metadata.create_time |

Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name use: ?sort=-metadata.create_time,+asset.name.

Lists the assets that are contained in the specified project.

Use the following parameters to filter the results:

| Field             | Match type  | Example                             |
| ----------------- | ----------- | ----------------------------------- |
| asset.name        | Starts with | ?asset.name=starts:assetName        |
| asset.description | Equals      | ?asset.description=starts:assetDesc |

To sort the returned results, use one or more of the parameters described in the following section. If no sort key is specified, the results are sorted in descending order on metadata.create_time, returning the most recently created data flows first.

| Field                | Example                     |
| -------------------- | --------------------------- |
| asset.name           | ?sort=+asset.name           |
| metadata.create_time | ?sort=-metadata.create_time |

Multiple sort keys can be specified by delimiting them with a comma. For example, to sort in descending order on create_time and then in ascending order on name use: ?sort=-metadata.create_time,+asset.name.

GET /v3/assets
ServiceCall<DSAssetPagedCollection> findAssets(FindAssetsOptions findAssetsOptions)
findAssets(params)
find_assets(
        self,
        *,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        asset_type: Optional[str] = None,
        sort: Optional[str] = None,
        start: Optional[str] = None,
        limit: Optional[int] = None,
        asset_name: Optional[str] = None,
        asset_description: Optional[str] = None,
        asset_resource_key: Optional[str] = None,
        tags: Optional[List[str]] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the FindAssetsOptions.Builder to create a FindAssetsOptions object that contains the parameter values for the findAssets method.

Query Parameters

  • The ID of the project to use. space_id or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. space_id or project_id is required.

  • The type of asset

  • The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time

  • The page token indicating where to start paging from.

  • The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used.

    Possible values: value ≥ 1

    Example: 100

  • Filter results based on the specified name.

  • Filter results based on the specified description.

  • Filter results based on the specified resource_key.

  • A list of tags

The findAssets options.

parameters

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • The type of asset.

  • The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.

  • The page token indicating where to start paging from.

  • The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used.

    Possible values: value ≥ 1

    Examples:
  • Filter results based on the specified name.

  • Filter results based on the specified description.

  • Filter results based on the specified resource_key.

  • A list of tags.

parameters

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • The type of asset.

  • The field to sort the results on, including whether to sort ascending (+) or descending (-), for example, sort=-metadata.create_time.

  • The page token indicating where to start paging from.

  • The limit of the number of items to return, for example limit=50. If not specified, a default of 100 is used.

    Possible values: value ≥ 1

    Examples:
  • Filter results based on the specified name.

  • Filter results based on the specified description.

  • Filter results based on the specified resource_key.

  • A list of tags.

Response

A page from a collection of assets

A page from a collection of assets.

A page from a collection of assets.

A page from a collection of assets.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
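
For example, the following Python sketch (using the same service client as above) lists up to 50 assets sorted by most recent creation time; the exact fields of the returned page depend on the DSAssetPagedCollection schema.

# List assets, newest first, 50 per page.
response = service.find_assets(
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    sort='-metadata.create_time',
    limit=50,
)
page = response.get_result()
print(page)  # one page from the asset collection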

Create asset

Create asset

Create asset.

Create asset.

Create asset.

POST /v3/assets
ServiceCall<DSAsset> createAsset(CreateAssetOptions createAssetOptions)
createAsset(params)
create_asset(
        self,
        asset_type: str,
        entity: dict,
        name: str,
        *,
        attachments: Optional[List[dict]] = None,
        description: Optional[str] = None,
        tags: Optional[List[str]] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CreateAssetOptions.Builder to create a CreateAssetOptions object that contains the parameter values for the createAsset method.

Query Parameters

  • The ID of the project to use. space_id or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. space_id or project_id is required.

  • The directory asset id to create the asset in or move to

Asset definition

The createAsset options.

parameters

  • asset type.

  • asset entity definition.

    Examples:
  • asset name.

  • asset description.

  • List of tags to identify the asset.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • The directory asset id to create the asset in or move to.

parameters

  • asset type.

  • asset entity definition.

    Examples:
  • asset name.

  • asset description.

  • List of tags to identify the asset.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • The directory asset id to create the asset in or move to.

Response

DataStage asset definition

DataStage asset definition.

DataStage asset definition.

DataStage asset definition.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
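
A minimal creation sketch with the same service client; the asset type and entity below are illustrative placeholders, not values prescribed by this reference.

# Create an asset with a placeholder type and an empty entity definition.
response = service.create_asset(
    asset_type='your-asset-type',
    entity={},
    name='my_asset',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())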

Create/update asset based on the given zip file

Create/update asset based on the given zip file. Unzip the body and use the fixed metadata file (${asset_type}_metadata) to create or update the asset, then upload the remaining files as attachments one by one.

Create/update asset based on the given zip file. Unzip the body and use the fixed metadata file (${asset_type}_metadata) to create or update the asset, then upload the remaining files as attachments one by one.

Create/update asset based on the given zip file. Unzip the body and use the fixed metadata file (${asset_type}_metadata) to create or update the asset, then upload the remaining files as attachments one by one.

Create/update asset based on the given zip file. Unzip the body and use the fixed metadata file (${asset_type}_metadata) to create or update the asset, then upload the remaining files as attachments one by one.

PUT /v3/assets
ServiceCall<DSAsset> zipImport(ZipImportOptions zipImportOptions)
zipImport(params)
zip_import(
        self,
        body: BinaryIO,
        *,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        asset_type: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        override: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the ZipImportOptions.Builder to create a ZipImportOptions object that contains the parameter values for the zipImport method.

Query Parameters

  • The ID of the project to use. space_id or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. space_id or project_id is required.

  • The type of asset

  • The directory asset id to create the asset in or move to

  • Whether to override the header file if it uses a common path and already exists in the cluster. By default, it is false. Applies to data_set and file_set assets only.

The zip file to import.

The zipImport options.

parameters

  • The zip file to import.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • The type of asset.

  • The directory asset id to create the asset in or move to.

  • Whether to override the header file if it uses a common path and already exists in the cluster. By default, it is false. Applies to data_set and file_set assets only.

parameters

  • The zip file to import.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • The type of asset.

  • The directory asset id to create the asset in or move to.

  • Whether to override the header file if it uses a common path and already exists in the cluster. By default, it is false. Applies to data_set and file_set assets only.

Response

DataStage asset definition

DataStage asset definition.

DataStage asset definition.

DataStage asset definition.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
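
As a sketch, the following imports a local zip file with the same service client; the file path is a placeholder, and override is shown only to illustrate the flag.

# Import an asset from a zip file, overriding an existing common-path header file.
with open('my_asset.zip', 'rb') as zip_file:
    response = service.zip_import(
        body=zip_file,
        project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
        override=True,
    )
print(response.get_result())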

Get asset

Get asset

Get asset.

Get asset.

Get asset.

GET /v3/assets/{asset_id}
ServiceCall<DSAsset> getAsset(GetAssetOptions getAssetOptions)
getAsset(params)
get_asset(
        self,
        asset_id: str,
        *,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GetAssetOptions.Builder to create a GetAssetOptions object that contains the parameter values for the getAsset method.

Path Parameters

  • The ID of the asset

Query Parameters

  • The ID of the project to use. space_id or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. space_id or project_id is required.

The getAsset options.

parameters

  • The ID of the asset.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

parameters

  • The ID of the asset.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

Response

DataStage asset definition

DataStage asset definition.

DataStage asset definition.

DataStage asset definition.

Status Code

  • Asset definition

  • Bad request. See response for more information.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • An error occurred.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
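
A minimal retrieval sketch with the same service client; the asset ID is a placeholder.

response = service.get_asset(
    asset_id='your-asset-id',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())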

Update asset with a replacement

Update asset with a replacement. If a property is not provided in request body, its corresponding asset field will not be updated.

Update asset with a replacement. If a property is not provided in request body, its corresponding asset field will not be updated.

Update asset with a replacement. If a property is not provided in request body, its corresponding asset field will not be updated.

Update asset with a replacement. If a property is not provided in request body, its corresponding asset field will not be updated.

PUT /v3/assets/{asset_id}
ServiceCall<DSAsset> updateAsset(UpdateAssetOptions updateAssetOptions)
updateAsset(params)
update_asset(
        self,
        asset_id: str,
        *,
        description: Optional[str] = None,
        entity: Optional[dict] = None,
        name: Optional[str] = None,
        tags: Optional[List[str]] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        override: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the UpdateAssetOptions.Builder to create a UpdateAssetOptions object that contains the parameter values for the updateAsset method.

Path Parameters

  • The ID of the asset

Query Parameters

  • The ID of the project to use. space_id or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. space_id or project_id is required.

  • The directory asset id to create the asset in or move to

  • Whether to override the header file if it uses a common path and already exists in the cluster. By default, it is false. Applies to data_set and file_set assets only.

New asset definition

The updateAsset options.

parameters

  • The ID of the asset.

  • asset description.

  • asset entity definition.

    Examples:
  • asset name.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • The directory asset id to create the asset in or move to.

  • Whether to override the header file if it uses a common path and already exists in the cluster. By default, it is false. Applies to data_set and file_set assets only.

parameters

  • The ID of the asset.

  • asset description.

  • asset entity definition.

    Examples:
  • asset name.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • The directory asset id to create the asset in or move to.

  • Whether to override the header file if it uses a common path and already exists in the cluster. By default, it is false. Applies to data_set and file_set assets only.

Response

DataStage asset definition

DataStage asset definition.

DataStage asset definition.

DataStage asset definition.

Status Code

  • The requested operation completed successfully.

  • Bad request. See response for more information.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
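
Because omitted properties are left unchanged, a sketch like the following (same service client, placeholder IDs) renames an asset without touching its entity or tags.

# Only name and description are updated; other fields keep their values.
response = service.update_asset(
    asset_id='your-asset-id',
    name='renamed_asset',
    description='Updated description',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())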

Delete attachments

Delete attachments

Delete attachments.

Delete attachments.

Delete attachments.

DELETE /v3/assets/{asset_id}/attachments
ServiceCall<Void> deleteAttachments(DeleteAttachmentsOptions deleteAttachmentsOptions)
deleteAttachments(params)
delete_attachments(
        self,
        asset_id: str,
        attachment_ids: List[str],
        *,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the DeleteAttachmentsOptions.Builder to create a DeleteAttachmentsOptions object that contains the parameter values for the deleteAttachments method.

Path Parameters

  • The ID of the asset

Query Parameters

  • A list of attachment GUIDs or names

  • The ID of the project to use. space_id or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. space_id or project_id is required.

The deleteAttachments options.

parameters

  • The ID of the asset.

  • A list of attachment GUIDs or names.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

parameters

  • The ID of the asset.

  • A list of attachment GUIDs or names.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

Response

Status Code

  • The requested operation completed successfully.

  • The requested operation is in progress.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
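
A minimal sketch with the same service client; the attachment list can mix GUIDs and names, and all IDs below are placeholders.

# Remove one attachment from the asset.
service.delete_attachments(
    asset_id='your-asset-id',
    attachment_ids=['attachment-id-or-name'],
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)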

Create attachment

Create attachment

Create attachment.

Create attachment.

Create attachment.

POST /v3/assets/{asset_id}/attachments
ServiceCall<DSAttachment> createAttachment(CreateAttachmentOptions createAttachmentOptions)
createAttachment(params)
create_attachment(
        self,
        asset_id: str,
        attachment_name: str,
        attachment_type: str,
        body: BinaryIO,
        *,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CreateAttachmentOptions.Builder to create a CreateAttachmentOptions object that contains the parameter values for the createAttachment method.

Path Parameters

  • The ID of the asset

Query Parameters

  • The name of the new attachment

  • The mime type of the new attachment

    Allowable values: [application/octet-stream,application/json]

  • The ID of the project to use. space_id or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. space_id or project_id is required.

Attachment content

The createAttachment options.

parameters

  • The ID of the asset.

  • The name of the new attachment.

  • The mime type of the new attachment.

    Allowable values: [application/octet-stream,application/json]

  • Attachment content.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

parameters

  • The ID of the asset.

  • The name of the new attachment.

  • The mime type of the new attachment.

    Allowable values: [application/octet-stream,application/json]

  • Attachment content.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

Response

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
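
The following upload sketch uses the same service client; the file path and asset ID are placeholders.

# Upload a local file as a new binary attachment on the asset.
with open('payload.bin', 'rb') as content:
    response = service.create_attachment(
        asset_id='your-asset-id',
        attachment_name='payload.bin',
        attachment_type='application/octet-stream',
        body=content,
        project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    )
print(response.get_result())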

Get attachment

Get attachment

Get attachment.

Get attachment.

Get attachment.

GET /v3/assets/{asset_id}/attachments/{attachment_id}
ServiceCall<InputStream> getAttachment(GetAttachmentOptions getAttachmentOptions)
getAttachment(params)
get_attachment(
        self,
        asset_id: str,
        attachment_id: str,
        *,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GetAttachmentOptions.Builder to create a GetAttachmentOptions object that contains the parameter values for the getAttachment method.

Path Parameters

  • The ID of the asset

  • The GUID or name of the attachment

Query Parameters

  • The ID of the project to use. space_id or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. space_id or project_id is required.

The getAttachment options.

parameters

  • The ID of the asset.

  • The GUID or name of the attachment.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

parameters

  • The ID of the asset.

  • The GUID or name of the attachment.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

Response

Response type: InputStream

Response type: NodeJS.ReadableStream

Response type: BinaryIO

Status Code

  • Attachment content

  • Bad request. See response for more information.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • An error occurred.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
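
Since the response is binary, a download sketch looks like the following (same service client, placeholder IDs); it assumes the result exposes its raw bytes through the underlying HTTP response's content attribute.

# Fetch the attachment and write its bytes to a local file.
response = service.get_attachment(
    asset_id='your-asset-id',
    attachment_id='attachment-id-or-name',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
with open('attachment_content', 'wb') as out:
    out.write(response.get_result().content)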

Patch attachment

Patch attachment

Patch attachment.

Patch attachment.

Patch attachment.

PATCH /v3/assets/{asset_id}/attachments/{attachment_id}
ServiceCall<DSAttachment> patchAttachment(PatchAttachmentOptions patchAttachmentOptions)
patchAttachment(params)
patch_attachment(
        self,
        asset_id: str,
        attachment_id: str,
        request_body: dict,
        *,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the PatchAttachmentOptions.Builder to create a PatchAttachmentOptions object that contains the parameter values for the patchAttachment method.

Path Parameters

  • The ID of the asset

  • The GUID or name of the attachment

Query Parameters

  • The ID of the project to use. space_id or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. space_id or project_id is required.

The patch JSON

The patchAttachment options.

parameters

  • The ID of the asset.

  • The GUID or name of the attachment.

  • The patch JSON.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

parameters

  • The ID of the asset.

  • The GUID or name of the attachment.

  • The patch JSON.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

Response

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Replace attachment

Replace attachment

Replace attachment.

Replace attachment.

Replace attachment.

PUT /v3/assets/{asset_id}/attachments/{attachment_id}
ServiceCall<DSAttachment> replaceAttachment(ReplaceAttachmentOptions replaceAttachmentOptions)
replaceAttachment(params)
replace_attachment(
        self,
        asset_id: str,
        attachment_id: str,
        attachment_name: str,
        attachment_type: str,
        body: BinaryIO,
        *,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the ReplaceAttachmentOptions.Builder to create a ReplaceAttachmentOptions object that contains the parameter values for the replaceAttachment method.

Path Parameters

  • The ID of the asset

  • The GUID or name of the attachment

Query Parameters

  • The name of the new attachment

  • The mime type of the new attachment

    Allowable values: [application/octet-stream,application/json]

  • The ID of the project to use. space_id or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. space_id or project_id is required.

Attachment content

The replaceAttachment options.

parameters

  • The ID of the asset.

  • The GUID or name of the attachment.

  • The name of the new attachment.

  • The mime type of the new attachment.

    Allowable values: [application/octet-stream,application/json]

  • Attachment content.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

parameters

  • The ID of the asset.

  • The GUID or name of the attachment.

  • The name of the new attachment.

  • The mime type of the new attachment.

    Allowable values: [application/octet-stream,application/json]

  • Attachment content.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

Response

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
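
A replacement sketch with the same service client; all names and IDs are placeholders.

# Replace the attachment's content and name in one call.
with open('payload_v2.bin', 'rb') as content:
    service.replace_attachment(
        asset_id='your-asset-id',
        attachment_id='attachment-id-or-name',
        attachment_name='payload_v2.bin',
        attachment_type='application/octet-stream',
        body=content,
        project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    )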

Clone asset and its attachments

Clone asset and its attachments

Clone asset and its attachments.

Clone asset and its attachments.

Clone asset and its attachments.

POST /v3/assets/{asset_id}/clone
ServiceCall<DSAsset> cloneAsset(CloneAssetOptions cloneAssetOptions)
cloneAsset(params)
clone_asset(
        self,
        asset_id: str,
        *,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        name: Optional[str] = None,
        override: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CloneAssetOptions.Builder to create a CloneAssetOptions object that contains the parameter values for the cloneAsset method.

Path Parameters

  • The ID of the asset

Query Parameters

  • The ID of the project to use. space_id or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. space_id or project_id is required.

  • The directory asset id to create the asset in or move to

  • The new name

  • Whether to override the header file if it uses a common path and already exists in the cluster. By default, it is false. Applies to data_set and file_set assets only.

The cloneAsset options.

parameters

  • The ID of the asset.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • The directory asset id to create the asset in or move to.

  • The new name.

  • Whether to override the header file if it uses a common path and already exists in the cluster. By default, it is false. Applies to data_set and file_set assets only.

parameters

  • The ID of the asset.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • The directory asset id to create the asset in or move to.

  • The new name.

  • Whether to override the header file if it uses a common path and already exists in the cluster. By default, it is false. Applies to data_set and file_set assets only.

Response

DataStage asset definition

DataStage asset definition.

DataStage asset definition.

DataStage asset definition.

Status Code

  • The requested operation completed successfully.

  • Bad request. See response for more information.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
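
A minimal cloning sketch with the same service client; the asset ID is a placeholder.

# Clone the asset and its attachments under a new name.
response = service.clone_asset(
    asset_id='your-asset-id',
    name='my_asset_copy',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())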

Export the asset with all its attachments as a zip

Export the asset with all its attachments as a zip file, which includes the fixed metadata file (${asset_type}_metadata) and all attachments (excluding the entity attachment).

Export the asset with all its attachments as a zip file, which includes the fixed metadata file (${asset_type}_metadata) and all attachments (excluding the entity attachment).

Export the asset with all its attachments as a zip file, which includes the fixed metadata file (${asset_type}_metadata) and all attachments (excluding the entity attachment).

Export the asset with all its attachments as a zip file, which includes the fixed metadata file (${asset_type}_metadata) and all attachments (excluding the entity attachment).

GET /v3/assets/{asset_id}/export
ServiceCall<InputStream> zipExport(ZipExportOptions zipExportOptions)
zipExport(params)
zip_export(
        self,
        asset_id: str,
        *,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        max_allowed_data_size: Optional[int] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the ZipExportOptions.Builder to create a ZipExportOptions object that contains the parameter values for the zipExport method.

Path Parameters

  • The ID of the asset

Query Parameters

  • The ID of the project to use. space_id or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. space_id or project_id is required.

  • (Optional) Only data smaller than this size, in megabytes, is exported. Only takes effect on data_intg_data_set, data_intg_file_set, and data_intg_test_case assets. If unset or negative, 500 is used. If set to 0, data is skipped. If greater than 1000, 1000 is used.

The zipExport options.

parameters

  • The ID of the asset.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • (Optional) Only data smaller than this size, in megabytes, is exported. Only takes effect on data_intg_data_set, data_intg_file_set, and data_intg_test_case assets. If unset or negative, 500 is used. If set to 0, data is skipped. If greater than 1000, 1000 is used.

parameters

  • The ID of the asset.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • (Optional) Only data smaller than this size, in megabytes, is exported. Only takes effect on data_intg_data_set, data_intg_file_set, and data_intg_test_case assets. If unset or negative, 500 is used. If set to 0, data is skipped. If greater than 1000, 1000 is used.

Response

Response type: InputStream

Response type: NodeJS.ReadableStream

Response type: BinaryIO

Status Code

  • The requested operation completed successfully.

  • Bad request. See response for more information.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The data source type details cannot be found.

  • An error occurred.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
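
An export sketch with the same service client; as with the attachment download, it assumes the binary result exposes its bytes via the underlying response's content attribute.

# Export the asset and its attachments, capping exported data at 500 MB.
response = service.zip_export(
    asset_id='your-asset-id',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    max_allowed_data_size=500,
)
with open('asset_export.zip', 'wb') as out:
    out.write(response.get_result().content)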

List DataStage XML schema libraries

List existing DataStage XML schema libraries in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

List existing DataStage XML schema libraries in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

List existing DataStage XML schema libraries in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

List existing DataStage XML schema libraries in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

GET /v3/schema_libraries
ServiceCall<LibraryList> listDatastageLibraries(ListDatastageLibrariesOptions listDatastageLibrariesOptions)
listDatastageLibraries(params)
list_datastage_libraries(
        self,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the ListDatastageLibrariesOptions.Builder to create a ListDatastageLibrariesOptions object that contains the parameter values for the listDatastageLibraries method.

Query Parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

The listDatastageLibraries options.

parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

Response

The list of libraries.

The list of libraries.

The list of libraries.

The list of libraries.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
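
A minimal listing sketch with the same service client:

response = service.list_datastage_libraries(
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())  # the list of libraries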

Create a new DataStage XML schema library

Creates a new DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

Creates a new DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

Creates a new DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

Creates a new DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

POST /v3/schema_libraries
ServiceCall<Library> createDatastageLibrary(CreateDatastageLibraryOptions createDatastageLibraryOptions)
createDatastageLibrary(params)
create_datastage_library(
        self,
        name: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        folder: Optional[str] = None,
        description: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CreateDatastageLibraryOptions.Builder to create a CreateDatastageLibraryOptions object that contains the parameter values for the createDatastageLibrary method.

Query Parameters

  • The name of the new XML schema library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset id to create the asset in or move to

  • The folder that the new XML schema library belongs to.

  • The description of the new XML schema library.

The createDatastageLibrary options.

parameters

  • The name of the new XML schema library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset id to create the asset in or move to.

  • The folder that the new XML schema library belongs to.

  • The description of the new XML schema library.

parameters

  • The name of the new XML schema library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset id to create the asset in or move to.

  • The folder that the new XML schema library belongs to.

  • The description of the new XML schema library.

Response

Library info.

Library info.

Library info.

Library info.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
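
A minimal creation sketch with the same service client; the name and description are placeholders.

response = service.create_datastage_library(
    name='my_schema_library',
    description='XML schemas for order processing',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())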

Delete a DataStage XML schema library

Delete a DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

Delete a DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

Delete a DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

Delete a DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

DELETE /v3/schema_libraries/{library_id}
ServiceCall<Void> deleteDatastageLibrary(DeleteDatastageLibraryOptions deleteDatastageLibraryOptions)
deleteDatastageLibrary(params)
delete_datastage_library(
        self,
        library_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the DeleteDatastageLibraryOptions.Builder to create a DeleteDatastageLibraryOptions object that contains the parameter values for the deleteDatastageLibrary method.

Path Parameters

  • The ID of the XML Schema Library.

Query Parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

The deleteDatastageLibrary options.

parameters

  • The ID of the XML Schema Library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

parameters

  • The ID of the XML Schema Library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

Response

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Get the specified DataStage XML schema library

Get the specified DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

Get the specified DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

Get the specified DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

Get the specified DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

GET /v3/schema_libraries/{library_id}
ServiceCall<Library> getDatastageLibrary(GetDatastageLibraryOptions getDatastageLibraryOptions)
getDatastageLibrary(params)
get_datastage_library(
        self,
        library_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GetDatastageLibraryOptions.Builder to create a GetDatastageLibraryOptions object that contains the parameter values for the getDatastageLibrary method.

Path Parameters

  • The ID of the XML Schema Library.

Query Parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

The getDatastageLibrary options.

parameters

  • The ID of the XML Schema Library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

parameters

  • The ID of the XML Schema Library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

Response

Library info.

Library info.

Library info.

Library info.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred.

No Sample Response

This method does not specify any sample responses.

Update a DataStage XML schema library

Update a DataStage XML schema library. Only the name and directory asset ID can be updated.

Update a DataStage XML schema library. Only the name and directory asset ID can be updated.

Update a DataStage XML schema library. Only the name and directory asset ID can be updated.

Update a DataStage XML schema library. Only the name and directory asset ID can be updated.

POST /v3/schema_libraries/{library_id}
ServiceCall<Library> updateDatastageLibrary(UpdateDatastageLibraryOptions updateDatastageLibraryOptions)
updateDatastageLibrary(params)
update_datastage_library(
        self,
        library_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        name: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the UpdateDatastageLibraryOptions.Builder to create a UpdateDatastageLibraryOptions object that contains the parameter values for the updateDatastageLibrary method.

Path Parameters

  • The ID of the XML Schema Library.

Query Parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset id to create the asset in or move to

  • The new name of the XML schema library.

The updateDatastageLibrary options.

parameters

  • The ID of the XML Schema Library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset id to create the asset in or move to.

  • The new name of the XML schema library.

parameters

  • The ID of the XML Schema Library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset id to create the asset in or move to.

  • The new name of the XML schema library.

Response

Library info.

Library info.

Library info.

Library info.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred.

No Sample Response

This method does not specify any sample responses.

Upload a file to an existing DataStage XML schema library

Upload a file to an existing DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set). This operation is not thread safe.

Upload a file to an existing DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set). This operation is not thread safe.

Upload a file to an existing DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set). This operation is not thread safe.

Upload a file to an existing DataStage XML schema library in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set). This operation is not thread safe.

PUT /v3/schema_libraries/{library_id}
ServiceCall<Library> uploadDatastageLibraryFile(UploadDatastageLibraryFileOptions uploadDatastageLibraryFileOptions)
uploadDatastageLibraryFile(params)
upload_datastage_library_file(
        self,
        library_id: str,
        body: BinaryIO,
        *,
        file_name: Optional[str] = None,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        max_tree_size: Optional[int] = None,
        output_global_type: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the UploadDatastageLibraryFileOptions.Builder to create a UploadDatastageLibraryFileOptions object that contains the parameter values for the uploadDatastageLibraryFile method.

Path Parameters

  • The ID of the XML Schema Library.

Query Parameters

  • The file name you want to upload to the specified XML schema library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The maximum tree size for one type in the schema library

  • Whether to output the global type

The content of the file to upload.

The uploadDatastageLibraryFile options.

parameters

  • The ID of the XML Schema Library.

  • The content of the file to upload.

  • The file name you want to upload to the specified XML schema library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The maximum tree size for one type in the schema library.

  • Whether to output the global type.

parameters

  • The ID of the XML Schema Library.

  • The content of the file to upload.

  • The file name you want to upload to the specified XML schema library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The maximum tree size for one type in the schema library.

  • Whether to output the global type.

Response

Library info.

Library info.

Library info.

Library info.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
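
An upload sketch with the same service client; the library ID and file path are placeholders, and because the operation is not thread safe, uploads to one library should be serialized.

# Upload an XSD file into the schema library.
with open('orders.xsd', 'rb') as schema_file:
    response = service.upload_datastage_library_file(
        library_id='your-library-id',
        body=schema_file,
        file_name='orders.xsd',
        project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    )
print(response.get_result())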

Clone a DataStage XML schema library

Clone a DataStage XML schema library based on the specified library ID in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

Clone a DataStage XML schema library based on the specified library ID in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

Clone a DataStage XML schema library based on the specified library ID in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

Clone a DataStage XML schema library based on the specified library ID in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

POST /v3/schema_libraries/{library_id}/clone
ServiceCall<Library> cloneDatastageLibrary(CloneDatastageLibraryOptions cloneDatastageLibraryOptions)
cloneDatastageLibrary(params)
clone_datastage_library(
        self,
        library_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        name: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CloneDatastageLibraryOptions.Builder to create a CloneDatastageLibraryOptions object that contains the parameter values for the cloneDatastageLibrary method.

Path Parameters

  • The ID of the XML Schema Library.

Query Parameters

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset id to create the asset in or move to

  • The new name of the XML schema library.

The cloneDatastageLibrary options.

parameters

  • The ID of the XML Schema Library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset id to create the asset in or move to.

  • The new name of the XML schema library.


Response

Library info.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred.

No Sample Response

This method does not specify any sample responses.
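In the absence of a sample, here is a minimal Python sketch of a clone request. It assumes the module path datastage.datastage_v3 from the pip package shown earlier, and that DatastageV3.new_instance() follows the same external-configuration pattern as the Java and Node.js examples; the library ID and new name are placeholders.

from datastage.datastage_v3 import DatastageV3

# Construct the client from the DATASTAGE_* external configuration.
service = DatastageV3.new_instance()

response = service.clone_datastage_library(
    library_id='<LIBRARY_ID>',  # placeholder: the XML schema library to clone
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    name='my_library_copy',     # hypothetical name for the clone
)
print(response.get_result())    # library info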

Delete files from a DataStage XML schema library

Delete files from a DataStage XML schema library based on the file_names in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set). This operation is not thread safe.

DELETE /v3/schema_libraries/{library_id}/file
ServiceCall<Void> deleteDatastageLibraryFiles(DeleteDatastageLibraryFilesOptions deleteDatastageLibraryFilesOptions)
deleteDatastageLibraryFiles(params)
delete_datastage_library_files(
        self,
        file_names: str,
        library_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        max_tree_size: Optional[int] = None,
        output_global_type: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the DeleteDatastageLibraryFilesOptions.Builder to create a DeleteDatastageLibraryFilesOptions object that contains the parameter values for the deleteDatastageLibraryFiles method.

Path Parameters

  • The ID of the XML Schema Library.

Query Parameters

  • The file names (path-dependent) that you want to delete from the specified XML schema library. Multiple files can be specified by delimiting them with a comma. Files that do not exist in this library are skipped.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The maximum tree size for one type in the schema library.

  • Whether to output the global type.

The deleteDatastageLibraryFiles options.

parameters

  • The file names (path-dependent) that you want to delete from the specified XML schema library. Multiple files can be specified by delimiting them with a comma. Files that do not exist in this library are skipped.

  • The ID of the XML Schema Library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The maximum tree size for one type in the schema library.

  • Whether to output the global type.


Response

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
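As a hedged illustration (same assumed client setup as the earlier sketch; the file names and library ID are placeholders), deleting two files might look like:

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()  # external configuration, as above

# Comma-delimited, path-dependent names; files that do not exist are skipped.
service.delete_datastage_library_files(
    file_names='schemas/a.xsd,schemas/b.xsd',
    library_id='<LIBRARY_ID>',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)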

Download a file from a DataStage XML schema library

Download a file from a DataStage XML schema library based on the file_name in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

GET /v3/schema_libraries/{library_id}/file
ServiceCall<InputStream> downloadDatastageLibraryFile(DownloadDatastageLibraryFileOptions downloadDatastageLibraryFileOptions)
downloadDatastageLibraryFile(params)
download_datastage_library_file(
        self,
        library_id: str,
        *,
        file_name: Optional[str] = None,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the DownloadDatastageLibraryFileOptions.Builder to create a DownloadDatastageLibraryFileOptions object that contains the parameter values for the downloadDatastageLibraryFile method.

Path Parameters

  • The ID of the XML Schema Library.

Query Parameters

  • The file name (path-dependent) that you want to download from the specified XML schema library. If specified, only that file is downloaded. If not specified, all files are downloaded as a zip file that maintains the original structure.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

The downloadDatastageLibraryFile options.

parameters

  • The ID of the XML Schema Library.

  • The file name (path-dependent) that you want to download from the specified XML schema library. If specified, only that file is downloaded. If not specified, all files are downloaded as a zip file that maintains the original structure.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.


Response

Response type: InputStream

Response type: NodeJS.ReadableStream

Response type: BinaryIO

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
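A minimal Python sketch of downloading a single file (the path-dependent file name is hypothetical; handling the binary result via .content follows the usual ibm-cloud-sdk-core pattern and is an assumption):

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.download_datastage_library_file(
    library_id='<LIBRARY_ID>',
    file_name='schemas/customer.xsd',  # hypothetical; omit to get all files as a zip
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
with open('customer.xsd', 'wb') as f:
    f.write(response.get_result().content)  # binary response body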

Rename a DataStage XML schema library

Rename a DataStage XML schema library based on the specified library ID in the specified project, catalog, or space (either project_id, catalog_id, or space_id must be set).

POST /v3/schema_libraries/{library_id}/rename
ServiceCall<Library> renameDatastageLibrary(RenameDatastageLibraryOptions renameDatastageLibraryOptions)
renameDatastageLibrary(params)
rename_datastage_library(
        self,
        library_id: str,
        name: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the RenameDatastageLibraryOptions.Builder to create a RenameDatastageLibraryOptions object that contains the parameter values for the renameDatastageLibrary method.

Path Parameters

  • The ID of the XML Schema Library.

Query Parameters

  • The new name of the XML schema library

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

The renameDatastageLibrary options.

parameters

  • The ID of the XML Schema Library.

  • The new name of the XML schema library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.


Response

Library info.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred.

No Sample Response

This method does not specify any sample responses.
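For example, a sketch under the same assumptions as the earlier examples (the library ID and new name are placeholders):

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.rename_datastage_library(
    library_id='<LIBRARY_ID>',
    name='renamed_library',  # hypothetical new name
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())  # library info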

Export an XML Schema Library in zip format

Export an XML Schema Library in zip format.

GET /v3/schema_libraries/{library_name}/zip
ServiceCall<InputStream> exportDatastageLibraryZip(ExportDatastageLibraryZipOptions exportDatastageLibraryZipOptions)
exportDatastageLibraryZip(params)
export_datastage_library_zip(
        self,
        library_name: str,
        *,
        folder: Optional[str] = None,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the ExportDatastageLibraryZipOptions.Builder to create an ExportDatastageLibraryZipOptions object that contains the parameter values for the exportDatastageLibraryZip method.

Path Parameters

  • The name or id of the XML schema library

Query Parameters

  • The folder of the XML schema library

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

The exportDatastageLibraryZip options.

parameters

  • The name or id of the XML schema library.

  • The folder of the XML schema library.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.


Response

Response type: InputStream

Response type: NodeJS.ReadableStream

Response type: BinaryIO

Status Code

  • Success.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
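A minimal Python sketch that exports a library and saves the zip (the library name and output path are placeholders; binary handling is assumed as before):

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.export_datastage_library_zip(
    library_name='myLibrary',  # name or ID of the library
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
with open('myLibrary.zip', 'wb') as f:
    f.write(response.get_result().content)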

Import/create XML Schema Library from zip stream

Import or create an XML Schema Library from a zip stream.

PUT /v3/schema_libraries/{library_name}/zip
ServiceCall<Library> importDatastageLibraryZip(ImportDatastageLibraryZipOptions importDatastageLibraryZipOptions)
importDatastageLibraryZip(params)
import_datastage_library_zip(
        self,
        library_name: str,
        body: BinaryIO,
        *,
        folder: Optional[str] = None,
        conflict_option: Optional[str] = None,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the ImportDatastageLibraryZipOptions.Builder to create an ImportDatastageLibraryZipOptions object that contains the parameter values for the importDatastageLibraryZip method.

Path Parameters

  • The name of the XML schema library

Query Parameters

  • The folder of the XML schema library

  • The conflict_option. The default is skip

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset id to create the asset in or move to

The importDatastageLibraryZip options.

parameters

  • The name of the XML schema library.

  • The folder of the XML schema library.

  • The conflict_option. The default is skip.

  • The ID of the catalog to use. catalog_id, space_id, or project_id is required.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset id to create the asset in or move to.


Response

Library info.

Status Code

  • Success.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
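A hedged Python sketch of importing a zip stream (the zip path and library name are placeholders; conflict_option 'skip' mirrors the documented default):

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

with open('myLibrary.zip', 'rb') as zip_file:
    response = service.import_datastage_library_zip(
        library_name='myLibrary',
        body=zip_file,
        conflict_option='skip',  # documented default
        project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    )
print(response.get_result())  # library info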

List all Standardization Rules in a project.

List all Standardization Rules in a project.

GET /v3/quality_stage/rules
ServiceCall<QualityFolder> listRules(ListRulesOptions listRulesOptions)
listRules(params)
list_rules(
        self,
        *,
        project_id: Optional[str] = None,
        catalog_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the ListRulesOptions.Builder to create a ListRulesOptions object that contains the parameter values for the listRules method.

Query Parameters

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

The listRules options.

parameters

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:


Response

A folder.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
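For example, a minimal Python sketch with the documented example project ID (same assumed client setup as above):

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.list_rules(
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())  # a folder of rules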

Removes the Standardization Rule from a project.

Removes the Standardization Rule from a project. For a built-in rule, this restores the defaults and does not actually delete the rule from the system.

DELETE /v3/quality_stage/rules/{rule_name}
ServiceCall<Void> deleteRule(DeleteRuleOptions deleteRuleOptions)
deleteRule(params)
delete_rule(
        self,
        location: str,
        rule_name: str,
        *,
        project_id: Optional[str] = None,
        catalog_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the DeleteRuleOptions.Builder to create a DeleteRuleOptions object that contains the parameter values for the deleteRule method.

Path Parameters

  • The rule name or the asset id.

Query Parameters

  • The location of rule set. Required only when rule_name is not an asset id

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

The deleteRule options.

parameters

  • The location of rule set. Required only when rule_name is not an asset id.

  • The rule name or the asset id.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:


Response

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
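A hedged sketch of deleting a rule by name (the location and rule name are hypothetical; location is only needed when rule_name is not an asset ID):

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

service.delete_rule(
    location='Customized Standardization Rules',  # hypothetical rule set location
    rule_name='MYRULE',                           # hypothetical rule name
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)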

Get basic information for the Standardization Rule

Get basic information for the Standardization Rule in a project.

GET /v3/quality_stage/rules/{rule_name}
ServiceCall<QualityFolder> getRule(GetRuleOptions getRuleOptions)
getRule(params)
get_rule(
        self,
        location: str,
        rule_name: str,
        *,
        project_id: Optional[str] = None,
        catalog_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GetRuleOptions.Builder to create a GetRuleOptions object that contains the parameter values for the getRule method.

Path Parameters

  • The rule name or the asset id.

Query Parameters

  • The location of rule set. Required only when rule_name is not an asset id

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

The getRule options.

parameters

  • The location of rule set. Required only when rule_name is not an asset id.

  • The rule name or the asset id.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:


Response

A folder.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
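For example (placeholders as in the delete sketch above):

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.get_rule(
    location='Customized Standardization Rules',  # hypothetical
    rule_name='MYRULE',                           # hypothetical
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())  # a folder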

Create a new Standardization Rule.

Create a new Standardization Rule in a given project.

POST /v3/quality_stage/rules/{rule_name}
ServiceCall<QualityFolder> createRule(CreateRuleOptions createRuleOptions)
createRule(params)
create_rule(
        self,
        location: str,
        rule_name: str,
        *,
        project_id: Optional[str] = None,
        catalog_id: Optional[str] = None,
        space_id: Optional[str] = None,
        description: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CreateRuleOptions.Builder to create a CreateRuleOptions object that contains the parameter values for the createRule method.

Path Parameters

  • The rule name or the asset id.

Query Parameters

  • The location of rule set. Required only when rule_name is not an asset id

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The directory asset id to create the asset in or move to

The createRule options.

parameters

  • The location of rule set. Required only when rule_name is not an asset id.

  • The rule name or the asset id.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • The directory asset id to create the asset in or move to.


Response

A folder.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
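A minimal creation sketch (the location, rule name, and description are hypothetical):

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.create_rule(
    location='Customized Standardization Rules',  # hypothetical
    rule_name='MYRULE',                           # hypothetical
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    description='Hypothetical rule for standardizing street addresses',
)
print(response.get_result())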

Copy the Standardization Rule from from_location to to_location.

Copy the Standardization Rule from from_location to to_location.

POST /v3/quality_stage/rules/{rule_name}/copy
ServiceCall<QualityFolder> cloneRule(CloneRuleOptions cloneRuleOptions)
cloneRule(params)
clone_rule(
        self,
        rule_name: str,
        from_location: str,
        *,
        project_id: Optional[str] = None,
        catalog_id: Optional[str] = None,
        space_id: Optional[str] = None,
        to_location: Optional[str] = None,
        new_name: Optional[str] = None,
        description: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CloneRuleOptions.Builder to create a CloneRuleOptions object that contains the parameter values for the cloneRule method.

Path Parameters

  • The rule name or the asset id.

Query Parameters

  • The from_location.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The to_location. If not specified, use Customized Standardization Rules/ + from_location

  • The new_name: the new name of the Standardization Rule. If not specified, use CopyOf + rule_name + number

  • The directory asset id to create the asset in or move to

The cloneRule options.

parameters

  • The rule name or the asset id.

  • The from_location.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • The to_location. If not specified, use Customized Standardization Rules/ + from_location.

  • The new_name: the new name of the Standardization Rule. If not specified, use CopyOf + rule_name + number.

  • The directory asset id to create the asset in or move to.


Response

A folder.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
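A hedged copy sketch (names and locations are hypothetical; omitting to_location and new_name falls back to the documented defaults):

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.clone_rule(
    rule_name='MYRULE',
    from_location='Customized Standardization Rules',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    new_name='MYRULE_COPY',  # hypothetical; default would be CopyOf + rule_name + number
)
print(response.get_result())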

Test the Standardization Rule against a one-line test string (a single record) before you run it against an entire file.

Test the Standardization Rule against a one-line test string (a single record) before you run it against an entire file.

POST /v3/quality_stage/rules/{rule_name}/test
ServiceCall<RulePropertiesList> testRule(TestRuleOptions testRuleOptions)
testRule(params)
test_rule(
        self,
        location: str,
        rule_name: str,
        *,
        file_content: Optional[str] = None,
        file_name: Optional[str] = None,
        input: Optional[str] = None,
        engine_type: Optional[str] = None,
        project_id: Optional[str] = None,
        catalog_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the TestRuleOptions.Builder to create a TestRuleOptions object that contains the parameter values for the testRule method.

Path Parameters

  • The rule name or the asset id.

Query Parameters

  • The location of rule set. Required only when rule_name is not an asset id

  • The type of engine used for testing ruleset.

    Allowable values: [JNI,JAVA]

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

JSON format for the input string to be tested.

The testRule options.

parameters

  • The location of rule set. Required only when rule_name is not an asset id.

  • The rule name or the asset id.

  • The file content, base64-encoded.

  • The changed file name.

  • Input string for testing.

  • The type of engine used for testing ruleset.

    Allowable values: [JNI,JAVA]

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:


Response

Delimiters of one ruleset.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
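A minimal test sketch (the rule identifiers and the input record are hypothetical):

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.test_rule(
    location='Customized Standardization Rules',  # hypothetical
    rule_name='MYRULE',                           # hypothetical
    input='123 MAIN STREET APT 4',  # one-line test string (a single record)
    engine_type='JAVA',             # or 'JNI'
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())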

Export the Standardization Rule in zip format.

Export the Standardization Rule in zip format.

GET /v3/quality_stage/rules/{rule_name}/zip
ServiceCall<InputStream> exportZip(ExportZipOptions exportZipOptions)
exportZip(params)
export_zip(
        self,
        location: str,
        rule_name: str,
        *,
        project_id: Optional[str] = None,
        catalog_id: Optional[str] = None,
        space_id: Optional[str] = None,
        force: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the ExportZipOptions.Builder to create an ExportZipOptions object that contains the parameter values for the exportZip method.

Path Parameters

  • The rule name or the asset id.

Query Parameters

  • The location of rule set. Required only when rule_name is not an asset id

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • Template rule only. Force the download even without any changes; the default is true

The exportZip options.

parameters

  • The location of rule set. Required only when rule_name is not an asset id.

  • The rule name or the asset id.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • Template rule only. Force the download even without any changes; the default is true.


Response

Response type: InputStream

Response type: NodeJS.ReadableStream

Response type: BinaryIO

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
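For example, exporting the rule and saving the zip (binary handling assumed as in the earlier sketches):

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.export_zip(
    location='Customized Standardization Rules',  # hypothetical
    rule_name='MYRULE',                           # hypothetical
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
with open('MYRULE.zip', 'wb') as f:
    f.write(response.get_result().content)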

Upload/update the Standardization Rule from the given zip file

Upload/update the Standardization Rule from the given zip file.

PUT /v3/quality_stage/rules/{rule_name}/zip
ServiceCall<Map<String, Object>> importZip(ImportZipOptions importZipOptions)
importZip(params)
import_zip(
        self,
        location: str,
        rule_name: str,
        body: BinaryIO,
        *,
        project_id: Optional[str] = None,
        catalog_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the ImportZipOptions.Builder to create an ImportZipOptions object that contains the parameter values for the importZip method.

Path Parameters

  • The rule name or the asset id.

Query Parameters

  • The location of rule set. Required only when rule_name is not an asset id

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The directory asset id to create the asset in or move to

The importZip options.

parameters

  • The location of rule set. Required only when rule_name is not an asset id.

  • The rule name or the asset id.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • The directory asset id to create the asset in or move to.


Response

Response type: Map<String, Object>

Response type: JsonObject

Response type: dict

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
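And the corresponding upload sketch (the zip path and identifiers are hypothetical):

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

with open('MYRULE.zip', 'rb') as body:
    response = service.import_zip(
        location='Customized Standardization Rules',  # hypothetical
        rule_name='MYRULE',                           # hypothetical
        body=body,
        project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    )
print(response.get_result())  # a dict describing the result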

Get the status of a previous git commit request

Gets the status of a git commit request. The status field in the response object indicates if the given commit is completed, in progress, or failed. Detailed status information about the commit progress is also contained in the response object.

GET /v3/migration/git_commit
ServiceCall<GitCommitResponse> getGitCommit(GetGitCommitOptions getGitCommitOptions)
getGitCommit(params)
get_git_commit(
        self,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GetGitCommitOptions.Builder to create a GetGitCommitOptions object that contains the parameter values for the getGitCommit method.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

The getGitCommit options.

parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:


Response

Response object of a git commit request.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The status of the git commit request cannot be found. This can occur because the given project is not valid, or because the git commit was completed long ago and its status information is no longer available.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.
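For example (the project ID is the documented example value; client setup assumed as before):

from datastage.datastage_v3 import DatastageV3

service = DatastageV3.new_instance()

response = service.get_git_commit(
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_result())  # status: completed, in_progress, or failed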

Import DataStage components from Git.

Creates DataStage components from Git. This is an asynchronous call: the API returns almost immediately, which does not necessarily imply the completion of the import request; it only means that the import request has been accepted. The status field of the import request is included in the import response object. A status of "completed" ("in_progress" or "failed", respectively) indicates that the import request has completed (is in progress or has failed, respectively). The job export file for an import request may contain one or more data flows. Unless the on_failure option is set to "stop", a completed import request may contain not only successfully imported data flows but also data flows that could not be imported.

POST /v3/migration/git_pull
ServiceCall<GitPullResponse> gitPull(GitPullOptions gitPullOptions)
gitPull(params)
git_pull(
        self,
        *,
        assets: Optional[List['GitPullTree']] = None,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        git_repo: Optional[str] = None,
        git_branch: Optional[str] = None,
        git_folder: Optional[str] = None,
        git_tag: Optional[str] = None,
        on_failure: Optional[str] = None,
        conflict_resolution: Optional[str] = None,
        enable_notification: Optional[bool] = None,
        import_only: Optional[bool] = None,
        include_dependencies: Optional[bool] = None,
        asset_type: Optional[str] = None,
        skip_dependencies: Optional[str] = None,
        replace_mode: Optional[str] = None,
        x_migration_enc_key: Optional[str] = None,
        import_binaries: Optional[bool] = None,
        project_folder: Optional[str] = None,
        project_folder_recursive: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GitPullOptions.Builder to create a GitPullOptions object that contains the parameter values for the gitPull method.

Custom Headers

  • The encryption key to encrypt credentials on export or to decrypt them on import

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The name of the git repo to use.

    Example: git-dsjob

  • The name of the git branch to use.

  • The name of the git folder where the project contents are committed/fetched.

    Example: my-project-folder

  • The name of the git tag to use.

  • Action when the first import failure occurs. The default action, "continue", continues importing the remaining data flows. The "stop" action stops the import operation upon the first error.

    Allowable values: [continue,stop]

    Example: continue

  • Resolution when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used. If the name is not currently used, the imported flow is created with this name. If the new name is already used, the existing flow is removed before the imported flow is created. With the rename_replace option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used. If the new job name is not currently used, the job is created with this name. If the new job name is already used, the job creation does not happen and an error is raised.

    Allowable values: [skip,rename,replace,rename_replace]

    Example: rename

  • Enable or disable notification. The default value is true.

  • Skip flow compilation.

  • If set to false, dependencies are skipped and only the flow is imported. If not specified or set to true, the flow and its dependencies are imported.

  • Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.

    Example: data_intg_flow,parameter_set

  • Skip dependencies, separated by commas (','). This is only used with the replace option. If skip dependencies are specified, assets of these types are skipped if they already exist.

    Example: connection,parameter_set,subflow

  • This parameter takes effect when conflict_resolution is set to replace or skip. soft: merge the parameter set, adding new parameters only and keeping the old value sets. hard: replace all.

    Allowable values: [soft,hard]

    Example: hard

  • Import binaries

  • The name of the project folder you would like to pull assets from.

  • If true, all subfolders of the project being pulled are included

asset list

The gitPull options.

parameters

  • list of assets.

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • The name of the git repo to use.

    Examples:
  • The name of the git branch to use.

  • The name of the git folder where the project contents are committed/fetched.

    Examples:
  • The name of the git tag to use.

  • Action when the first import failure occurs. The default action, "continue", continues importing the remaining data flows. The "stop" action stops the import operation upon the first error.

    Allowable values: [continue,stop]

    Examples:
  • Resolution when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used. If the name is not currently used, the imported flow is created with this name. If the new name is already used, the existing flow is removed before the imported flow is created. With the rename_replace option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used. If the new job name is not currently used, the job is created with this name. If the new job name is already used, the job creation does not happen and an error is raised.

    Allowable values: [skip,rename,replace,rename_replace]

    Examples:
  • Enable or disable notification. The default value is true.

    Examples:
  • Skip flow compilation.

    Examples:
  • If set to false, dependencies are skipped and only the flow is imported. If not specified or set to true, the flow and its dependencies are imported.

    Examples:
  • Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.

    Examples:
  • Skip dependencies, separated by commas (','). This is only used with the replace option. If skip dependencies are specified, assets of these types are skipped if they already exist.

    Examples:
  • This parameter takes effect when conflict_resolution is set to replace or skip. soft: merge the parameter set, adding new parameters only and keeping the old value sets. hard: replace all.

    Allowable values: [soft,hard]

    Examples:
  • The encryption key to encrypt credentials on export or to decrypt them on import.

  • Import binaries.

    Examples:
  • The name of the project folder you would like to pull assets from.

  • If true, all subfolders of the project being pulled are included.

parameters

  • list of assets.

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • The name of the git repo to use.

    Examples:
  • The name of the git branch to use.

  • The name of the git folder where the project contents are committed/fetched.

    Examples:
  • The name of the git tag to use.

  • Action when the first import failure occurs. The default action is "continue" which will continue importing the remaining data flows. The "stop" action will stop the import operation upon the first error.

    Allowable values: [continue,stop]

    Examples:
  • Resolution when data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default conflict resolution is "skip" will skip the data flow so that it will not be imported. The "rename" resolution will append "_Import_NNNN" suffix to the original name and use the new name for the imported data flow, while the "replace" resolution will first remove the existing data flow with the same name and import the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" will be used. If the name is not currently used, the imported flow will be created with this name. In case the new name is already used, the existing flow will be removed first before the imported flow is created. With the rename_replace option, job creation will be determined as follows. If the job name is already used, a new job name with the suffix ".DataStage job" will be used. If the new job name is not currently used, the job will be created with this name. In case the new job name is already used, the job creation will not happen and an error will be raised.

    Allowable values: [skip,rename,replace,rename_replace]

    Examples:
  • enable/disable notification. Default value is true.

    Examples:
  • Skip flow compilation.

    Examples:
  • If set to false, dependencies are skipped and only the flow is imported. If not specified or set to true, the flow and its dependencies are imported.

    Examples:
  • Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.

    Examples:
  • Dependency types to skip, separated by commas (','). This option is only used with the replace option. If skip dependencies are specified, only assets of these types are skipped, and only when the assets already exist.

    Examples:
  • This parameter takes effect when conflict_resolution is set to replace or skip. "soft" merges the parameter set, adding new parameters only and keeping the old value sets; "hard" replaces all.

    Allowable values: [soft,hard]

    Examples:
  • The encryption key to encrypt credentials on export or to decrypt them on import.

  • Import binaries.

    Examples:
  • The name of the project folder you would like to pull assets from.

  • If true, all subfolders of the project being pulled are included.

Response

Response object of an import request.

Status Code

  • The requested import operation has been accepted. However, the import operation may or may not be completed. The status field in the import response object describes the current status of the import. The response "Location" header provides a convenient URL for retrieving the status with a GET request (see the polling sketch after this list).

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.
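
A minimal Python sketch of the polling pattern described above, in the style of the other examples in this reference. The flat 'status' key on the result and the datastage_service, import_id, and config variables are assumptions carried over from the earlier examples, not guarantees of the documented schema.

import json
import time

# Poll the accepted import until it reaches a terminal state.
# "completed" and "failed" are terminal; any other value
# (for example "in_progress") is treated as still running.
while True:
    pull_status = datastage_service.get_git_pull(
        import_id=import_id,
        project_id=config['PROJECT_ID']
    ).get_result()
    if pull_status.get('status') in ('completed', 'failed'):
        break
    time.sleep(10)  # back off between polls

print(json.dumps(pull_status, indent=2))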

Example responses

Cancel a previous import request

Cancel a previous import request.

DELETE /v3/migration/git_pull/{import_id}
ServiceCall<Void> deleteGitPull(DeleteGitPullOptions deleteGitPullOptions)
deleteGitPull(params)
delete_git_pull(
        self,
        import_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the DeleteGitPullOptions.Builder to create a DeleteGitPullOptions object that contains the parameter values for the deleteGitPull method.

Path Parameters

  • Unique ID of the import request.

    Example: cc6dbbfd-810d-4f0e-b0a9-228c328aff29

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

The deleteGitPull options.

parameters

  • Unique ID of the import request.

    Examples:
  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:

parameters

  • Unique ID of the import request.

    Examples:
  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
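
A minimal Python sketch based on the documented delete_git_pull signature, since no sample is provided; datastage_service, import_id, and config are assumed to be set up as in the earlier examples.

response = datastage_service.delete_git_pull(
    import_id=import_id,
    project_id=config['PROJECT_ID']
)

# An accepted cancellation returns a 2xx status code.
print(response.get_status_code())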

Response

Status Code

  • The import cancellation request was accepted.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • Status of the git pull request cannot be found. This can occur if the given import_id is not valid, or if the git pull completed long ago and its status information is no longer available.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Get the status of a previous git pull request

Gets the status of a git pull request. The status field in the response object indicates if the given import is completed, in progress, or failed. Detailed status information about each imported data flow is also contained in the response object.

GET /v3/migration/git_pull/{import_id}
ServiceCall<GitPullResponse> getGitPull(GetGitPullOptions getGitPullOptions)
getGitPull(params)
get_git_pull(
        self,
        import_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        format: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GetGitPullOptions.Builder to create a GetGitPullOptions object that contains the parameter values for the getGitPull method.

Path Parameters

  • Unique ID of the pull request.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • format of isx import report

    Allowable values: [json,csv]

    Example: json

The getGitPull options.

parameters

  • Unique ID of the pull request.

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • format of isx import report.

    Allowable values: [json,csv]

    Examples:

parameters

  • Unique ID of the pull request.

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • format of isx import report.

    Allowable values: [json,csv]

    Examples:
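
A minimal Python sketch based on the documented get_git_pull signature; variable setup is assumed from the earlier examples, and format='json' requests the JSON report described above.

import json

pull_response = datastage_service.get_git_pull(
    import_id=import_id,
    project_id=config['PROJECT_ID'],
    format='json'
).get_result()

print(json.dumps(pull_response, indent=2))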

Response

Response object of an import request.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • Status of the git pull request cannot be found. This can occur if the given import_id is not valid, or if the git pull completed long ago and its status information is no longer available.

  • An error occurred. See response for more information.

Example responses

Get the status of a git repo

Gets the status of objects changed in the git repo with respect to a project, including detailed status information about changes committed in the project as well as from git.

GET /v3/migration/git_status
ServiceCall<GitStatusResponse> gitStatus(GitStatusOptions gitStatusOptions)
gitStatus(params)
git_status(
        self,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        git_repo: Optional[str] = None,
        git_branch: Optional[str] = None,
        git_tag: Optional[str] = None,
        git_folder: Optional[str] = None,
        format: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GitStatusOptions.Builder to create a GitStatusOptions object that contains the parameter values for the gitStatus method.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The name of the git repo to use.

    Example: git-dsjob

  • The name of the git branch to use.

  • The name of the git tag to use.

  • The name of the git folder where the project contents are committed/fetched.

    Example: my-project-folder

  • format of isx import report

    Allowable values: [json,csv]

    Example: json

The gitStatus options.

parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • The name of the git repo to use.

    Examples:
  • The name of the git branch to use.

  • The name of the git tag to use.

  • The name of the git folder where the project contents are committed/fetched.

    Examples:
  • format of isx import report.

    Allowable values: [json,csv]

    Examples:

parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • The name of the git repo to use.

    Examples:
  • The name of the git branch to use.

  • The name of the git tag to use.

  • The name of the git folder where the project contents are committed/fetched.

    Examples:
  • format of isx import report.

    Allowable values: [json,csv]

    Examples:
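
A minimal Python sketch based on the documented git_status signature. The repo and folder names reuse the example values shown above; the branch name 'main' is a hypothetical placeholder.

import json

git_status_response = datastage_service.git_status(
    project_id=config['PROJECT_ID'],
    git_repo='git-dsjob',            # example repo name from above
    git_branch='main',               # hypothetical branch name
    git_folder='my-project-folder',  # example folder name from above
    format='json'
).get_result()

print(json.dumps(git_status_response, indent=2))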

Response

Changes between Project and Git.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • Status of the differences between a project and git repo.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Create V3 data flows from the attached job export file

Creates data flows from the attached job export file. This is an asynchronous call. The API call returns almost immediately, which does not necessarily imply the completion of the import request; it only means that the import request has been accepted. The status field of the import request is included in the import response object. The status "completed" ("in_progress" or "failed", respectively) indicates that the import request has completed (is in progress or has failed, respectively). The job export file for an import request may contain one or more data flows. Unless the on_failure option is set to "stop", a completed import request may contain not only successfully imported data flows but also data flows that could not be imported.

POST /v3/migration/isx_imports
ServiceCall<ImportResponse> createMigration(CreateMigrationOptions createMigrationOptions)
createMigration(params)
create_migration(
        self,
        body: BinaryIO,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        on_failure: Optional[str] = None,
        conflict_resolution: Optional[str] = None,
        attachment_type: Optional[str] = None,
        file_name: Optional[str] = None,
        enable_notification: Optional[bool] = None,
        import_only: Optional[bool] = None,
        create_missing_parameters: Optional[bool] = None,
        enable_rulestage_integration: Optional[bool] = None,
        enable_local_connection: Optional[bool] = None,
        asset_type: Optional[str] = None,
        create_connection_parametersets: Optional[bool] = None,
        storage_path: Optional[str] = None,
        replace_mode: Optional[str] = None,
        migrate_to_platform_connection: Optional[bool] = None,
        use_dsn_name: Optional[bool] = None,
        migrate_to_send_email: Optional[bool] = None,
        enable_folder: Optional[bool] = None,
        migrate_hive_impala: Optional[bool] = None,
        from_: Optional[str] = None,
        to: Optional[str] = None,
        job_name_with_invocation_id: Optional[bool] = None,
        annotation_styling: Optional[str] = None,
        migrate_to_datastage_division: Optional[bool] = None,
        run_job_by_name: Optional[bool] = None,
        enable_optimized_pipeline: Optional[bool] = None,
        use_jinja_template: Optional[bool] = None,
        enable_flow_autosave: Optional[bool] = None,
        migrate_jdbc_impala: Optional[bool] = None,
        enable_inline_pipeline: Optional[bool] = None,
        optimized_job_name_suffix: Optional[str] = None,
        migrate_bash_param: Optional[bool] = None,
        is_define_parameter_toggled: Optional[bool] = None,
        migrate_stp_plugin: Optional[bool] = None,
        migrate_userstatus: Optional[bool] = None,
        skip_connections: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CreateMigrationOptions.Builder to create a CreateMigrationOptions object that contains the parameter values for the createMigration method.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • Action when the first import failure occurs. The default action is "continue" which will continue importing the remaining data flows. The "stop" action will stop the import operation upon the first error.

    Allowable values: [continue,stop]

    Example: continue

  • Resolution to apply when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used. If the name is not currently used, the imported flow is created with this name. If the new name is also already used, the existing flow is removed before the imported flow is created. With the "rename_replace" option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used. If the new job name is not currently used, the job is created with this name. If the new job name is already used, the job is not created and an error is raised.

    Allowable values: [skip,rename,replace,rename_replace]

    Example: rename

  • Type of attachment. The default attachment type is "isx".

    Allowable values: [isx]

    Example: isx

  • Name of the input file, if it exists.

    Example: myFlows.isx

  • enable/disable notification. Default value is true.

  • Skip flow compilation.

  • Create missing parameter sets and job parameters.

  • enable/disable wkc rule stage migration. Default value is false.

  • enable local connection migration. Default value is false.

  • Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.

    Example: data_intg_flow,parameter_set

  • Create generic parameter sets for default connection values migration. Default value is true.

  • Folder path of the storage volume for routine scripts and other data assets.

    Example: /mnts/my-script-storage

  • This parameter takes effect when conflict_resolution is set to replace or skip. "soft" merges the parameter set, adding new parameters only and keeping the old value sets; "hard" replaces all.

    Allowable values: [soft,hard]

    Example: hard

  • Will migrate all isx connections to available platform connections instead of their datastage optimized versions.

  • Enables private cloud migration of ODBC connector 'datasource' to 'dsn_name' instead of generating parameter references. Default is false.

  • Migrates all notification activity stages in sequence jobs to send email task nodes.

  • Enable folder support.

  • Will migrate hive isx connections to impala.

  • Migrate from which stage.

    Example: Db2ConnectorPX

  • Migrate to which stage.

    Example: Db2zos

  • Whether to migrate $JobName to ${jobName}.${DSInvocationJobId} if an invocation ID is present. Default value is false.

  • format of annotation styling

    Allowable values: [markdown,html]

    Example: markdown

  • migrate the division operator in any expression to datastage division ds.DIV.

    Example: true

  • run the datastage/pipeline job by job name, default is by id.

    Example: true

  • Enable optimized pipeline. Default value is false.

  • Use Jinja templates in bash scripts; all environment variables will be referenced as "{{env_var}}" instead of "${env_var}" in the bash script. Default value is false.

  • Enable flow autosave or not. Default value is false.

  • Will migrate JDBC Impala isx connections to impala.

  • enable inline mode for the pipeline compilation.

  • Optimized sequence job name suffix, project default setting will take effect if not specified.

  • Whether to migrate bash script parameters for pipelines.

  • Prompt for runtime parameter settings before running a flow.

  • Will migrate STPPX Teradata to Teradata Plugin

  • Will migrate setuserstatus calls to DSSetUserStatus bash function

  • Applies only when conflict_resolution is set to replace. When this flag is set, new connections are NOT created for the existing flows that are being remigrated; the connection links already present in the existing flows are kept.

The ISX file to import. The maximum file size is 1GB.

The createMigration options.

parameters

  • The ISX file to import. The maximum file size is 1GB.

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • Action when the first import failure occurs. The default action is "continue" which will continue importing the remaining data flows. The "stop" action will stop the import operation upon the first error.

    Allowable values: [continue,stop]

    Examples:
  • Resolution to apply when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used. If the name is not currently used, the imported flow is created with this name. If the new name is also already used, the existing flow is removed before the imported flow is created. With the "rename_replace" option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used. If the new job name is not currently used, the job is created with this name. If the new job name is already used, the job is not created and an error is raised.

    Allowable values: [skip,rename,replace,rename_replace]

    Examples:
  • Type of attachment. The default attachment type is "isx".

    Allowable values: [isx]

    Examples:
  • Name of the input file, if it exists.

    Examples:
  • enable/disable notification. Default value is true.

    Examples:
  • Skip flow compilation.

    Examples:
  • Create missing parameter sets and job parameters.

    Examples:
  • enable/disable wkc rule stage migration. Default value is false.

    Examples:
  • enable local connection migration. Default value is false.

    Examples:
  • Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.

    Examples:
  • Create generic parameter sets for default connection values migration. Default value is true.

    Examples:
  • Folder path of the storage volume for routine scripts and other data assets.

    Examples:
  • This parameter takes effect when conflict_resolution is set to replace or skip. "soft" merges the parameter set, adding new parameters only and keeping the old value sets; "hard" replaces all.

    Allowable values: [soft,hard]

    Examples:
  • Will migrate all isx connections to available platform connections instead of their datastage optimized versions.

    Examples:
  • Enables private cloud migration of ODBC connector 'datasource' to 'dsn_name' instead of generating parameter references. Default is false.

    Examples:
  • Migrates all notification activity stages in sequence jobs to send email task nodes.

    Examples:
  • Enable folder support.

    Examples:
  • Will migrate hive isx connections to impala.

    Examples:
  • Migrate from which stage.

    Examples:
  • Migrate to which stage.

    Examples:
  • Whether to migrate $JobName to ${jobName}.${DSInvocationJobId} if an invocation ID is present. Default value is false.

    Examples:
  • format of annotation styling.

    Allowable values: [markdown,html]

    Examples:
  • migrate the division operator in any expression to datastage division ds.DIV.

    Examples:
  • run the datastage/pipeline job by job name, default is by id.

    Examples:
  • Enable optimized pipeline. Default value is false.

    Examples:
  • Use Jinja templates in bash scripts; all environment variables will be referenced as "{{env_var}}" instead of "${env_var}" in the bash script. Default value is false.

    Examples:
  • Enable flow autosave or not. Default value is false.

    Examples:
  • Will migrate JDBC Impala isx connections to impala.

    Examples:
  • enable inline mode for the pipeline compilation.

    Examples:
  • Optimized sequence job name suffix, project default setting will take effect if not specified.

  • Whether to migrate bash script parameters for pipelines.

    Examples:
  • Prompt for runtime parameter settings before running a flow.

    Examples:
  • Will migrate STPPX Teradata to Teradata Plugin.

    Examples:
  • Will migrate setuserstatus calls to DSSetUserStatus bash function.

    Examples:
  • Applies only when conflict_resolution is set to replace. When this flag is set, new connections are NOT created for the existing flows that are being remigrated; the connection links already present in the existing flows are kept.

    Examples:

parameters

  • The ISX file to import. The maximum file size is 1GB.

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • Action when the first import failure occurs. The default action is "continue" which will continue importing the remaining data flows. The "stop" action will stop the import operation upon the first error.

    Allowable values: [continue,stop]

    Examples:
  • Resolution to apply when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used. If the name is not currently used, the imported flow is created with this name. If the new name is also already used, the existing flow is removed before the imported flow is created. With the "rename_replace" option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used. If the new job name is not currently used, the job is created with this name. If the new job name is already used, the job is not created and an error is raised.

    Allowable values: [skip,rename,replace,rename_replace]

    Examples:
  • Type of attachment. The default attachment type is "isx".

    Allowable values: [isx]

    Examples:
  • Name of the input file, if it exists.

    Examples:
  • enable/disable notification. Default value is true.

    Examples:
  • Skip flow compilation.

    Examples:
  • Create missing parameter sets and job parameters.

    Examples:
  • enable/disable wkc rule stage migration. Default value is false.

    Examples:
  • enable local connection migration. Default value is false.

    Examples:
  • Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.

    Examples:
  • Create generic parameter sets for default connection values migration. Default value is true.

    Examples:
  • Folder path of the storage volume for routine scripts and other data assets.

    Examples:
  • This parameter takes effect when conflict_resolution is set to replace or skip. "soft" merges the parameter set, adding new parameters only and keeping the old value sets; "hard" replaces all.

    Allowable values: [soft,hard]

    Examples:
  • Will migrate all isx connections to available platform connections instead of their datastage optimized versions.

    Examples:
  • Enables private cloud migration of ODBC connector 'datasource' to 'dsn_name' instead of generating parameter references. Default is false.

    Examples:
  • Migrates all notification activity stages in sequence jobs to send email task nodes.

    Examples:
  • Enable folder support.

    Examples:
  • Will migrate hive isx connections to impala.

    Examples:
  • Migrate from which stage.

    Examples:
  • Migrate to which stage.

    Examples:
  • Whether to migrate $JobName to ${jobName}.${DSInvocationJobId} if an invocation ID is present. Default value is false.

    Examples:
  • format of annotation styling.

    Allowable values: [markdown,html]

    Examples:
  • migrate the division operator in any expression to datastage division ds.DIV.

    Examples:
  • run the datastage/pipeline job by job name, default is by id.

    Examples:
  • Enable optimized pipeline. Default value is false.

    Examples:
  • Use Jinja templates in bash scripts; all environment variables will be referenced as "{{env_var}}" instead of "${env_var}" in the bash script. Default value is false.

    Examples:
  • Enable flow autosave or not. Default value is false.

    Examples:
  • Will migrate JDBC Impala isx connections to impala.

    Examples:
  • enable inline mode for the pipeline compilation.

    Examples:
  • Optimized sequence job name suffix, project default setting will take effect if not specified.

  • Whether to migrate bash script parameters for pipelines.

    Examples:
  • Prompt for runtime parameter settings before running a flow.

    Examples:
  • Will migrate STPPX Teradata to Teradata Plugin.

    Examples:
  • Will migrate setuserstatus calls to DSSetUserStatus bash function.

    Examples:
  • Applies only when conflict_resolution is set to replace. When this flag is set, new connections are NOT created for the existing flows that are being remigrated; the connection links already present in the existing flows are kept.

    Examples:
  • curl -X POST --location --header "Authorization: Bearer {iam_token}"   --header "Accept: application/json;charset=utf-8"   --header "Content-Type: application/octet-stream"   --data-binary @myFlows.isx   "{base_url}/v3/migration/isx_imports?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23&on_failure=continue&conflict_resolution=rename&attachment_type=isx&file_name=myFlows.isx"
  • CreateMigrationOptions createMigrationOptions = new CreateMigrationOptions.Builder()
      .body(rowGenIsx)
      .projectId(projectID)
      .onFailure("continue")
      .conflictResolution("rename")
      .attachmentType("isx")
      .fileName("rowgen_peek.isx")
      .build();
    
    Response<ImportResponse> response = datastageService.createMigration(createMigrationOptions).execute();
    ImportResponse importResponse = response.getResult();
    
    System.out.println(importResponse);
  • const params = {
      body: Buffer.from(fs.readFileSync('testInput/rowgen_peek.isx')),
      projectId: projectID,
      onFailure: 'continue',
      conflictResolution: 'rename',
      attachmentType: 'isx',
      fileName: 'rowgen_peek.isx',
    };
    const res = await datastageService.createMigration(params);
  • import_response = datastage_service.create_migration(
      body=open(Path(__file__).parent / 'inputFiles/rowgen_peek.isx', "rb").read(),
      project_id=config['PROJECT_ID'],
      on_failure='continue',
      conflict_resolution='rename',
      attachment_type='isx',
      file_name='rowgen_peek.isx'
    ).get_result()
    
    print(json.dumps(import_response, indent=2))

Response

Response object of an import request.

Status Code

  • The requested import operation has been accepted. However, the import operation may or may not be completed. The status field in the import response object describes the current status of the import. The response "Location" header provides a convenient URL for retrieving the status with a GET request.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

Example responses

Cancel a previous import request

Cancel a previous import request. Use GET /v3/migration/isx_imports/{import_id} to obtain the current status of the import, including whether it has been cancelled.

DELETE /v3/migration/isx_imports/{import_id}
ServiceCall<Void> deleteMigration(DeleteMigrationOptions deleteMigrationOptions)
deleteMigration(params)
delete_migration(
        self,
        import_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the DeleteMigrationOptions.Builder to create a DeleteMigrationOptions object that contains the parameter values for the deleteMigration method.

Path Parameters

  • Unique ID of the import request.

    Example: cc6dbbfd-810d-4f0e-b0a9-228c328aff29

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

The deleteMigration options.

parameters

  • Unique ID of the import request.

    Examples:
  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:

parameters

  • Unique ID of the import request.

    Examples:
  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • curl -X DELETE --location --header "Authorization: Bearer {iam_token}"   "{base_url}/v3/migration/isx_imports/{import_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
  • DeleteMigrationOptions deleteMigrationOptions = new DeleteMigrationOptions.Builder()
      .importId(importID)
      .projectId(projectID)
      .build();
    
    datastageService.deleteMigration(deleteMigrationOptions).execute();
  • const params = {
      importId: importID,
      projectId: projectID,
    };
    const res = await datastageService.deleteMigration(params);
  • response = datastage_service.delete_migration(
      import_id=importId,
      project_id=config['PROJECT_ID']
    )

Response

Status Code

  • The import cancellation request was accepted.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • Status of the import request cannot be found. This can occur if the given import_id is not valid, or if the import completed long ago and its status information is no longer available.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Get the status of a previous import request

Gets the status of an import request. The status field in the response object indicates if the given import is completed, in progress, or failed. Detailed status information about each imported data flow is also contained in the response object.

GET /v3/migration/isx_imports/{import_id}
ServiceCall<ImportResponse> getMigration(GetMigrationOptions getMigrationOptions)
getMigration(params)
get_migration(
        self,
        import_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        format: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GetMigrationOptions.Builder to create a GetMigrationOptions object that contains the parameter values for the getMigration method.

Path Parameters

  • Unique ID of the import request.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • format of isx import report

    Allowable values: [json,csv]

    Example: json

The getMigration options.

parameters

  • Unique ID of the import request.

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • format of isx import report.

    Allowable values: [json,csv]

    Examples:

parameters

  • Unique ID of the import request.

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • format of isx import report.

    Allowable values: [json,csv]

    Examples:
  • curl -X GET --location --header "Authorization: Bearer {iam_token}"   --header "Accept: application/json;charset=utf-8"   "{base_url}/v3/migration/isx_imports/{import_id}?project_id=bd0dbbfd-810d-4f0e-b0a9-228c328a8e23"
  • GetMigrationOptions getMigrationOptions = new GetMigrationOptions.Builder()
      .importId(importID)
      .projectId(projectID)
      .build();
    
    Response<ImportResponse> response = datastageService.getMigration(getMigrationOptions).execute();
    ImportResponse importResponse = response.getResult();
    
    System.out.println(importResponse);
  • const params = {
      importId: importID,
      projectId: projectID,
    };
    const res = await datastageService.getMigration(params);
  • import_response = datastage_service.get_migration(
      import_id=importId,
      project_id=config['PROJECT_ID']
    ).get_result()
    
    print(json.dumps(import_response, indent=2))

Response

Response object of an import request.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • Status of the import request cannot be found. This can occur if the given import_id is not valid, or if the import completed long ago and its status information is no longer available.

  • An error occurred. See response for more information.

Example responses

Get project information and cache it to reduce rate-limit issues

Gets the project ID from its name, leveraging ds-common-services to cache project information and reduce rate-limit issues.

GET /v3/migration/project_info
ServiceCall<List<ProjectInfoResponseItem>> projectInfo(ProjectInfoOptions projectInfoOptions)
projectInfo(params)
project_info(
        self,
        project_name: str,
        *,
        is_space: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the ProjectInfoOptions.Builder to create a ProjectInfoOptions object that contains the parameter values for the projectInfo method.

Query Parameters

  • Name of the project/space.

  • Is Space.

The projectInfo options.

parameters

  • Name of the project/space.

  • Is Space.

parameters

  • Name of the project/space.

  • Is Space.
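
A minimal Python sketch based on the documented project_info signature, since no sample is provided; the project name is a hypothetical placeholder.

import json

# Look up the ID of a project by name; is_space=True would look up a space instead.
project_info_response = datastage_service.project_info(
    project_name='my-project',  # hypothetical project name
    is_space=False
).get_result()

print(json.dumps(project_info_response, indent=2))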

Response

Response type: List<ProjectInfoResponseItem>

Response type: ProjectInfoResponseItem[]

Response type: List[ProjectInfoResponseItem]

Status Code

  • Success.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

No Sample Response

This method does not specify any sample responses.

Export flows with dependencies as a zip file

Exports flows and their dependencies as a zip file.

POST /v3/migration/zip_exports
ServiceCall<InputStream> exportFlowsWithDependencies(ExportFlowsWithDependenciesOptions exportFlowsWithDependenciesOptions)
exportFlowsWithDependencies(params)
export_flows_with_dependencies(
        self,
        flows: List['FlowDependencyTree'],
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        remove_secrets: Optional[bool] = None,
        include_dependencies: Optional[bool] = None,
        id: Optional[List[str]] = None,
        type: Optional[str] = None,
        include_data_assets: Optional[bool] = None,
        exclude_data_files: Optional[bool] = None,
        asset_type_filter: Optional[List[str]] = None,
        x_migration_enc_key: Optional[str] = None,
        export_binaries: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the ExportFlowsWithDependenciesOptions.Builder to create a ExportFlowsWithDependenciesOptions object that contains the parameter values for the exportFlowsWithDependencies method.

Custom Headers

  • The encryption key to encrypt credentials on export or to decrypt them on import

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • remove secrets from exported flows, default value is false.

  • include dependencies. If no dependencies are specified in the payload, all dependencies will be included.

  • The list of flow IDs to export.

  • Type of flow. The default flow type is "data_intg_flow". It is only used with the 'id' parameter.

  • include data_assets. If set as true, all referenced data_assets will be included.

  • skip the actual data files for datastage dataset and fileset when exporting flows as zip.

  • Filter assets to be exported by asset type

  • Export binaries

flows and dependencies metadata

The exportFlowsWithDependencies options.

parameters

  • list of flows and their dependencies.

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • remove secrets from exported flows, default value is false.

    Examples:
  • include dependencies. If no dependencies are specified in the payload, all dependencies will be included.

    Examples:
  • The list of flow IDs to export.

  • Type of flow. The default flow type is "data_intg_flow". It is only used with the 'id' parameter.

  • include data_assets. If set as true, all referenced data_assets will be included.

    Examples:
  • skip the actual data files for datastage dataset and fileset when exporting flows as zip.

    Examples:
  • Filter assets to be exported by asset type.

  • The encryption key to encrypt credentials on export or to decrypt them on import.

  • Export binaries.

    Examples:

parameters

  • list of flows and their dependencies.

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • remove secrets from exported flows, default value is false.

    Examples:
  • include dependencies. If no dependencies are specified in the payload, all dependencies will be included.

    Examples:
  • The list of flow IDs to export.

  • Type of flow. The default flow type is "data_intg_flow". It is only used with the 'id' parameter.

  • include data_assets. If set as true, all referenced data_assets will be included.

    Examples:
  • skip the actual data files for datastage dataset and fileset when exporting flows as zip.

    Examples:
  • Filter assets to be exported by asset type.

  • The encryption key to encrypt credentials on export or to decrypt them on import.

  • Export binaries.

    Examples:
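
A minimal Python sketch based on the documented export_flows_with_dependencies signature, since no sample is provided. Because the FlowDependencyTree schema is not shown in this reference, the sketch selects flows through the 'id' parameter and passes an empty flows list; the flow ID is a hypothetical placeholder, and writing the archive assumes the binary result exposes its bytes via .content.

response = datastage_service.export_flows_with_dependencies(
    flows=[],  # FlowDependencyTree entries; schema not shown in this reference
    project_id=config['PROJECT_ID'],
    id=['cc6dbbfd-810d-4f0e-b0a9-228c328aff29'],  # hypothetical flow ID
    include_dependencies=True
)

# The response body is the exported zip archive itself.
with open('exported_flows.zip', 'wb') as f:
    f.write(response.get_result().content)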

Response

Response type: InputStream

Response type: NodeJS.ReadableStream

Response type: BinaryIO

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Create data flows from zip file

Creates data flows from the attached job export file. This is an asynchronous call. The API call returns almost immediately, which does not necessarily imply the completion of the import request; it only means that the import request has been accepted. The status field of the import request is included in the import response object. The status "completed" ("in_progress" or "failed", respectively) indicates that the import request has completed (is in progress or has failed, respectively). The job export file for an import request may contain one or more data flows. Unless the on_failure option is set to "stop", a completed import request may contain not only successfully imported data flows but also data flows that could not be imported.

POST /v3/migration/zip_imports
ServiceCall<ImportResponse> createFromZipMigration(CreateFromZipMigrationOptions createFromZipMigrationOptions)
createFromZipMigration(params)
create_from_zip_migration(
        self,
        body: BinaryIO,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        on_failure: Optional[str] = None,
        conflict_resolution: Optional[str] = None,
        file_name: Optional[str] = None,
        enable_notification: Optional[bool] = None,
        import_only: Optional[bool] = None,
        include_dependencies: Optional[bool] = None,
        asset_type: Optional[str] = None,
        skip_dependencies: Optional[str] = None,
        replace_mode: Optional[str] = None,
        x_migration_enc_key: Optional[str] = None,
        import_binaries: Optional[bool] = None,
        run_job_by_name: Optional[bool] = None,
        enable_optimized_pipeline: Optional[bool] = None,
        enable_inline_pipeline: Optional[bool] = None,
        optimized_job_name_suffix: Optional[str] = None,
        is_define_parameter_toggled: Optional[bool] = None,
        override_dataset: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CreateFromZipMigrationOptions.Builder to create a CreateFromZipMigrationOptions object that contains the parameter values for the createFromZipMigration method.

Custom Headers

  • The encryption key to encrypt credentials on export or to decrypt them on import

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • Action when the first import failure occurs. The default action is "continue" which will continue importing the remaining data flows. The "stop" action will stop the import operation upon the first error.

    Allowable values: [continue,stop]

    Example: continue

  • Resolution to apply when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow, while the "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. For the "rename_replace" option, when the flow name is already used, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used. If the name is not currently used, the imported flow is created with this name. If the new name is also already used, the existing flow is removed before the imported flow is created. With the "rename_replace" option, job creation is determined as follows: if the job name is already used, a new job name with the suffix ".DataStage job" is used. If the new job name is not currently used, the job is created with this name. If the new job name is already used, the job is not created and an error is raised.

    Allowable values: [skip,rename,replace,rename_replace]

    Example: rename

  • Name of the input file, if it exists.

    Example: myFlows.isx

  • enable/disable notification. Default value is true.

  • Skip flow compilation.

  • If set to false, dependencies are skipped and only the flow is imported. If not specified or set to true, the flow and its dependencies are imported.

  • Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.

    Example: data_intg_flow,parameter_set

  • Dependency types to skip, separated by commas (','). This option is only used with the replace option. If skip dependencies are specified, only assets of these types are skipped, and only when the assets already exist.

    Example: connection,parameter_set,subflow

  • This parameter takes effect when conflict_resolution is set to replace or skip. "soft" merges the parameter set, adding new parameters only and keeping the old value sets; "hard" replaces all.

    Allowable values: [soft,hard]

    Example: hard

  • Import binaries

  • run the datastage/pipeline job by job name, default is by id.

    Example: true

  • Enable optimized pipeline. Default value is false.

  • enable inline mode for the pipeline compilation.

  • Optimized sequence job name suffix, project default setting will take effect if not specified.

  • Prompt for runtime parameter settings before running a flow.

  • This parameter takes effect when conflict_resolution is set to replace for data_set and file_set. By default it is false: importing stops if the header file is a common path that already exists in the cluster; otherwise the file is forcibly overridden.

The zip file to import. The maximum file size is 1GB.

The createFromZipMigration options.

parameters

  • The zip file to import. The maximum file size is 1GB.

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • Action to take when the first import failure occurs. The default action, "continue", continues importing the remaining data flows. The "stop" action stops the import operation upon the first error.

    Allowable values: [continue,stop]

    Examples:
  • Resolution to use when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow. The "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. With the "rename_replace" option, if the flow name is already in use, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used; if that name is not already in use, the imported flow is created with it; if the new name is also in use, the existing flow is removed before the imported flow is created. With "rename_replace", job creation is determined as follows: if the job name is already in use, a new job name with the suffix ".DataStage job" is used; if that name is not in use, the job is created with it; if the new job name is also in use, the job is not created and an error is raised.

    Allowable values: [skip,rename,replace,rename_replace]

    Examples:
  • Name of the input file, if it exists.

    Examples:
  • Enable or disable notifications. The default value is true.

    Examples:
  • Skip flow compilation.

    Examples:
  • If set to false, dependencies are skipped and only the flow is imported. If not specified or set to true, the flow and its dependencies are imported.

    Examples:
  • Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.

    Examples:
  • Dependency types to skip, separated by commas (','). This parameter is only used with the replace option. If specified, assets of the listed types are skipped when they already exist.

    Examples:
  • This parameter takes effect when conflict_resolution is set to replace or skip. With "soft", the parameter set is merged: only new parameters are added and the old value sets are kept. With "hard", the parameter set is replaced entirely.

    Allowable values: [soft,hard]

    Examples:
  • The encryption key to encrypt credentials on export or to decrypt them on import.

  • Import binaries.

    Examples:
  • Run the DataStage or pipeline job by job name. By default, the job is run by ID.

    Examples:
  • Enable optimized pipeline. Default value is false.

    Examples:
  • Enable inline mode for pipeline compilation.

    Examples:
  • Optimized sequence job name suffix. The project default setting takes effect if not specified.

  • Prompt for runtime parameter settings before running a flow.

    Examples:
  • This parameter takes effect when conflict_resolution is set to replace for data_set and file_set assets. By default it is false: importing is stopped if the header file is a common path and already exists in the cluster. When set to true, the existing file is forcibly overridden.

parameters

  • The zip file to import. The maximum file size is 1GB.

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • Action to take when the first import failure occurs. The default action, "continue", continues importing the remaining data flows. The "stop" action stops the import operation upon the first error.

    Allowable values: [continue,stop]

    Examples:
  • Resolution to use when a data flow to be imported has a name conflict with an existing data flow in the project or catalog. The default resolution, "skip", skips the data flow so that it is not imported. The "rename" resolution appends an "_Import_NNNN" suffix to the original name and uses the new name for the imported data flow. The "replace" resolution first removes the existing data flow with the same name and then imports the new data flow. With the "rename_replace" option, if the flow name is already in use, a new flow name with the suffix "_DATASTAGE_ISX_IMPORT" is used; if that name is not already in use, the imported flow is created with it; if the new name is also in use, the existing flow is removed before the imported flow is created. With "rename_replace", job creation is determined as follows: if the job name is already in use, a new job name with the suffix ".DataStage job" is used; if that name is not in use, the job is created with it; if the new job name is also in use, the job is not created and an error is raised.

    Allowable values: [skip,rename,replace,rename_replace]

    Examples:
  • Name of the input file, if it exists.

    Examples:
  • Enable or disable notifications. The default value is true.

    Examples:
  • Skip flow compilation.

    Examples:
  • If set to false, dependencies are skipped and only the flow is imported. If not specified or set to true, the flow and its dependencies are imported.

    Examples:
  • Asset types separated by commas (','). Only assets of these types will be imported. If not specified, all supported assets will be imported.

    Examples:
  • Dependency types to skip, separated by commas (','). This parameter is only used with the replace option. If specified, assets of the listed types are skipped when they already exist.

    Examples:
  • This parameter takes effect when conflict_resolution is set to replace or skip. With "soft", the parameter set is merged: only new parameters are added and the old value sets are kept. With "hard", the parameter set is replaced entirely.

    Allowable values: [soft,hard]

    Examples:
  • The encryption key to encrypt credentials on export or to decrypt them on import.

  • Import binaries.

    Examples:
  • Run the DataStage or pipeline job by job name. By default, the job is run by ID.

    Examples:
  • Enable optimized pipeline. Default value is false.

    Examples:
  • Enable inline mode for pipeline compilation.

    Examples:
  • Optimized sequence job name suffix. The project default setting takes effect if not specified.

  • Prompt for runtime parameter settings before running a flow.

    Examples:
  • This parameter takes effect when conflict_resolution is set to replace for data_set and file_set assets. By default it is false: importing is stopped if the header file is a common path and already exists in the cluster. When set to true, the existing file is forcibly overridden.
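A minimal Python sketch of starting an import. The method name create_from_zip_migration and the parameter names body, on_failure, conflict_resolution, and attachment_file_name are assumptions inferred from the operation and query parameters above; check your SDK version for the exact signature.

# Assumed module path for the Python client.
from datastage.datastage_v3 import DatastageV3

# Reads credentials from external configuration (e.g. a credentials.env file).
service = DatastageV3.new_instance()

with open('myFlows.zip', 'rb') as zip_file:
    response = service.create_from_zip_migration(  # assumed method name
        body=zip_file,
        project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
        on_failure='continue',          # continue | stop
        conflict_resolution='rename',   # skip | rename | replace | rename_replace
        attachment_file_name='myFlows.isx',
    )

# 202 Accepted: the import runs asynchronously. The "Location" header points
# at the status URL; you can also poll with the get status method below.
print(response.get_status_code())
print(response.get_headers().get('Location'))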

Response

Response object of an import request.

Status Code

  • The requested import operation has been accepted. However, the import operation may or may not be completed. The status field in the import response object describes the current status of the import. The response "Location" header provides a convenient URL for retrieving the status with a GET request.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

Example responses

Cancel a previous import request

Cancel a previous import request.

DELETE /v3/migration/zip_imports/{import_id}
ServiceCall<Void> deleteZipMigration(DeleteZipMigrationOptions deleteZipMigrationOptions)
deleteZipMigration(params)
delete_zip_migration(
        self,
        import_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the DeleteZipMigrationOptions.Builder to create a DeleteZipMigrationOptions object that contains the parameter values for the deleteZipMigration method.

Path Parameters

  • Unique ID of the import request.

    Example: cc6dbbfd-810d-4f0e-b0a9-228c328aff29

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

The deleteZipMigration options.

parameters

  • Unique ID of the import request.

    Examples:
  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:

parameters

  • Unique ID of the import request.

    Examples:
  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
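A short Python sketch of cancelling an import, using the delete_zip_migration signature shown above; the IDs are the example values from this section, and the module path is an assumption.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()

response = service.delete_zip_migration(
    import_id='cc6dbbfd-810d-4f0e-b0a9-228c328aff29',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
)
print(response.get_status_code())  # accepted when the cancellation request succeeds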

Response

Status Code

  • The import cancellation request was accepted.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The status of the import request cannot be found. This can occur if the given import_id is not valid, or if the import completed long ago and its status information is no longer available.

  • An error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Get the status of a previous import request

Gets the status of an import request. The status field in the response object indicates if the given import is completed, in progress, or failed. Detailed status information about each imported data flow is also contained in the response object.

GET /v3/migration/zip_imports/{import_id}
ServiceCall<ImportResponse> getZipMigration(GetZipMigrationOptions getZipMigrationOptions)
getZipMigration(params)
get_zip_migration(
        self,
        import_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        format: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GetZipMigrationOptions.Builder to create a GetZipMigrationOptions object that contains the parameter values for the getZipMigration method.

Path Parameters

  • Unique ID of the import request.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • Format of the ISX import report.

    Allowable values: [json,csv]

    Example: json

The getZipMigration options.

parameters

  • Unique ID of the import request.

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • Format of the ISX import report.

    Allowable values: [json,csv]

    Examples:

parameters

  • Unique ID of the import request.

  • The ID of the catalog to use. catalog_id or project_id is required.

  • The ID of the project to use. project_id or catalog_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • Format of the ISX import report.

    Allowable values: [json,csv]

    Examples:
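Because imports complete asynchronously, a common pattern is to poll this endpoint until the status field reports a terminal state. A sketch using the get_zip_migration signature above; the module path is an assumption, and since the exact location of the status field inside the response object is not shown here, the sketch simply prints the whole object.

import json

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()

response = service.get_zip_migration(
    import_id='cc6dbbfd-810d-4f0e-b0a9-228c328aff29',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    format='json',  # report format: json or csv
)
result = response.get_result()

# Re-issue the call until the status field in the response object reports
# that the import is completed or failed rather than in progress.
print(json.dumps(result, indent=2))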

Response

Response object of an import request.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The status of the import request cannot be found. This can occur if the given import_id is not valid, or if the import completed long ago and its status information is no longer available.

  • An error occurred. See response for more information.

Example responses

Analyze a Flow for Unit Testing

Analyze a DataStage Flow to determine the stages and links that need to be stubbed during unit testing.

GET /v3/assets/test_cases/{flow_id}
ServiceCall<TestCaseAnalysis> testCaseAnalysis(TestCaseAnalysisOptions testCaseAnalysisOptions)
testCaseAnalysis(params)
test_case_analysis(
        self,
        flow_id: str,
        *,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the TestCaseAnalysisOptions.Builder to create a TestCaseAnalysisOptions object that contains the parameter values for the testCaseAnalysis method.

Path Parameters

  • The ID of the flow to analyze.

Query Parameters

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

The testCaseAnalysis options.

parameters

  • The ID of the flow to analyze.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

parameters

  • The ID of the flow to analyze.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.
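A sketch of requesting the analysis with the test_case_analysis signature above; the flow ID is a placeholder and the module path is an assumption.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()

analysis = service.test_case_analysis(
    flow_id='<flow_id>',  # the flow to analyze
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
).get_result()
# The analysis describes the stages and links to stub during unit testing.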

Response

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Create Test Case specification from Flow Analysis

Create a Test Case specification from a Flow Analysis.

POST /v3/assets/test_cases/{flow_id}
ServiceCall<Void> createTestCaseFromAnalysis(CreateTestCaseFromAnalysisOptions createTestCaseFromAnalysisOptions)
createTestCaseFromAnalysis(params)
create_test_case_from_analysis(
        self,
        flow_id: str,
        analysis: 'TestCaseAnalysis',
        name: str,
        *,
        description: Optional[str] = None,
        tags: Optional[List[str]] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        directory_asset_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CreateTestCaseFromAnalysisOptions.Builder to create a CreateTestCaseFromAnalysisOptions object that contains the parameter values for the createTestCaseFromAnalysis method.

Path Parameters

  • The ID of the flow to analyze.

Query Parameters

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset ID to create the asset in or move it to.

Flow Analysis

The createTestCaseFromAnalysis options.

parameters

  • The ID of the flow to analyze.

  • Asset name.

  • Asset description.

  • List of tags to identify the asset.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset ID to create the asset in or move it to.

parameters

  • The ID of the flow to analyze.

  • Asset name.

  • Asset description.

  • List of tags to identify the asset.

  • The ID of the project to use. catalog_id, space_id, or project_id is required.

    Examples:
  • The ID of the space to use. catalog_id, space_id, or project_id is required.

  • The directory asset ID to create the asset in or move it to.
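A sketch that chains the two calls: analyze the flow, then create the test case from the analysis, using the create_test_case_from_analysis signature above. It assumes the result of test_case_analysis can be passed back directly as the analysis argument; the IDs, name, and tags are placeholders, and the module path is an assumption.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()

analysis = service.test_case_analysis(
    flow_id='<flow_id>',
    project_id='<project_id>',
).get_result()

service.create_test_case_from_analysis(
    flow_id='<flow_id>',
    analysis=analysis,  # assumed to accept the analysis result as-is
    name='my_flow_test_case',
    description='Generated from flow analysis',
    tags=['unit-test'],
    project_id='<project_id>',
)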

Response

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Delete an asset and its attachments

Delete an asset and its attachments.

DELETE /v3/assets/{asset_id}
ServiceCall<Void> deleteAsset(DeleteAssetOptions deleteAssetOptions)
deleteAsset(params)
delete_asset(
        self,
        asset_id: str,
        *,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        purge_test_data: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the DeleteAssetOptions.Builder to create a DeleteAssetOptions object that contains the parameter values for the deleteAsset method.

Path Parameters

  • The ID of the asset.

Query Parameters

  • The ID of the project to use. space_id or project_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. space_id or project_id is required.

  • Only takes effect when deleting a data_intg_test_case asset. By default it is false, which means the data files are kept.

The deleteAsset options.

parameters

  • The ID of the asset.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • Only takes effect when deleting a data_intg_test_case asset. By default it is false, which means the data files are kept.

parameters

  • The ID of the asset.

  • The ID of the project to use. space_id or project_id is required.

    Examples:
  • The ID of the space to use. space_id or project_id is required.

  • Only takes effect when deleting a data_intg_test_case asset. By default it is false, which means the data files are kept.
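A sketch of deleting a test-case asset together with its data files, using the delete_asset signature above; the asset and project IDs are placeholders, and the module path is an assumption.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()

response = service.delete_asset(
    asset_id='<asset_id>',
    project_id='<project_id>',
    purge_test_data=True,  # for data_intg_test_case assets, also remove data files
)
# Success when deletion completed, or an in-progress status if still running.
print(response.get_status_code())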

Response

Status Code

  • The requested operation completed successfully.

  • The requested operation is in progress.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An unexpected error occurred. See response for more information.

No Sample Response

This method does not specify any sample responses.

Generate OPD-code for DataStage buildop

Generate the runtime assets for a DataStage buildop in the specified project or catalog for a specified runtime type. Either project_id or catalog_id must be specified.

POST /v3/ds_codegen/generateBuildOp/{data_intg_bldop_id}
ServiceCall<GenerateBuildOpResponse> generateDatastageBuildop(GenerateDatastageBuildopOptions generateDatastageBuildopOptions)
generateDatastageBuildop(params)
generate_datastage_buildop(
        self,
        data_intg_bldop_id: str,
        *,
        build: Optional['BuildopBuild'] = None,
        creator: Optional['BuildopCreator'] = None,
        directory_asset: Optional[dict] = None,
        general: Optional['BuildopGeneral'] = None,
        properties: Optional[List['BuildopPropertiesItem']] = None,
        schemas: Optional[List[dict]] = None,
        type: Optional[str] = None,
        ui_data: Optional[dict] = None,
        wrapped: Optional['BuildopWrapped'] = None,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        runtime_type: Optional[str] = None,
        enable_async_compile: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GenerateDatastageBuildopOptions.Builder to create a GenerateDatastageBuildopOptions object that contains the parameter values for the generateDatastageBuildop method.

Path Parameters

  • The DataStage BuildOp-Asset-ID to use.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • The type of the runtime to use, for example dspxosh or Spark. If not provided, it is queried from the pipeline flow if available; otherwise the default of dspxosh is used.

  • Whether to compile the flow asynchronously. When set to true, the compile request is queued and then compiled, and the response is returned immediately with the status "Compiling". For the compile status, call the get compile status API.

BuildOp JSON to be attached.

The generateDatastageBuildop options.

parameters

  • The DataStage BuildOp-Asset-ID to use.

  • Build info.

  • Creator information.

  • Directory information.

  • General information.

  • List of stage properties.

  • Array of data record schemas used in the buildop.

    Examples:
  • The operator type.

    Examples:
  • UI data.

    Examples:
  • Wrapped info.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • The type of the runtime to use, for example dspxosh or Spark. If not provided, it is queried from the pipeline flow if available; otherwise the default of dspxosh is used.

  • Whether to compile the flow asynchronously. When set to true, the compile request is queued and then compiled, and the response is returned immediately with the status "Compiling". For the compile status, call the get compile status API.

parameters

  • The DataStage BuildOp-Asset-ID to use.

  • Build info.

  • Creator information.

  • Directory information.

  • General information.

  • List of stage properties.

  • Array of data record schemas used in the buildop.

    Examples:
  • The operator type.

    Examples:
  • UI data.

    Examples:
  • Wrapped info.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • The type of the runtime to use, for example dspxosh or Spark. If not provided, it is queried from the pipeline flow if available; otherwise the default of dspxosh is used.

  • Whether to compile the flow asynchronously. When set to true, the compile request is queued and then compiled, and the response is returned immediately with the status "Compiling". For the compile status, call the get compile status API.
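A sketch of regenerating the runtime assets for an existing buildop asset, using the generate_datastage_buildop signature above; only the commonly used parameters are shown, the buildop ID is a placeholder, and the module path is an assumption.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()

response = service.generate_datastage_buildop(
    data_intg_bldop_id='<buildop_asset_id>',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    runtime_type='dspxosh',     # the default runtime when not specified
    enable_async_compile=True,  # queue the compile and return "Compiling" immediately
)
print(response.get_result())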

Response

Describes the generateBuildOp response model.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The request object contains invalid information. The server is not able to process the request.

  • Unexpected error.

Example responses

Delete pipeline cache

Permanently remove all optimized runner cache (if any) for a Watson pipeline in the specified project or catalog for a specified runtime type. Either project_id or catalog_id must be specified.

DELETE /v3/ds_codegen/pipeline/cache/{pipeline_id}
ServiceCall<Void> deletePipelineCache(DeletePipelineCacheOptions deletePipelineCacheOptions)
deletePipelineCache(params)
delete_pipeline_cache(
        self,
        pipeline_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        cascade: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the DeletePipelineCacheOptions.Builder to create a DeletePipelineCacheOptions object that contains the parameter values for the deletePipelineCache method.

Path Parameters

  • The Watson Pipeline ID to use.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • For the Compile Watson pipeline API, this flag controls whether to compile the primary (main) pipeline and all of its nested pipelines; when set to false, only the primary pipeline is compiled, and it defaults to true. For the Delete pipeline cache API, it controls whether to delete the pipeline cache for all nested pipelines referenced by the pipeline; when set to true, the entire cache for the pipeline and all nested pipelines referenced by it is permanently removed, and it defaults to false.

The deletePipelineCache options.

parameters

  • The Watson Pipeline ID to use.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • For the Compile Watson pipeline API, this flag controls whether to compile the primary (main) pipeline and all of its nested pipelines; when set to false, only the primary pipeline is compiled, and it defaults to true. For the Delete pipeline cache API, it controls whether to delete the pipeline cache for all nested pipelines referenced by the pipeline; when set to true, the entire cache for the pipeline and all nested pipelines referenced by it is permanently removed, and it defaults to false.

parameters

  • The Watson Pipeline ID to use.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • For the Compile Watson pipeline API, this flag controls whether to compile the primary (main) pipeline and all of its nested pipelines; when set to false, only the primary pipeline is compiled, and it defaults to true. For the Delete pipeline cache API, it controls whether to delete the pipeline cache for all nested pipelines referenced by the pipeline; when set to true, the entire cache for the pipeline and all nested pipelines referenced by it is permanently removed, and it defaults to false.
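A sketch of purging the cache for a pipeline and everything it references, using the delete_pipeline_cache signature above; the pipeline ID is a placeholder and the module path is an assumption.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()

service.delete_pipeline_cache(
    pipeline_id='<pipeline_id>',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    cascade=True,  # also remove caches of all nested pipelines (default: false)
)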

Response

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • Unexpected error.

No Sample Response

This method does not specify any sample responses.

Compile Watson pipeline to generate runtime code

Generate Runtime code for a Watson pipeline in the specified project or catalog for a specified runtime type. Either project_id or catalog_id must be specified.

POST /v3/ds_codegen/pipeline/compile/{pipeline_id}
ServiceCall<FlowCompileResponse> compileWatsonPipeline(CompileWatsonPipelineOptions compileWatsonPipelineOptions)
compileWatsonPipeline(params)
compile_watson_pipeline(
        self,
        pipeline_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        enable_inline_pipeline: Optional[bool] = None,
        runtime_type: Optional[str] = None,
        job_name_suffix: Optional[str] = None,
        enable_caching: Optional[bool] = None,
        enable_versioning: Optional[bool] = None,
        cascade: Optional[bool] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the CompileWatsonPipelineOptions.Builder to create a CompileWatsonPipelineOptions object that contains the parameter values for the compileWatsonPipeline method.

Path Parameters

  • The Watson Pipeline ID to use.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

  • Whether to enable inline pipeline execution or not. When this flag is set to true, no individual job runs will be created for nested pipelines. The flag is set to false by default.

  • The type of the runtime to use, for example dspxosh or Spark. If not provided, it is queried from the pipeline flow if available; otherwise the default of dspxosh is used.

  • The name suffix for the created job. If not specified, the pipeline name suffix configured in the DataStage project settings is used.

  • Whether to enable pipeline caching or not. Caching is disabled by default.

  • Whether to enable pipeline versioning or not. Versioning is disabled by default.

  • For the Compile Watson pipeline API, this flag controls whether to compile the primary (main) pipeline and all of its nested pipelines; when set to false, only the primary pipeline is compiled, and it defaults to true. For the Delete pipeline cache API, it controls whether to delete the pipeline cache for all nested pipelines referenced by the pipeline; when set to true, the entire cache for the pipeline and all nested pipelines referenced by it is permanently removed, and it defaults to false.

The compileWatsonPipeline options.

parameters

  • The Watson Pipeline ID to use.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • Whether to enable inline pipeline execution or not. When this flag is set to true, no individual job runs will be created for nested pipelines. The flag is set to false by default.

  • The type of the runtime to use, for example dspxosh or Spark. If not provided, it is queried from the pipeline flow if available; otherwise the default of dspxosh is used.

  • The name suffix for the created job. If not specified, the pipeline name suffix configured in the DataStage project settings is used.

  • Whether to enable pipeline caching or not. Caching is disabled by default.

  • Whether to enable pipeline versioning or not. Versioning is disabled by default.

  • For the Compile Watson pipeline API, this flag controls whether to compile the primary (main) pipeline and all of its nested pipelines; when set to false, only the primary pipeline is compiled, and it defaults to true. For the Delete pipeline cache API, it controls whether to delete the pipeline cache for all nested pipelines referenced by the pipeline; when set to true, the entire cache for the pipeline and all nested pipelines referenced by it is permanently removed, and it defaults to false.

parameters

  • The Watson Pipeline ID to use.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
  • Whether to enable inline pipeline execution or not. When this flag is set to true, no individual job runs will be created for nested pipelines. The flag is set to false by default.

  • The type of the runtime to use, for example dspxosh or Spark. If not provided, it is queried from the pipeline flow if available; otherwise the default of dspxosh is used.

  • The name suffix for the created job. If not specified, the pipeline name suffix configured in the DataStage project settings is used.

  • Whether to enable pipeline caching or not. Caching is disabled by default.

  • Whether to enable pipeline versioning or not. Versioning is disabled by default.

  • For the Compile Watson pipeline API, this flag controls whether to compile the primary (main) pipeline and all of its nested pipelines; when set to false, only the primary pipeline is compiled, and it defaults to true. For the Delete pipeline cache API, it controls whether to delete the pipeline cache for all nested pipelines referenced by the pipeline; when set to true, the entire cache for the pipeline and all nested pipelines referenced by it is permanently removed, and it defaults to false.
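A sketch of compiling a pipeline with caching and inline execution enabled, using the compile_watson_pipeline signature above; the pipeline ID is a placeholder and the module path is an assumption.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()

result = service.compile_watson_pipeline(
    pipeline_id='<pipeline_id>',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
    enable_inline_pipeline=True,  # no separate job runs for nested pipelines
    enable_caching=True,          # caching is disabled by default
    cascade=True,                 # compile the primary and all nested pipelines
).get_result()
print(result)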

Response

Describes the compile response model.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • The request object contains invalid information. The server is not able to process the request.

  • Unexpected error.

Example responses

Retrieve optimized pipeline info

Retrieve pipeline info including generated python code, version, last compile timestamp, and cached results.

GET /v3/ds_codegen/pipeline/info/{pipeline_id}
ServiceCall<OptimizedPipelineInfo> getPipelineInfo(GetPipelineInfoOptions getPipelineInfoOptions)
getPipelineInfo(params)
get_pipeline_info(
        self,
        pipeline_id: str,
        *,
        catalog_id: Optional[str] = None,
        project_id: Optional[str] = None,
        space_id: Optional[str] = None,
        **kwargs,
    ) -> DetailedResponse

Request

Use the GetPipelineInfoOptions.Builder to create a GetPipelineInfoOptions object that contains the parameter values for the getPipelineInfo method.

Path Parameters

  • The Watson Pipeline ID to use.

Query Parameters

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Example: bd0dbbfd-810d-4f0e-b0a9-228c328a8e23

  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Example: 4c9adbb4-28ef-4a7d-b273-1cee0c38021f

The getPipelineInfo options.

parameters

  • The Watson Pipeline ID to use.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:

parameters

  • The Watson Pipeline ID to use.

  • The ID of the catalog to use. catalog_id or project_id or space_id is required.

  • The ID of the project to use. catalog_id or project_id or space_id is required.

    Examples:
  • The ID of the space to use. catalog_id or project_id or space_id is required.

    Examples:
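A sketch of reading back the compilation and cache state, using the get_pipeline_info signature above and the field names from the example response below; the pipeline ID is a placeholder and the module path is an assumption.

from datastage.datastage_v3 import DatastageV3  # assumed module path

service = DatastageV3.new_instance()

info = service.get_pipeline_info(
    pipeline_id='<pipeline_id>',
    project_id='bd0dbbfd-810d-4f0e-b0a9-228c328a8e23',
).get_result()

print(info['compilation']['is_pipeline_compiled'])  # e.g. True after a compile
print(info['cache']['has_cached_results'])          # e.g. True when cached results exist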

Response

Information about the optimized pipeline.

Status Code

  • The requested operation completed successfully.

  • You are not authorized to access the service. See response for more information.

  • You are not permitted to perform this action. See response for more information.

  • An error occurred. See response for more information.

Example responses
  • {
      "cache": {
        "cache_last_modified_time": 1741718218210,
        "has_cached_results": true
      },
      "compilation": {
        "has_parameters": false,
        "is_caching_enabled": true,
        "is_inline_mode": true,
        "is_pipeline_compiled": true,
        "job_name_suffix": "_eltjob",
        "pipeline_last_compiled_time": 1741718217973
      },
      "name": "pipeline_test",
      "version": "1.0.0"
    }