Introduction
You can use Analytics Engine powered by Apache Spark as a compute engine to run analytical and machine learning jobs. The service creates on-demand Spark clusters and runs workloads through offerings such as Spark applications, Spark kernels, and Spark labs. Use this API to perform actions on your Analytics Engine service instance, such as setting the default configuration, managing quota, submitting Spark applications, and creating Spark kernels.
Using Authorization: ZenApiKey token
With a platform API key, you can access everything that you would typically be able to access when you log in to the Cloud Pak for Data web client.
To generate a platform API key through the user experience:
- Log in to the web client.
- From the toolbar, click your avatar.
- Click Profile and settings.
- Click API key > Generate new key.
- Click Generate.
- Click Copy and save your key somewhere safe. You cannot recover this key if you lose it.
Alternatively, you can call the Generate API key method. Note: this method must be called with bearer access token authorization.
When you get the API key from the user experience or from the Generate API key method, you must Base64 encode <username>:<api_key> to get the token.
Using Authorization: Bearer token
If Identity and Access Management (IAM) is not enabled, you can generate a token by using your username and password against the Get authorization token endpoint.
If IAM is enabled, you can generate a token by using your username and password against the /idprovider/v1/auth/identitytoken endpoint.
Generating an access token by using the Get authorization token endpoint. The response includes a token property. Replace {cpd_cluster_host} with the details for the service instance. Replace {username} and {password} with your IBM Cloud Pak for Data credentials.
curl -k -X POST "https://{cpd_cluster_host}/icp4d-api/v1/authorize" \
-d "{\"username\":\"{username}\",\"password\":\"{password}\"}" \
-H "Content-Type: application/json"
Alternatively, you can use an API key instead of a password. Replace {username} and {api_key} with your IBM Cloud Pak for Data credentials.
curl -k -X POST "https://{cpd_cluster_host}/icp4d-api/v1/authorize" \
-d "{\"username\":\"{username}\",\"api_key\":\"{api_key}\"}" \
-H "Content-Type: application/json"
Authenticating to the API by using an access token. Replace {token} with your details.
curl "https://{cpd_cluster_host}/v4/analytics_engines/{method}" -H "Authorization: Bearer {token}" -H "Content-Type: application/json"
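The token flow above can be sketched in Python using only the standard library; `authorize_payload` and `get_bearer_token` are illustrative names, and the endpoint and the `token` response property are taken from the curl examples:

```python
import json
import urllib.request

def authorize_payload(username, password=None, api_key=None):
    """Build the JSON body for POST /icp4d-api/v1/authorize.

    Either a password or an API key may be supplied, mirroring the two
    curl examples above.
    """
    body = {"username": username}
    if api_key is not None:
        body["api_key"] = api_key
    else:
        body["password"] = password
    return json.dumps(body)

def get_bearer_token(cpd_cluster_host, username, password=None, api_key=None):
    """Exchange credentials for a bearer token (requires a live cluster)."""
    req = urllib.request.Request(
        f"https://{cpd_cluster_host}/icp4d-api/v1/authorize",
        data=authorize_payload(username, password, api_key).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # The response includes a "token" property, as noted above.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]
```

`get_bearer_token` performs the network call only when invoked, so the payload builder can be reused or tested independently.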
Authenticating to the API by using a Base64-encoded API key token. Replace {token} with your details. Generate the token by using an API key and its respective username.
echo -n "<username>:<api_key>" | base64
curl "https://{cpd_cluster_host}/v4/analytics_engines/{method}" \
-H "Authorization: ZenApiKey {token}" \
-H "Content-Type: application/json"
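The encode-and-call steps above can also be done programmatically; a minimal Python helper, assuming only the header format shown in the curl example (the function name is illustrative):

```python
import base64

def zen_api_key_headers(username, api_key):
    """Base64 encode "<username>:<api_key>" and build the request headers,
    mirroring the `echo -n "<username>:<api_key>" | base64` step above."""
    token = base64.b64encode(f"{username}:{api_key}".encode("utf-8")).decode("ascii")
    return {
        "Authorization": f"ZenApiKey {token}",
        "Content-Type": "application/json",
    }
```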
Service endpoint
The service endpoint is based on your IBM Cloud Pak deployment URL.
https://{cpd_cluster_host}/v4/analytics_engines
For example, if your instance is deployed at https://www.example.com:31843, you can access the APIs at https://www.example.com:31843/v4/analytics_engines.
Example
curl -X {request_method} "https://{cpd_cluster_host}/v4/analytics_engines/{method}" -H "Authorization: Bearer {token}" -H "Content-Type: application/json"
Error handling
This API uses standard HTTP response codes to indicate whether a method completed successfully. A 2xx response indicates success, a 4xx response indicates a client-side failure, and a 5xx response usually indicates a server-side error.
Status code | Description |
---|---|
200 OK | The request was processed successfully. |
201 Created | The requested resource was created successfully. |
202 Accepted | The request was accepted successfully. |
400 Bad Request | The request could not be processed, often due to a missing required parameter. |
401 Unauthorized | The authorization token is invalid or missing. |
403 Forbidden | The authorization token presented does not have sufficient permission to perform the operation. |
404 Not Found | The requested resource does not exist. |
410 Gone | The requested resource was deleted and no longer exists. |
429 Too Many Requests | The request could not be processed due to too many concurrent requests against the API. |
500 Server Error | Your request could not be processed due to an internal server error. |
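A client can act on the table above mechanically; a small sketch (the category names are illustrative, not part of the API):

```python
def classify_status(code):
    """Map an HTTP status code to a coarse outcome, following the table above."""
    if code in (200, 201, 202):
        return "success"
    if code == 429:
        return "retry"          # too many concurrent requests: back off and retry
    if 400 <= code < 500:
        return "client_error"   # e.g. 400, 401, 403, 404, 410
    if 500 <= code < 600:
        return "server_error"
    return "unexpected"
```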
Methods
Find Analytics Engine by id
Retrieve the details of a single Analytics Engine instance.
GET /v4/analytics_engines/{instance_id}
Request
Path Parameters
GUID of the Analytics Engine service instance to retrieve.
Possible values: length = 36, Value must match regular expression
^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$
Example:
e64c907a-e82f-46fd-addc-ccfafbd28b09
Response
Analytics Engine instance information
GUID of the Analytics Engine instance
Possible values: length = 36
Instance state
Possible values: 1 ≤ length ≤ 32
Timestamp when the state of the instance was changed, in the format YYYY-MM-DDTHH:mm:ssZ
Possible values: length = 20
Example:
2021-01-30T08:30:00Z
Default namespace that is used as the data plane for Spark runtimes
Possible values: 1 ≤ length ≤ 128
- resource_quota
Max cpu quota for an instance
Possible values: 1 ≤ value ≤ 100000
Example:
20
Max memory quota for an instance
Possible values: 1 ≤ value ≤ 100000
Example:
20
Available cpu quota for an instance
Possible values: 1 ≤ value ≤ 100000
Example:
20
Available memory quota for an instance
Possible values: 1 ≤ value ≤ 100000
Example:
20
Space ID associated with the instance
Possible values: 1 ≤ length ≤ 256
Type of context
Possible values: 1 ≤ length ≤ 128
Instance default configuration
Runtime environment for applications and other workloads.
Examples:{ "spark_version": "3.4" }
Home volume PVC name
Possible values: 1 ≤ length ≤ 126
Status Code
OK
Unauthorized
Forbidden
Resource Not Found
Internal Server Error
{
  "instance_id": "7bb1a226-f5f3-416c-8539-95b23244e25c",
  "context_type": "space_instance",
  "namespace": "cpd-instance",
  "creation_time": "Thursday 19 December 2024 06:06:19.174+0000",
  "dataplane_type": "NA",
  "endpoint_type": "public",
  "home_volume": "volumes-cpdvol-pvc",
  "resource_quota": {
    "cpu_quota": 20,
    "memory_quota_gibibytes": 80,
    "avail_cpu_quota": 19,
    "avail_memory_quota_gibibytes": 76
  },
  "state": "Created",
  "state_change_time": "Friday 20 December 2024 04:17:24.012+0000"
}
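The request URL can be built and validated client-side before the call is made; a sketch reusing the GUID pattern from the path-parameter description (the helper name is illustrative):

```python
import re

# GUID pattern from the path-parameter description above
INSTANCE_ID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$"
)

def instance_url(cpd_cluster_host, instance_id):
    """Build the URL for GET /v4/analytics_engines/{instance_id},
    rejecting malformed GUIDs before any request is sent."""
    if not INSTANCE_ID_RE.match(instance_id):
        raise ValueError(f"not a valid instance GUID: {instance_id!r}")
    return f"https://{cpd_cluster_host}/v4/analytics_engines/{instance_id}"
```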
Get instance default runtime
Get the default runtime environment on which all workloads of the instance will run.
GET /v4/analytics_engines/{instance_id}/default_runtime
Request
Path Parameters
The ID of the Analytics Engine instance.
Possible values: length = 36, Value must match regular expression
^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$
Example:
e64c907a-e82f-46fd-addc-ccfafbd28b09
Response
Runtime environment for applications and other workloads.
Spark version of the runtime environment.
Possible values: 1 ≤ length ≤ 3, Value must match regular expression
^3\.(4|5)$
Status Code
OK
Unauthorized
Forbidden
Resource Not Found
Internal Server Error
{ "spark_version": "3.3" }
Replace instance default runtime
Replace the default runtime environment on which all workloads of the instance will run.
PUT /v4/analytics_engines/{instance_id}/default_runtime
Request
Path Parameters
The ID of the Analytics Engine instance.
Possible values: length = 36, Value must match regular expression
^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$
Example:
e64c907a-e82f-46fd-addc-ccfafbd28b09
Default runtime environment on which all workloads will run.
{
"spark_version": "3.4"
}
Spark version of the runtime environment.
Possible values: 1 ≤ length ≤ 3, Value must match regular expression
^3\.(4|5)$
Default:
3.4
Response
Runtime environment for applications and other workloads.
Spark version of the runtime environment.
Possible values: 1 ≤ length ≤ 3, Value must match regular expression
^3\.(4|5)$
Status Code
Instance default runtime was replaced.
Bad Request
Unauthorized
Forbidden
Resource Not Found
Internal Server Error
{ "spark_version": "3.4" }
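A request body for this method can be validated before sending; a small sketch, with the supported versions taken from the possible values listed above (the helper name is illustrative):

```python
# Versions listed in the possible values above
SUPPORTED_SPARK_VERSIONS = {"3.4", "3.5"}

def default_runtime_body(spark_version="3.4"):
    """Build the PUT /default_runtime request body, rejecting
    versions outside the documented set."""
    if spark_version not in SUPPORTED_SPARK_VERSIONS:
        raise ValueError(f"unsupported Spark version: {spark_version!r}")
    return {"spark_version": spark_version}
```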
Get instance default Spark configurations
Get the default Spark configuration properties that will be applied to all applications of the instance.
GET /v4/analytics_engines/{instance_id}/default_configs
Replace instance default Spark configurations
Replace the default Spark configuration properties that will be applied to all applications of the instance.
PUT /v4/analytics_engines/{instance_id}/default_configs
Request
Path Parameters
The ID of the Analytics Engine instance.
Possible values: length = 36, Value must match regular expression
^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$
Example:
e64c907a-e82f-46fd-addc-ccfafbd28b09
Spark configuration properties to replace existing instance default Spark configurations.
{
"spark.driver.memory": "8G",
"spark.driver.cores": "2"
}
Update instance default Spark configurations
Update the default Spark configuration properties that will be applied to all applications of the instance.
PATCH /v4/analytics_engines/{instance_id}/default_configs
Request
Custom Headers
Allowable values: [
application/merge-patch+json
,application/json
]
Path Parameters
The ID of the Analytics Engine instance.
Possible values: length = 36, Value must match regular expression
^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$
Example:
e64c907a-e82f-46fd-addc-ccfafbd28b09
Spark configuration properties to be updated. Properties are merged with the existing configuration properties. Set a property value to null to unset it.
{
"ae.spark.history-server.cores": "1",
"ae.spark.history-server.memory": "4G"
}
Response
Default Spark configuration properties
Status Code
Instance default Spark configurations were updated.
Bad Request
Unauthorized
Forbidden
Resource Not Found
Internal Server Error
{ "ae.spark.history-server.cores": "1", "ae.spark.history-server.memory": "4G", "spark.driver.memory": "8G", "spark.driver.cores": "2" }
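The merge behaviour of this PATCH (with Content-Type application/merge-patch+json) can be previewed locally; a sketch of JSON merge-patch semantics for a flat configuration map, which is an assumption about how the server combines the documents:

```python
def merge_patch(current, patch):
    """Apply JSON merge-patch semantics (RFC 7386) to a flat config dict:
    null (None) unsets a property, any other value replaces or adds it."""
    merged = dict(current)
    for key, value in patch.items():
        if value is None:
            merged.pop(key, None)
        else:
            merged[key] = value
    return merged
```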
(CPD Scheduler) Change the default cpu quota and memory quota of the instance
Change the default cpu and memory quota of the instance. If your instance already has a V3-based resource quota, this API upgrades it to the V4-based CPD Scheduler quota; the upgrade is irreversible.
PUT /v4/analytics_engines/{instance_id}/resource_consumption_limits
Request
Path Parameters
instance id
Required quota limits
Max cpu quota for an instance
Possible values: 1 ≤ length ≤ 5
Example:
10
Max memory quota for an instance
Possible values: 1 ≤ length ≤ 5
Example:
20Gi
Response
Hummingbird instance resource quota
Max cpu quota for an instance
Possible values: 1 ≤ length ≤ 5
Example:
10
Max memory quota for an instance
Possible values: 1 ≤ length ≤ 5
Example:
20Gi
Status Code
Success.
Bad Request - Thrown if the supported scheduler is not enabled or installed
Authorization bearer token not provided in the header
Authorization token provided but not authorized to create instance
Internal error occurred
{ "max_cores": "10", "max_memory": "20Gi" }
(CPD Scheduler) Get the default cpu quota and memory quota of the instance
GET /v4/analytics_engines/{instance_id}/resource_consumption_limits
Response
Hummingbird instance resource quota
Max cpu quota for an instance
Possible values: 1 ≤ length ≤ 5
Example:
10
Max memory quota for an instance
Possible values: 1 ≤ length ≤ 5
Example:
20Gi
Status Code
Success.
Bad Request - Thrown if the supported scheduler is not enabled or installed
Authorization bearer token not provided in the header
Authorization token provided but not authorized to create instance
Internal error occurred
{ "max_cores": "10", "max_memory": "20Gi" }
(CPD Scheduler) Current resource consumption of the instance
GET /v4/analytics_engines/{instance_id}/current_resource_consumption
Response
Hummingbird instance resource quota
- running
Current cpu quota in use
Possible values: 1 ≤ length ≤ 7
Example:
2000m
Current memory quota in use
Possible values: 1 ≤ length ≤ 7
Example:
2Gi
- pending
Current cpu quota in use
Possible values: 1 ≤ length ≤ 7
Example:
2000m
Current memory quota in use
Possible values: 1 ≤ length ≤ 7
Example:
2Gi
Status Code
Success.
Bad Request - Thrown if the supported scheduler is not enabled or installed
Unauthorized
Forbidden
Resource Not Found - Thrown if the resource quota for this instance doesn't exist
Internal error occurred
{ "running": { "cores": "1000m", "memory": "1024Mi" }, "pending": { "cores": "1000m", "memory": "1024Mi" } }
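The consumption values use Kubernetes-style quantities such as 1000m (millicores) and 1024Mi (mebibytes). A parser sketch for turning them into plain numbers; the unit table is inferred from the examples, not stated by the API:

```python
def parse_quantity(quantity):
    """Parse a Kubernetes-style quantity string.

    '1000m' -> 1.0 (cores), '1024Mi' -> bytes, '2Gi' -> bytes.
    Binary suffixes are checked before the bare millicore suffix 'm'.
    """
    suffixes = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "m": 1e-3}
    for suffix, factor in suffixes.items():
        if quantity.endswith(suffix):
            return float(quantity[: -len(suffix)]) * factor
    return float(quantity)
```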
Start the History server

Request
Path Parameters
instance id
History server request payload
Number of cores to be allocated for the History server
Amount of memory to be allocated for the History server
Response
History server details
State of the History server
Number of cores used for the History server
Amount of memory used for the History server
Start time of the History server
Status Code
History server started successfully
Authorization token not provided in the header
Authorization token provided but not authorized to create instance
History server already started
Internal error occurred
{ "state": "started", "cores": "1", "memory": "4G", "start_time": "2024-01-30T10:04:33.183838094Z" }
Stop the History server

Response
State of the History server
Number of cores used for the History server
Amount of memory used for the History server
Start time of the History server
Status Code
History server details
Authorization token not provided in the header
Authorization token provided but not authorized to create instance
History server already stopped
Internal error occurred
No Sample Response
Create an application
Create a Spark application.
POST /v4/analytics_engines/{instance_id}/spark_applications
Request
Path Parameters
instance id to create application
Request json to create an application
Sample payload for submitting pyspark wordcount application
{
"application_details": {
"application": "/opt/ibm/spark/examples/src/main/python/wordcount.py",
"arguments": [
"/opt/ibm/spark/examples/src/main/resources/people.txt"
],
"conf": {
"spark.app.name": "PySpark Application",
"spark.eventLog.enabled": "true"
}
}
}
Sample payload for submitting scala based spark application
{
"application_details": {
"arguments": [
"1"
],
"application": "/opt/ibm/spark/examples/jars/spark-examples*.jar",
"class": "org.apache.spark.examples.SparkPi",
"conf": {
"spark.app.name": "Scala Application",
"spark.eventLog.enabled": "true"
}
}
}
Sample payload for submitting R based spark application
{
"application_details": {
"application": "/opt/ibm/spark/examples/src/main/r/dataframe.R",
"conf": {
"spark.app.name": "R Application",
"spark.eventLog.enabled": "true"
}
}
}
Application details
- application_details
Path of the application to run
Possible values: 1 ≤ length ≤ 256
Example:
/opt/ibm/spark/examples/src/main/python/wordcount.py
Runtime environment for applications and other workloads.
Examples:{ "spark_version": "3.4" }
Path of the jar files containing the application
Possible values: 1 ≤ length ≤ 256
Example:
cos://cloud-object-storage/jars/tests.jar
Package names
Possible values: 1 ≤ length ≤ 256
Entry point for a Spark application bundled as a '.jar' file. This is applicable only for Java or Scala applications.
Possible values: 1 ≤ length ≤ 256, Value must match regular expression
([\p{L}_$][\p{L}\p{N}_$]*\.)*[\p{L}_$][\p{L}\p{N}_$]*
Example:
com.company.path.ClassName
An array of arguments to be passed to the application.
Examples:[ "/opt/ibm/spark/examples/src/main/resources/people.txt" ]
Application configurations to override. Note: see the product documentation for the list of supported parameters.
Examples:{ "spark.driver.cores": "1", "spark.driver.memory": "1g" }
- conf
Application environment configurations to override. Note: see the product documentation for the list of supported parameters.
Examples:{ "SPARK_ENV_LOADED": "2" }
- env
A list of PVCs to mount in the Spark cluster
Response
Application response details
Application id
Possible values: 1 ≤ length ≤ 36
Application state
Possible values: [
ACCEPTED
,FINISHED
,KILLED
,FAILED
,ERROR
,RUNNING
,SUBMITTED
,STOPPED
,WAITING
]Possible values: 1 ≤ length ≤ 32
Runtime environment for applications and other workloads.
Examples:{ "spark_version": "3.4" }
Application start time
Possible values: 2 ≤ length ≤ 20
Spark Application id
Possible values: 1 ≤ length ≤ 128
Status Code
Accepted
Bad Request
Unauthorized
Forbidden
Resource Not Found
Internal Server Error
No Sample Response
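The three sample payloads above share one shape; a builder sketch (the function name is illustrative):

```python
def spark_application_payload(application, arguments=None, main_class=None, conf=None):
    """Build the POST /spark_applications body, matching the sample payloads above."""
    details = {"application": application}
    if arguments:
        details["arguments"] = list(arguments)
    if main_class:
        details["class"] = main_class  # Java/Scala entry point only
    if conf:
        details["conf"] = dict(conf)
    return {"application_details": details}
```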
Retrieve all Spark applications
Gets all applications submitted in the instance with the specified instance ID.
GET /v4/analytics_engines/{instance_id}/spark_applications
Request
Path Parameters
Identifier of the instance where the applications run.
Possible values: length = 36, Value must match regular expression
^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$
Example:
e64c907a-e82f-46fd-addc-ccfafbd28b09
Response
An array of application details
List of applications
Status Code
OK
Bad Request
Unauthorized
Forbidden
Resource Not Found
Internal Server Error
{
  "applications": [
    {
      "application_id": "b10003fd-f68b-407d-8ab5-a6d73ea4da32",
      "template_id": "spark-3.4-cp4d-miniforge-template",
      "state": "FINISHED",
      "start_time": "Friday 03 January 2025 08:40:42.324+0000",
      "finish_time": "Friday 03 January 2025 08:40:56.368+0000",
      "spark_application_id": "app-20250103084042-0000",
      "spark_application_name": "SparkR-DataFrame-example",
      "creation_time": "Friday 03 January 2025 08:39:51.225+0000",
      "runtime": {
        "spark_version": "3.4"
      }
    }
  ]
}
Get application details by application id
Returns details of an application by application id
GET /v4/analytics_engines/{instance_id}/spark_applications/{application_id}
Request
Path Parameters
The instance_id to fetch the application details
The application_id to fetch the application details
Response
Status Code
OK
Bad Request
Unauthorized
Forbidden
Resource Not Found
Internal Server Error
{
  "application_details": {
    "application": "/opt/ibm/spark/examples/src/main/r/dataframe.R",
    "conf": {
      "spark.eventLog.enabled": "true",
      "spark.app.name": "R Application"
    },
    "instance_defaults_at_submission": {
      "conf": {
        "spark.driver.memory": "8G",
        "spark.driver.cores": "2"
      }
    },
    "runtime": {
      "spark_version": "3.4"
    }
  },
  "application_id": "b10003fd-f68b-407d-8ab5-a6d73ea4da32",
  "return_code": "0",
  "state": "FINISHED",
  "start_time": "Friday 03 January 2025 08:40:42.324+0000",
  "finish_time": "Friday 03 January 2025 08:40:56.368+0000",
  "spark_application_id": "app-20250103084042-0000",
  "spark_application_name": "SparkR-DataFrame-example",
  "creation_time": "Friday 03 January 2025 08:39:51.225+0000",
  "deploy_mode": "stand-alone"
}
{
  "application_details": {
    "application": "/opt/ibm/spark/examples/src/main/python/wordcount.py",
    "application_arguments": [
      "/opt/ibm/spark/examples/src/main/resources/people.txt"
    ],
    "mode": "stand-alone",
    "application_id": "a9a6f328-56d8-4923-8042-97652fff2af3",
    "state": "QUEUED",
    "state_details": [
      {
        "type": "info",
        "code": "instance_quota_exhausted",
        "message": "This Job/Application requested exceeded Service Instance quota, please contact your instance admin to increase quota"
      }
    ],
    "start_time": "Wednesday 25 November 2020 14:14:31.311+0000",
    "finish_time": "Wednesday 25 November 2020 14:15:53.205+0000",
    "spark_application_id": "app-20211214013559-0000"
  }
}
Delete the application
Deletes an application by its application ID.
DELETE /v4/analytics_engines/{instance_id}/spark_applications/{application_id}
Get the status of the application
Returns the status of the given Spark application.
GET /v4/analytics_engines/{instance_id}/spark_applications/{application_id}/status
Request
Path Parameters
Identifier of the instance to which the applications belongs.
Possible values: length = 36, Value must match regular expression
^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$
Example:
e64c907a-e82f-46fd-addc-ccfafbd28b09
Identifier of the application for which details are requested.
Possible values: length = 36, Value must match regular expression
^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$
Example:
ff48cc19-0e7e-4627-aac6-0b4ad080397b
Response
Response of Application Get Api
Application id
Possible values: length = 36
Spark Application id
Possible values: 1 ≤ length ≤ 36
state of application
Possible values: 1 ≤ length ≤ 32
Application start time
Possible values: 1 ≤ length ≤ 32
Application finish time
Possible values: 1 ≤ length ≤ 32
Status Code
OK
Bad Request
Unauthorized
Forbidden
Resource Not Found
Internal Server Error
{ "id": "9da32aaf-df69-4e61-bdb8-1b2772c0f677", "state": "FINISHED", "start_time": "2021-04-21T04:24:01.000Z", "finish_time": "2021-04-21T04:28:15.000Z", "spark_application_id": "app-20211214013559-0000" }
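Because the state passes through values such as RUNNING before settling, clients typically poll this endpoint; a sketch, with the terminal-state set taken from the application state values listed earlier (the function name is illustrative):

```python
import time

# Terminal values from the documented application states
TERMINAL_STATES = {"FINISHED", "KILLED", "FAILED", "ERROR", "STOPPED"}

def wait_for_application(fetch_status, interval=10, max_polls=60, sleep=time.sleep):
    """Poll fetch_status() (a callable returning the status JSON as a dict)
    until the application reaches a terminal state."""
    for _ in range(max_polls):
        status = fetch_status()
        if status["state"] in TERMINAL_STATES:
            return status
        sleep(interval)
    raise TimeoutError("application did not reach a terminal state")
```

Passing the fetcher and sleep function as arguments keeps the polling logic independent of any particular HTTP client.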
Request
Path Parameters
instance id
Possible values: length = 36
Required kernel details
{
"name": "python310",
"engine": {
"conf": {
"spark.eventLog.enabled": "true"
}
}
}
Language of the kernel, for example scala, python2, python3, or r
(Optional) Runtime engine information. Provide either environment or engine, but not both.
Response
Kernel information
uuid of kernel
kernel spec name
ISO 8601 timestamp for the last-seen activity on this kernel. Use this in combination with execution_state == 'idle' to identify which kernels have been idle since a given time. Timestamps are UTC, indicated by a 'Z' suffix. Added in notebook server 5.0.
The number of active connections to this kernel.
Current execution state of the kernel (typically 'idle' or 'busy', but may be other values, such as 'starting'). Added in notebook server 5.0.
Possible values: 1 ≤ length ≤ 126
Status Code
The metadata about the newly created kernel.
The maximum number of kernels has already been created.
No Sample Response
Request
Path Parameters
instance id
Possible values: length = 36
kernel uuid
Possible values: length = 36
Response
Kernel information
uuid of kernel
kernel spec name
ISO 8601 timestamp for the last-seen activity on this kernel. Use this in combination with execution_state == 'idle' to identify which kernels have been idle since a given time. Timestamps are UTC, indicated by a 'Z' suffix. Added in notebook server 5.0.
The number of active connections to this kernel.
Current execution state of the kernel (typically 'idle' or 'busy', but may be other values, such as 'starting'). Added in notebook server 5.0.
Possible values: 1 ≤ length ≤ 126
Status Code
Information about the kernel
No Sample Response
Upgrade the connection to a websocket connection
GET /v4/analytics_engines/{instance_id}/jkg/api/kernels/{kernel_id}/channels
Request
Path Parameters
instance id
Possible values: length = 36
kernel uuid
Possible values: length = 36
Response
Kernel information
uuid of kernel
kernel spec name
ISO 8601 timestamp for the last-seen activity on this kernel. Use this in combination with execution_state == 'idle' to identify which kernels have been idle since a given time. Timestamps are UTC, indicated by a 'Z' suffix. Added in notebook server 5.0.
The number of active connections to this kernel.
Current execution state of the kernel (typically 'idle' or 'busy', but may be other values, such as 'starting'). Added in notebook server 5.0.
Possible values: 1 ≤ length ≤ 126
Status Code
Kernel interrupted
No Sample Response
Creates a Spark cluster
Creates a cluster and returns the cluster details.
POST /v4/analytics_engines/{instance_id}/spark_clusters
Request
Path Parameters
instance id to create cluster
Possible values: length = 36
Request JSON to create a cluster
{
"cluster_details": {
"name": "my-lab-1",
"conf": {
"spark.eventLog.enabled": "true"
}
}
}
Cluster details
- cluster_details
Name of the cluster
Runtime environment for applications and other workloads.
Examples:{ "spark_version": "3.4" }
Application configurations to override. Note: see the product documentation for the list of supported parameters.
Examples:{ "spark.driver.cores": "1", "spark.driver.memory": "1g" }
- conf
Application environment configurations to override. Note: see the product documentation for the list of supported parameters.
Examples:{ "SPARK_ENV_LOADED": "2" }
- env
A list of PVCs to mount in the Spark cluster
Response
Application response details
Cluster id
Possible values: length = 36
Example:
1ffda5f8-6e2f-4581-bdb7-0e6a68cb50fe
Cluster state
Possible values: [
ACCEPTED
,FINISHED
,KILLED
,FAILED
,ERROR
,ACTIVE
,SUBMITTED
,STOPPED
,WAITING
]Cluster name
Runtime environment for applications and other workloads.
Examples:{ "spark_version": "3.4" }
Application start time
Status Code
Accepted
Bad Request
Unauthorized
Forbidden
Resource Not Found
Internal Server Error
No Sample Response
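The request body can be assembled the same way as for applications; a builder sketch (the function name is illustrative, and sending a runtime inside cluster_details is an assumption based on the cluster details schema above):

```python
def spark_cluster_payload(name, conf=None, spark_version=None):
    """Build the POST /spark_clusters body, matching the sample payload above."""
    details = {"name": name}
    if spark_version:
        # Assumed to follow the runtime schema shown in the cluster details
        details["runtime"] = {"spark_version": spark_version}
    if conf:
        details["conf"] = dict(conf)
    return {"cluster_details": details}
```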
Retrieve all Spark clusters
Gets all clusters created in the instance with the specified instance ID.
GET /v4/analytics_engines/{instance_id}/spark_clusters
Request
Path Parameters
Identifier of the instance where the clusters run.
Possible values: length = 36, Value must match regular expression
^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$
Example:
e64c907a-e82f-46fd-addc-ccfafbd28b09
Response
An array of cluster details
List of clusters
Status Code
OK
Bad Request
Unauthorized
Forbidden
Resource Not Found
Internal Server Error
{
  "clusters": [
    {
      "id": "db933645-0b68-4dcb-80d8-7b71a6c8e542",
      "name": "IBM Lab",
      "state": "ACTIVE",
      "start_time": "2021-04-21T04:24:01Z",
      "end_time": "2021-04-21T04:25:18Z",
      "finish_time": "2021-04-21T04:25:18Z",
      "default_runtime": {
        "spark_version": "3.4"
      }
    }
  ]
}
Get cluster details by cluster id
Returns details of a cluster by cluster id
GET /v4/analytics_engines/{instance_id}/spark_clusters/{cluster_id}
Request
Path Parameters
The instance_id to fetch the cluster details
Possible values: length = 36
The cluster_id to fetch the cluster details
Possible values: length = 36
Response
Status Code
OK
Bad Request
Unauthorized
Forbidden
Resource Not Found
Internal Server Error
{
  "cluster_details": {
    "name": "IBM Lab",
    "runtime": {
      "spark_version": "3.4"
    }
  },
  "mode": "stand-alone",
  "cluster_id": "a9a6f328-56d8-4923-8042-97652fff2af3",
  "state": "FINISHED",
  "finish_time": "Wednesday 25 November 2020 14:15:53.205+0000",
  "creation_time": "Wednesday 25 November 2020 14:14:31.311+0000"
}
{
  "cluster_details": {
    "cluster_id": "a9a6f328-56d8-4923-8042-97652fff2af3",
    "state": "QUEUED",
    "state_details": [
      {
        "type": "info",
        "code": "instance_quota_exhausted",
        "message": "This Job/Application requested exceeded Service Instance quota, please contact your instance admin to increase quota"
      }
    ],
    "start_time": "Wednesday 25 November 2020 14:14:31.311+0000",
    "finish_time": "Wednesday 25 November 2020 14:15:53.205+0000"
  }
}
Delete the cluster
Deletes a cluster by its cluster ID.
DELETE /v4/analytics_engines/{instance_id}/spark_clusters/{cluster_id}
Connect to Spark Cluster using Websocket
Websocket connection to the Spark cluster
GET /v4/analytics_engines/{instance_id}/spark_clusters/{cluster_id}/connect