Default limits and quotas for Spark engine

The following sections provide details about the default limit and quota settings for the Spark engine.

These default values are set to avoid excessive billing. To override the default limits and quotas for the Spark engine based on your requirements, contact IBM Support.

Application limits

The following table lists the default limits and quotas for the Spark engine.

Default limits and quotas for Spark instances:

| Category | Default |
| --- | --- |
| Maximum number of Spark engines per watsonx.data instance | 3 |
| Maximum number of nodes per Spark engine | 20 |
| Shuffle space per core | Approximately 30 GB (not customizable) |
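
As a rough illustration of how these defaults translate into capacity, the following Python sketch estimates the shuffle space available to a fully scaled Spark engine. The engine and node limits come from the table above; the cores-per-node value is a hypothetical assumption that depends on the node size you provision.

```python
# Illustrative sketch: estimate shuffle capacity for one fully scaled Spark engine.
# The node limit and per-core shuffle space come from the defaults above;
# cores_per_node is an assumed value for illustration only.

MAX_NODES_PER_ENGINE = 20        # default maximum number of nodes per Spark engine
SHUFFLE_SPACE_PER_CORE_GB = 30   # approximate, not customizable

cores_per_node = 4               # assumption: depends on the node size you choose

max_shuffle_space_gb = MAX_NODES_PER_ENGINE * cores_per_node * SHUFFLE_SPACE_PER_CORE_GB
print(f"Approximate shuffle space at full scale: {max_shuffle_space_gb} GB")
# With 4 cores per node: 20 nodes x 4 cores x 30 GB = 2400 GB
```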

Supported Spark driver and executor vCPU and memory combinations

Apache Spark supports only the following pre-defined Spark driver and executor vCPU and memory combinations.

These two vCPU to memory proportions are supported: 1 vCPU to 4 GB of memory and 1 vCPU to 8 GB of memory.

The following table shows the supported vCPU to memory size combinations.

Supported vCPU to memory size combinations:

| Lower value | Upper value |
| --- | --- |
| 1 vCPU x 1 GB | 10 vCPU x 48 GB |
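
To see how these proportions map onto standard Spark configuration, here is a minimal sketch that requests driver and executor sizes in the 1 vCPU to 4 GB proportion. The specific sizes chosen (1 vCPU x 4 GB driver, 2 vCPU x 8 GB executors) are illustrative, and how you pass the configuration depends on how you submit the workload; the sketch assumes PySpark and uses only standard Spark configuration properties.

```python
# Minimal sketch (assumes PySpark): request driver and executor sizes that
# follow the supported 1 vCPU : 4 GB proportion and stay within the
# 1 vCPU x 1 GB to 10 vCPU x 48 GB range shown above.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("sizing-example")
    .config("spark.driver.cores", "1")
    .config("spark.driver.memory", "4g")    # 1 vCPU : 4 GB
    .config("spark.executor.cores", "2")
    .config("spark.executor.memory", "8g")  # 2 vCPU : 8 GB, same 1:4 proportion
    .getOrCreate()
)
```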

Supported Spark versions

IBM® watsonx.data supports the following Spark runtime versions to run Spark workloads.

Supported Spark versions:

| Name | Status |
| --- | --- |
| Apache Spark 3.4.4 | Supported |
| Apache Spark 3.5.4 | Supported |
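
As a quick sanity check, a workload can confirm which runtime it is running on by reading the standard SparkSession.version attribute. The sketch below assumes PySpark and simply compares the reported version against the supported versions listed above.

```python
# Minimal sketch (assumes PySpark): report the Spark runtime version and warn
# if it is not one of the supported versions listed above.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("version-check").getOrCreate()

print(f"Running on Spark {spark.version}")
if not spark.version.startswith(("3.4", "3.5")):
    print("Warning: this runtime is not a supported Spark version.")
```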