IBM Cloud Docs
How Event Streams uses limits and quotas

Event Streams uses quotas to control the resources, such as network bandwidth, that a service can consume. The types and levels of quotas depend on whether you use the Lite, Standard, Enterprise, or Satellite plan.

Lite plan

Network throughput

A recommended maximum for network throughput is 100 KB per second. Throughput is expressed as the number of bytes per second that can be both sent and received in a cluster.

The recommended figure is based on a typical workload and considers the possible impact of operational actions such as internal updates or failure modes, like the loss of an availability zone. If the average throughput exceeds the recommended figure, a loss in performance might be experienced during these conditions.

Partitions

One partition for each service instance.

Retention

A maximum of 100 MB for the partition.

Consumer groups

A maximum of 10 consumer groups. When the limit is exceeded, the GROUP_MAX_SIZE_REACHED error is returned to the client.

Other limits

  • Maximum message size: 1 MB
  • Maximum concurrently active Kafka clients: 5
  • Maximum request rate [HTTP Produce API]: 5 per second
  • Maximum request rate [HTTP Admin API]: 10 per second
  • Maximum record key size (REST Producer API): 4 KB
  • Maximum record value size (REST Producer API): 64 KB
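
Because records that exceed these limits are rejected, a client can pre-check sizes before calling the REST Producer API. The following is an illustrative sketch only; the helper name and the byte-level check are assumptions, not part of the Event Streams API:

```python
MAX_KEY_BYTES = 4 * 1024      # 4 KB record key limit (REST Producer API)
MAX_VALUE_BYTES = 64 * 1024   # 64 KB record value limit (REST Producer API)

def validate_record(key: bytes, value: bytes) -> None:
    """Raise ValueError if a record would exceed the REST Producer API
    size limits; oversized records are rejected by the service."""
    if len(key) > MAX_KEY_BYTES:
        raise ValueError(f"key is {len(key)} bytes; limit is {MAX_KEY_BYTES}")
    if len(value) > MAX_VALUE_BYTES:
        raise ValueError(f"value is {len(value)} bytes; limit is {MAX_VALUE_BYTES}")
```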

Standard plan

Network throughput

The maximum throughput for each service instance equates to 1 MB per second per partition up to a maximum of 20 MB per second. For example, for a service instance with 10 partitions, the maximum throughput is 10 MB per second and for 30 partitions it is 20 MB per second.

The throughput is measured separately for producers and consumers. When the limit is exceeded, throttling is applied by slightly delaying responses to requests, effectively applying a gentle brake to producers and consumers until bandwidth drops back below the limit.
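
The per-partition scaling rule above can be sketched as a small calculation (illustrative only; the function name is an assumption, while the 1 MB/s per partition rate and 20 MB/s cap are the Standard plan figures stated above):

```python
def standard_max_throughput_mb(partitions: int) -> int:
    """Maximum throughput (MB per second) for a Standard plan service
    instance: 1 MB/s per partition, capped at 20 MB/s."""
    return min(partitions, 20)

print(standard_max_throughput_mb(10))  # 10
print(standard_max_throughput_mb(30))  # 20 (capped)
```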

Partitions

One hundred partitions for each service instance.

Retention

A maximum of 1 GB for each partition.

Consumer groups

A maximum of 1000 consumer groups. When the limit is exceeded, the GROUP_MAX_SIZE_REACHED error is returned to the client.

Other limits

  • Maximum message size: 1 MB
  • Maximum concurrently active Kafka clients: 500
  • Maximum request rate [HTTP Produce API]: 100 per second
  • Maximum request rate [HTTP Admin API]: 10 per second
  • Maximum record key size (REST Producer API): 4 KB
  • Maximum record value size (REST Producer API): 64 KB

Enterprise plan

Network throughput

Network throughput capacity is based on the peak maximum. Each peak maximum has a recommended maximum for typical production workloads.

Table 1. Network throughput capacity on Enterprise
| Peak maximum | Recommended maximum |
| --- | --- |
| 150 MB/s (75 MB/s producing and 75 MB/s consuming) | 100 MB/s (50 MB/s producing and 50 MB/s consuming) |
| 300 MB/s (150 MB/s producing and 150 MB/s consuming) | 200 MB/s (100 MB/s producing and 100 MB/s consuming) |
| 450 MB/s (225 MB/s producing and 225 MB/s consuming) | 300 MB/s (150 MB/s producing and 150 MB/s consuming) |

Throughput is expressed as the number of bytes per second that can be both sent and received in a service instance. The peak maximum throughput capacity can be selected when the service instance is created, and later scaled as demands increase.

Throughput capacity cannot be scaled down. Moving to a lower throughput capacity requires creating a new Event Streams service instance at the lower capacity.

The recommended maximum figure is based on a typical workload and considers the possible impact of operational actions such as internal updates or failure modes, like the loss of an availability zone. If the average throughput exceeds the recommended figure, a loss in performance might be experienced during these conditions. It is recommended to plan your maximum throughput capacity as two-thirds of the peak maximum. For example, two-thirds of the 150 MB/s peak maximum with one capacity unit is 100 MB/s.

For more information, see Scaling Event Streams.
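
The two-thirds planning guideline can be expressed as a simple calculation (an illustrative sketch; the function name is an assumption):

```python
def recommended_max_mb(peak_mb: float) -> float:
    """Recommended sustained throughput: two-thirds of the peak maximum."""
    return peak_mb * 2 / 3

for peak in (150, 300, 450):
    print(f"{peak} MB/s peak -> {recommended_max_mb(peak):.0f} MB/s recommended")
```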

Partitions

The maximum number of partitions increases in line with the number of capacity units, so 3000 for 150 MB/s, 6000 for 300 MB/s and 9000 for 450 MB/s in Enterprise.

You can change the number of capacity units by using the self-service option as described in Scaling Enterprise plan capacity.
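
The partition limits above scale linearly with capacity units, which can be sketched as follows (illustrative only; the names are assumptions, while the figures of 3000 partitions and 150 MB/s per capacity unit come from the text above):

```python
PARTITIONS_PER_UNIT = 3000   # Enterprise: partition allowance per capacity unit
PEAK_MB_PER_UNIT = 150       # each capacity unit provides 150 MB/s peak throughput

def max_partitions(capacity_units: int) -> int:
    """Enterprise partition limit scales linearly with capacity units."""
    return capacity_units * PARTITIONS_PER_UNIT

for units in (1, 2, 3):
    print(f"{units * PEAK_MB_PER_UNIT} MB/s -> {max_partitions(units)} partitions")
```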

Retention

The storage capacity can be selected when the service instance is created, and later scaled as demands increase. Storage capacity is dependent upon the configured throughput capacity. For more information, see Scaling Event Streams on storage capacity options.

Storage capacity cannot be scaled down. Moving to a lower storage capacity requires creating a new Event Streams service instance at the lower capacity.

Schema Registry

Schemas

  • Maximum number of schemas that can be stored: 1000
  • Maximum number of schema versions for each schema that can be stored: 100
  • Maximum schema size: 64 KB

Limits

  • Maximum request rate [HTTP Schema Admin]: 10 per second
  • Maximum request rate [HTTP Serdes]: 100 per second

Other limits

  • Maximum message size: 1 MB
  • Maximum concurrently active Kafka clients: 10000
  • Maximum record key size (REST Producer API): 4 KB
  • Maximum record value size (REST Producer API): 64 KB
  • Maximum message rate (REST Producer API): 200 per second

Satellite plan

Network throughput

Network throughput capacity is based on the peak maximum. Each peak maximum has a recommended maximum for typical production workloads.

Table 2. Network throughput capacity on Satellite
| Peak maximum | Recommended maximum |
| --- | --- |
| 150 MB/s (75 MB/s producing and 75 MB/s consuming) | 100 MB/s (50 MB/s producing and 50 MB/s consuming) |

Throughput is expressed as the number of bytes per second that can be both sent and received in a service instance.

Throughput capacity cannot be scaled down. Moving to a lower throughput capacity requires creating a new Event Streams service instance at the lower capacity.

The following figures are not verified. They are guidelines only.

The recommended maximum figure is based on a typical workload and considers the possible impact of operational actions such as internal updates or failure modes, like the loss of an availability zone. If the average throughput exceeds the recommended figure, a loss in performance might be experienced during these conditions. It is recommended to plan your maximum throughput capacity as two-thirds of the peak maximum. For example, two-thirds of the 150 MB/s peak maximum with one capacity unit is 100 MB/s.

Partitions

The maximum number of partitions is related to the number of capacity units, so 3000 for 150 MB/s in Satellite.

This is a hard limit for the Satellite plan. If you reach it, you can no longer create topics.

Retention

You must implement mechanisms to back up your data to meet your retention requirements.

Schema Registry

The Schema Registry is not supported on the Satellite plan.

Other limits

  • Maximum message size: 1 MB
  • Maximum concurrently active Kafka clients: 10000