Service limitations
Red Hat® OpenShift® on IBM Cloud® and the Red Hat OpenShift open source project come with default service settings and limitations to ensure security, convenience, and basic functionality. Where noted, you might be able to change some of these limitations.
If you anticipate reaching any of the following Red Hat OpenShift on IBM Cloud limitations, contact IBM Support and provide the cluster ID, the new quota limit, and the region in your support ticket.
Service and quota limitations
Red Hat OpenShift on IBM Cloud comes with the following service limitations and quotas that apply to all clusters, regardless of the infrastructure provider that you plan to use. Keep in mind that the classic and VPC cluster limitations also apply.
To view quota limits on cluster-related resources in your IBM Cloud account, use the `ibmcloud oc quota ls` command.
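For example, checking your current quotas might look like the following; the login and region-targeting steps assume an interactive session, and the exact output columns vary by account and CLI version.

```sh
# Target the region where your clusters run, then list the
# cluster-related quota limits for your account.
ibmcloud login --sso
ibmcloud target -r us-south
ibmcloud oc quota ls
```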
Category | Description |
---|---|
API rate limits | 200 requests per 10 seconds to the Red Hat OpenShift on IBM Cloud API from each unique source IP address. |
App deployment | The apps that you deploy to and services that you integrate with your cluster must be able to run on the operating system of the worker nodes. |
Container-native virtualization | The Red Hat OpenShift container-native virtualization add-on to run VM workloads alongside container workloads is not supported by IBM. If you choose to install the add-on yourself, you must use bare metal machines, not virtual machines. You are responsible for resolving any issues and impact to your workloads from using container-native virtualization. |
Calico network plug-in | Changing the Calico plug-in, components, or default Calico settings is not supported. For example, don't deploy a new Calico plug-in version, or modify the daemon sets or deployments for the Calico components, default IPPool resources, or Calico nodes. Instead, you can follow the documentation to create a Calico NetworkPolicy or GlobalNetworkPolicy (see the policy sketch after this table), to change the Calico MTU, or to disable the port map plug-in for the Calico CNI. |
Cluster quota | You can't exceed 100 clusters per region and per infrastructure provider. However, as of 01 January 2024, quotas are increased incrementally before reaching 100. If you need more of the resource, contact IBM Support and include the new quota limit for the region and infrastructure provider that you want in the support case. To list quotas, run `ibmcloud oc quota ls`. |
Kubernetes | Make sure to review the Kubernetes project limitations. |
KMS provider | Customizing the IP addresses that are allowed to connect to your IBM® Key Protect for IBM Cloud® instance is not supported. |
Red Hat OpenShift | Make sure to review the OpenShift Container Platform limitations for your version. |
Kubernetes pod logs | To check the logs for individual app pods, you can use the command line to run `oc logs <pod_name>` (see the example after this table). Don't use the Kubernetes dashboard to stream logs for your pods, which might disrupt your access to the Kubernetes dashboard. |
Monitoring | |
Operating system | Worker nodes must run one of the supported operating systems. You can't create a cluster with worker nodes that run different types of operating systems. For more information, see the Red Hat OpenShift on IBM Cloud version information. |
OperatorHub catalog | To use the OperatorHub catalog in private clusters, see Disabling OperatorHub and mirroring catalog source images to icr.io. |
Pod instances | You can run 110 pods per worker node. If you have worker nodes with 11 CPU cores or more, you can support 10 pods per core, up to a limit of 250 pods per worker node. The number of pods includes kube-system and ibm-system pods that run on the worker node. For improved performance, consider limiting the number of pods that you run per compute core so that you don't overuse the worker node. For example, on a worker node with a b3c.4x16 flavor, you might run 10 pods per core that use no more than 75% of the worker node's total capacity. |
Time-based one-time passcode (TOTP) (deprecated) | To use TOTP, make sure that you enable multifactor authentication (MFA) for your entire IBM Cloud account. If MFA is enabled only for some users but not at the account level, authentication errors might occur. |
Worker node quota | A maximum of 500 worker nodes applies to any account created before 01 January 2024. For accounts created on or after that date, the maximum quota is 200, reached after a period of lower quotas. Quotas apply per cluster infrastructure provider. If you need more of the resource, contact IBM Support and include the new quota limit for the region and infrastructure provider that you want in the support case. To list quotas, run `ibmcloud oc quota ls`. |
Worker pool size | You must always have a minimum of 2 nodes in your cluster. Because of the worker node quota, you are limited in the number of worker pools per cluster and number of worker nodes per worker pool. For example, with the default worker node quota of 500 per region, you might have up to 500 worker pools of 1 worker node each in a region with only 1 cluster. Or, you might have 1 worker pool with up to 500 worker nodes in a region with only 1 cluster. |
Red Hat Enterprise Linux CoreOS worker nodes | The maximum number of zones that can be added to a cluster is 15. For example, 4 RHCOS worker pools with 3 zones each account for 12 of the 15-zone quota for that cluster. |
Cluster naming | To ensure that the Ingress subdomain and certificate are correctly registered, the first 24 characters of your cluster names must be unique. If you create and delete clusters with the same name, or with names that share the same first 24 characters, 5 or more times within 7 days, such as for automation or testing purposes, you might reach the Let's Encrypt Duplicate Certificate rate limit. |
Resource groups | A cluster can be created in only one resource group, which you can't change afterward. If you create a cluster in the wrong resource group, you must delete the cluster and re-create it in the correct resource group. Furthermore, if you need to use the `ibmcloud oc cluster service bind` command to integrate with an IBM Cloud service, that service must be in the same resource group as the cluster. Services that don't use resource groups, like IBM Cloud Container Registry, or that don't need service binding, like IBM Log Analysis, work even if the cluster is in a different resource group. |
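As a sketch of the kind of supported Calico change that the Calico network plug-in row refers to, you might create a GlobalNetworkPolicy like the following; the policy name, selector, and source CIDR are hypothetical placeholders, and the `calicoctl` CLI must be configured for your cluster.

```sh
# Apply a hypothetical GlobalNetworkPolicy that allows ingress from one CIDR.
calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-office-ingress   # hypothetical name
spec:
  selector: all()
  ingress:
    - action: Allow
      source:
        nets:
          - 203.0.113.0/24     # hypothetical source range
EOF
```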
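Similarly, for the Kubernetes pod logs row, checking logs from the command line rather than the dashboard might look like this; the pod and namespace names are placeholders.

```sh
# Stream the logs of a single app pod; replace the placeholders.
oc logs <pod_name> -n <namespace> --follow
```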
Red Hat OpenShift on IBM Cloud cluster limitations
Review limitations that are specific to Red Hat OpenShift clusters. Keep in mind that the service and classic cluster or VPC cluster limitations also apply.
Category | Description |
---|---|
Cluster autoscaling | The Red Hat OpenShift cluster autoscaler from the Red Hat OpenShift Administration > Cluster Settings console, and the ClusterAutoscaler object from the autoscaling.openshift.io/v1 API, are not supported. Instead, use the `ibm-iks-cluster-autoscaler` Helm plug-in (see the install sketch after this table). |
Cluster updates | You must update your cluster by using the Red Hat OpenShift on IBM Cloud API, CLI, or console tools. You can't update your cluster version from OpenShift Container Platform tools such as the Red Hat OpenShift web console. |
Container logs | If you use a container logging operator such as Fluentd to send logs to an Elasticsearch stack, you must update the cluster logging deployment to use the ibmc-block-gold storage class. |
Private clusters | Depending on the infrastructure provider, your options for private clusters are limited. |
Logging | To set up an OpenShift Container Platform Elasticsearch, Fluentd, and Kibana (EFK) stack, see installing the cluster logging operator. |
Service catalog | The service catalog is not supported. Use Operators instead. Do not use the OperatorHub to install the service catalog. |
Service mesh | The Istio managed add-on is not supported. Instead, use the Red Hat service mesh operator. Note: The default IBM Cloud configuration of the routers enables host networking, which is not compatible with the service mesh network policy. For the service mesh ingress to work, apply a network policy. |
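As a minimal sketch of the supported autoscaling path from the cluster autoscaling row, you might install the `ibm-iks-cluster-autoscaler` Helm chart as follows; the repository name and URL are assumptions, so verify them against the current IBM Cloud Helm catalog.

```sh
# Add the IBM Cloud Helm repository (URL is an assumption; verify in the docs).
helm repo add iks-charts https://icr.io/helm/iks-charts
helm repo update

# Install the cluster autoscaler into the kube-system namespace.
helm install ibm-iks-cluster-autoscaler iks-charts/ibm-iks-cluster-autoscaler \
  --namespace kube-system
```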
Classic cluster limitations
Classic infrastructure clusters in Red Hat OpenShift on IBM Cloud are released with the following limitations.
Compute
Keep in mind that the service limitations also apply.
Category | Description |
---|---|
Reserved instances | Reserved capacity and reserved instances are not supported. |
Worker node flavors | Worker nodes are available in select flavors of compute resources. |
Worker node host access | For security, you can't SSH into the worker node compute host. |
Networking
Keep in mind that the service limitations also apply.
Category | Description |
---|---|
Ingress ALBs | |
Network load balancers (NLB) | |
Red Hat OpenShift web console | The web console cannot be exposed on the private network on clusters that have both public and private endpoints. If you want to expose the web console on the private network, your cluster cannot have a public endpoint enabled. |
Private VLANs only | Private network load balancers (NLBs) can't be registered with the domain name server (DNS), so the cluster can't be created with only a private network interface. Worker nodes must be connected to both public and private VLANs. You can still create a private service to expose your apps on only the private network. |
Service endpoints | When you create a cluster, you can enable the public and private cloud service endpoint or the public cloud service endpoint only, but you can't enable the private cloud service endpoint only. After cluster creation, you can't later change the service endpoints. |
strongSwan VPN service | See strongSwan VPN service considerations. |
Service IP addresses | You can have 65,000 IP addresses per cluster in the 172.21.0.0/16 range that you can assign to Kubernetes services within the cluster. |
Subnets per VLAN | Each VLAN has a limit of 40 subnets. |
Storage
Keep in mind that the service limitations also apply.
Category | Description |
---|---|
Volume instances | You can have a total of 250 IBM Cloud infrastructure file and block storage volumes per account. If you mount more than this amount, you might see an out of capacity message when you provision persistent volumes. For more FAQs, see the file and block storage docs. If you want to mount more volumes, contact IBM Support. In your support ticket, include your account ID and the new file or block storage volume quota that you want. |
Portworx | Review the Portworx limitations. |
File storage | Because of the way that IBM Cloud NFS file storage configures Linux user permissions, you might encounter errors when you use file storage. If so, you might need to configure Red Hat OpenShift Security Context Constraints or use a different storage type. |
VPC cluster limitations
VPC clusters in Red Hat OpenShift on IBM Cloud are released with the following limitations. Additionally, all the underlying VPC quotas, VPC limits, VPC service limitations, and regular service limitations apply.
Compute
Keep in mind that the service limitations also apply.
Category | Description |
---|---|
Encryption | The secondary disks of your worker nodes are encrypted at rest by default by the underlying VPC infrastructure provider. However, you can't bring your own encryption to the underlying virtual server instances. |
Location | VPC clusters are available only in select multizone regions. |
Virtual Private Cloud | See Limitations and Quotas. |
Worker node flavors | Only certain flavors are available for worker node virtual machines. Bare metal machines are not supported. |
Worker node host access | For security, you can't SSH into the worker node compute host. |
Worker node updates | You can't update or reload VPC worker nodes. Instead, you can delete the worker node and rebalance the worker pool with the `ibmcloud oc worker replace` command (see the example after this table). If you replace multiple worker nodes at the same time, they are deleted and replaced concurrently, not one by one. Make sure that you have enough capacity in your cluster to reschedule your workloads before you replace worker nodes. |
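For example, replacing a VPC worker node from the CLI might look like the following; the cluster and worker identifiers are placeholders.

```sh
# Find the ID of the worker node to replace.
ibmcloud oc worker ls --cluster <cluster_name_or_ID>

# Delete the worker node and re-create it in the same worker pool.
ibmcloud oc worker replace --cluster <cluster_name_or_ID> --worker <worker_ID>
```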
Networking
Keep in mind that the service limitations also apply.
Category | Description |
---|---|
App URL length | DNS resolution is managed by the cluster's virtual private endpoint (VPE), which can resolve URLs up to 130 characters. If you expose apps in your cluster with URLs, such as the Ingress subdomain or Red Hat OpenShift routes, ensure that the URLs are 130 characters or fewer. |
Network speeds | VPC profile network speeds refer to the speeds of the worker node interfaces. The maximum speed available to your worker nodes is 25 Gbps. Because IP in IP encapsulation is required for traffic between pods that are on different subnets, data transfer speeds between pods on different subnets might be slower, about half the compute profile network speed. Overall network speeds for apps that you deploy to your cluster depend on the worker node size and the application's architecture. |
NodePort | You can access an app through a NodePort only if you are connected to your private VPC network, such as through a VPN connection. To access an app from the internet, you must use a VPC load balancer or Ingress service instead. |
Pod network | VPC access control lists (ACLs) filter incoming and outgoing traffic for your cluster at the subnet level, and security groups filter incoming and outgoing traffic for your cluster at the worker node level. To control traffic within the cluster at the pod-to-pod level, you can't use VPC security groups or ACLs. Instead, use Calico and Kubernetes network policies, which can control the pod-level network traffic that uses IP in IP encapsulation. |
Public gateway | If the public service endpoint is enabled, you must attach a public gateway to each VPC subnet so that your worker nodes can communicate on the public network. Default Red Hat OpenShift components, such as the web console and OperatorHub, require public network access. |
Service endpoints | When you create your VPC cluster in the IBM Cloud console, your cluster has both a public and a private cloud service endpoint. If you want only a private cloud service endpoint, you must create the cluster in the CLI instead and include the `--disable-public-service-endpoint` option (see the example after this table). If you include this option, your cluster is created with routers and Ingress controllers that expose your apps on the private network only by default. If you later want to expose apps to a public network, you must manually create public routers and Ingress controllers. |
strongSwan VPN service | The strongSwan service is not supported. To connect your cluster to resources in an on-premises network or another VPC, see Using VPN with your VPC. |
Subnets | |
VPC load balancer | See VPC load balancer limitations. |
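As a sketch of the private-only cluster creation described in the service endpoints row, assuming placeholder values for the name, zone, VPC, subnet, and flavor:

```sh
# Create a VPC cluster with only a private cloud service endpoint.
# All angle-bracket values are placeholders.
ibmcloud oc cluster create vpc-gen2 \
  --name <cluster_name> \
  --zone <zone> \
  --vpc-id <vpc_ID> \
  --subnet-id <subnet_ID> \
  --flavor <flavor> \
  --workers 3 \
  --disable-public-service-endpoint
```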
Storage
Keep in mind that the service limitations also apply.
Category | Description |
---|---|
Storage class for profile sizes | For more information, see available volume profiles. |
Supported types | You can set up IBM Cloud Object Storage and Cloud Databases only. |
Volume attachments | See Volume attachment limits. |
Portworx | Review the Portworx limitations. |
Block Storage for VPC | The default storage class in VPC clusters cannot be changed. However, you can create your own storage class (see the sketch after this table). |
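A custom storage class for Block Storage for VPC might look like the following sketch; the class name and profile are assumptions, and the `vpc.block.csi.ibm.io` provisioner should be verified against the CSI driver installed in your cluster.

```sh
# Create a custom Block Storage for VPC storage class
# (name and profile are assumptions).
oc apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-vpc-block-custom
provisioner: vpc.block.csi.ibm.io
parameters:
  profile: general-purpose
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
```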
Satellite cluster limitations
Review the following limitations for Red Hat OpenShift on IBM Cloud clusters that you create in a Satellite location. Keep in mind that the service limitations also apply.
Category | Description |
---|---|
Cluster add-ons | Review the unsupported managed add-ons for Red Hat OpenShift clusters in a Satellite location. For example, the cluster autoscaler and Istio are not supported. |
Logging and monitoring | You can't currently use the Red Hat OpenShift on IBM Cloud console or the observability plug-in CLI (`ibmcloud ob`) to enable logging and monitoring for Satellite clusters. Instead, you can manually deploy Log Analysis agents and Monitoring agents to your cluster to forward logs and metrics to IBM Log Analysis and IBM Cloud Monitoring. |
Network | |
Storage for worker node hosts | See Host storage and attached devices. |
Storage for apps | No storage provider is installed in your Satellite clusters by default. Therefore, no preconfigured Kubernetes storage classes are set up by default in your clusters to store your application data in a Kubernetes persistent volume that is backed by a storage device. For options to set up a storage provider, see Understanding Satellite storage templates. |
Worker nodes | Worker nodes run on hosts in your own infrastructure environments. The hosts must meet host and provider-specific requirements, such as for AWS, Azure, GCP, and IBM Cloud (testing and demonstration purposes only). You are responsible for managing the infrastructure lifecycle of your hosts, including adding and updating worker nodes. As such, worker node operations such as the `ibmcloud oc worker add`, `update`, `replace`, and `reload` commands are not supported. |
Worker pools | To use operations such as `resize`, your worker pool's host labels must match available (unassigned) hosts in the Satellite location. |
Single node clusters | Any cluster with fewer than three worker nodes lacks high availability. By provisioning a single-node cluster, you accept that you are more likely to experience downtime and disruptions in your workload, and that regular worker node upgrades result in your workload going offline. Additionally, if a cluster is provisioned as a single-node cluster, it cannot later be converted to a standard, highly available cluster. You can add more nodes, but standard deployments do not increase in replica size and the cluster does not become highly available. Single node clusters must run on a Satellite location with Red Hat CoreOS (RHCOS) enabled. Control plane hosts on your location and the host you assign to your single-node cluster must run either the RHEL 8 or RHCOS operating systems. Only supported for Satellite clusters that run version 4.11 or later. OpenShift Data Foundation is not supported on single-node clusters. Portworx is not supported on single-node clusters. |
Unsupported features and operators in Red Hat OpenShift on IBM Cloud
The following features and operators are not supported in Red Hat OpenShift on IBM Cloud.
Instead of tuning worker node performance with MachineConfig files in Red Hat OpenShift, you can modify the host with a daemon set (see the sketch after the following list). For more information, see Changing the Calico MTU or Tuning performance for Red Hat CoreOS worker nodes.
- AMQ Broker
- AMQ Broker LTS
- AMQ Interconnect
- AMQ Online
- AMQ Streams
- Ansible Automation Platform Resource Operator
- API Designer
- Business Automation Operator
- Camel K
- Cost management Operator
- Data Grid Operator
- Device Manager
- File Integrity Operator
- Fuse Console
- Fuse Online
- Gatekeeper Operator
- JBoss EAP
- JBoss Web Server
- Logical volume manager storage (LVM)
- MachineConfigs
- Metering and Cost Management SaaS Service
- OpenShift Cloud Manager (OCM) SaaS Service
- OpenShift Data Foundation: Supported through the cluster add-on for Classic and VPC clusters or through the Satellite template for Satellite clusters.
- OVS and OVN SDN
- Performance Add-on Operator
- PTP Operator
- Quay Operator
- Red Hat OpenStack Platform Kuryr Integration
- Red Hat Integration Operator
- Service Registry Operator
- Smart Gateway Operator
- SR-IOV Network Operator: Supported in Satellite clusters only.
- Telemeter and Insights Connected Experience
- Windows Machine Config: Worker nodes with Windows operating systems are not supported.
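As mentioned before the list, a daemon set can stand in for a MachineConfig when you need to tune worker node hosts. The following is a minimal sketch under that assumption; the name, namespace, image, and sysctl setting are illustrative, and privileged pods require an appropriate security context constraint.

```sh
# A hypothetical host-tuning daemon set: a privileged init container sets a
# sysctl on each worker node, then a lightweight container keeps the pod alive.
oc apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: worker-tuning          # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: worker-tuning
  template:
    metadata:
      labels:
        name: worker-tuning
    spec:
      hostNetwork: true
      initContainers:
      - name: tune-sysctl
        image: alpine:3.19
        securityContext:
          privileged: true
        # Illustrative setting; replace with the tuning you actually need.
        command: ["sh", "-c", "sysctl -w net.core.somaxconn=32768"]
      containers:
      - name: pause
        image: alpine:3.19
        command: ["sh", "-c", "sleep infinity"]
EOF
```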