4.14 version information and update actions

Review information about version 4.14 of Red Hat OpenShift on IBM Cloud. This version is based on Kubernetes version 1.27.

Looking for general information about updating clusters, or information on a different version? See Red Hat OpenShift on IBM Cloud version information and the version 4.14 blog.

Kubernetes version 1.27 certification badge

Red Hat OpenShift on IBM Cloud is a Certified Kubernetes product for version 1.27 under the CNCF Kubernetes Software Conformance Certification program. Kubernetes® is a registered trademark of The Linux Foundation in the United States and other countries, and is used pursuant to a license from The Linux Foundation.

Release timeline

The following table includes the expected release timeline for version 4.14. You can use this information for planning purposes, such as estimating when the version might become unsupported.

Dates that are marked with a dagger (†) are tentative and subject to change.

Release history for Red Hat OpenShift on IBM Cloud version 4.14.
Supported? | Red Hat OpenShift / Kubernetes version | Release date | Unsupported date
Supported | 4.14 / 1.27 | 13 December 2023 | 08 January 2026†

Preparing to update

Review changes that you might need to make when you update a cluster to version 4.14. This information summarizes updates that are likely to have an impact on deployed apps when you update.

Update before master

The following table shows the actions that you must take before you update the cluster master.

Cluster master access for VPC clusters with a private service endpoint changed significantly from version 4.13. Before you update a cluster of this type, review the following information and consider which changes you must make. Also consider this information before you create a new 4.14 cluster with only a private service endpoint.

Changes to make before you update the master to Red Hat OpenShift 4.14
Type | Description
Unsupported: Deprecated and removed OpenShift features | For more information, review the OpenShift version 4.14 deprecated and removed features and Preparing to update to OpenShift Container Platform 4.14 for possible required actions.
Known OpenShift issues | For more information, review the OpenShift version 4.14 known issues for possible required actions.
Upgrade requires OpenShift cluster version currency | A cluster master upgrade is now cancelled if the OpenShift cluster version status indicates that an update is already in progress. See Why does OpenShift show the cluster version is not up to date? for details.
Upgrade requires resolution to OpenShift cluster version upgradeable conditions | A cluster master upgrade might be cancelled if the OpenShift cluster version Upgradeable status condition indicates that the cluster is not upgradeable. To determine whether the cluster is upgradeable, see Checking the Upgradeable status of your cluster. If the cluster is not in an upgradeable status, the condition information provides instructions that must be followed before upgrading. For more information, see Providing the administrator acknowledgment.
Pod security admission label synchronization changes | The highly privileged namespaces default, kube-public, and kube-system are exempt from pod security admission enforcement. That is, pod security admission label synchronization ensures that these namespaces enforce privileged pod security admission. You can disable pod security admission label synchronization for other namespaces by setting the security.openshift.io/scc.podSecurityLabelSync namespace label to false; an example command follows this table. For more information, see Understanding and managing pod security admission.
OpenVPN replaced by Konnectivity | Konnectivity replaces OpenVPN as the Kubernetes API server network proxy that secures communication between the OpenShift master and worker nodes. If your apps rely on OpenVPN for secure master-to-worker-node communication, update them to support Konnectivity.
Networking changes to VPC clusters | In version 4.13 and earlier, VPC clusters pull images from the IBM Cloud Container Registry through a private cloud service endpoint for the Container Registry. For version 4.14 and later, this network path is updated so that images are pulled through a VPE gateway instead of a private service endpoint. For update actions, see Networking changes for VPC clusters.
VPC cluster access changes | Cluster access changes for VPC clusters with only a private service endpoint that were introduced in version 4.13 have been reverted.
  • Previously: In VPC clusters with only a private service endpoint, if you wanted to access the cluster through the OpenShift console, run Terraform scripts, create a kubeconfig file with the oc login command, or make similar API calls that required OAuth to get a token, you accessed the private service endpoint, which was in the format https://cX00.private.us-south.containers.cloud.ibm.com:port. This setup required access only to the IBM Cloud private network 166.8.0.0/14.
  • Changes introduced in version 4.13: In 4.13, the default behavior for accessing the Red Hat OpenShift console, running the oc login command, or making similar API calls was changed to use the VPE gateway in the VPC.
  • Now in version 4.14: The change to the VPE gateway has been reverted and the previous default behavior is restored. The OpenShift console and OAuth now use the private service endpoint (https://cX00.private.us-south.containers.cloud.ibm.com:port) by default. You can set the OAuth access type for your cluster to either the private service endpoint or the VPE gateway. If you want to keep the behavior introduced in version 4.13, set the access type to use the VPE gateway. For more information, see Setting the OAuth access type for VPC clusters.
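
To disable pod security admission label synchronization for a namespace, as described in the pod security admission row of the preceding table, you can set the namespace label with a command similar to the following. The namespace name my-namespace is a placeholder for your own namespace.

oc label namespace my-namespace security.openshift.io/scc.podSecurityLabelSync=false --overwrite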

Checking the Upgradeable status of your cluster

Run the following command to check the Upgradeable status of your cluster.

oc get clusterversion version -o json | jq '.status.conditions[] | select(.type == "Upgradeable")'

Example output where the Upgradeable status is False.

{
  "lastTransitionTime": "2023-10-04T15:55:54Z",
  "message": "Kubernetes 1.27 and therefore OpenShift 4.14 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/6958395 for details and instructions.",
  "reason": "AdminAckRequired",
  "status": "False",
  "type": "Upgradeable"
}

If the Upgradeable status is False, the condition information provides instructions that must be followed before upgrading. For more information, see Providing the administrator acknowledgment.
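
For the Kubernetes 1.27 API removals shown in the example output, the knowledge article that is linked in the condition message describes providing the acknowledgment by patching the admin-acks config map in the openshift-config namespace. A command similar to the following is typically used; confirm the exact acknowledgment key in the knowledge article before you run it.

oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.13-kube-1.27-api-removals-in-4.14":"true"}}' --type=merge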

Networking changes for VPC clusters

In version 4.13 and earlier, VPC clusters pull images from the IBM Cloud Container Registry through a private cloud service endpoint for the Container Registry. For version 4.14 and later, this network path is updated so that images are pulled through a VPE gateway instead of a private service endpoint. This change affects all clusters in a VPC; when you create or update a single cluster in a VPC to version 4.14, all clusters in that VPC, regardless of their version, have their network path updated. Depending on the setup of your security groups, network ACLs, and network policies, you might need to make changes to ensure that your workers continue to successfully pull container images after updating to version 4.14.

The following image shows the new network path for version 4.14, which uses a VPE Gateway for Registry instead of the private service endpoint.

VPE Gateway for Registry in 4.14 and later clusters.

With the network path updates, creating or updating a VPC cluster to run at version 4.14 adds a new VPE gateway to your VPC. This VPE gateway is specifically used for pulling images from the IBM Cloud Container Registry and is assigned one IP address for each zone in the VPC that has at least one cluster worker. DNS entries are added to the entire VPC that resolve all icr.io domain names to the new VPE gateway IP addresses. Depending on how you have configured your network security components, you might need to act to ensure that connections to the new VPE are allowed.

What do I need to do?

The steps you need to take to ensure that your VPC cluster worker nodes continue pulling images from the Container Registry depend on your network security setup.

  • If you use the default network rules for all security groups, network ACLs, and network policies, you do not need to take any action.
  • If you have a customized network security setup that blocks certain TCP connections within the VPC, you must take additional actions before updating to or creating a new cluster at version 4.14. Make the adjustments in the following sections to ensure that connections to the new VPE Gateway for Registry are allowed.

Regardless of whether you need to take additional steps, if you keep other clusters in the VPC that do not run version 4.14, you must refresh the cluster master on those clusters. This refresh ensures that the correct updates are applied to the non-4.14 clusters so that traffic to the new VPE is allowed.
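
For example, you can refresh the cluster master of another cluster in the VPC with a command similar to the following, where mycluster is a placeholder for the cluster name or ID.

ibmcloud oc cluster master refresh --cluster mycluster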

I have custom security groups. What do I change?

The necessary allow rules are automatically added to the IBM-managed kube-<cluster_ID> cluster security group when you update to or create a cluster at version 4.14. However, if you created a VPC cluster that does NOT use the kube-<cluster_ID> cluster security group rules, you must make sure that the following security group rules are implemented to allow traffic to the VPE Gateway for Registry. If the rules are not already implemented in your custom setup, add them. Each of these rules must be created for each zone in the VPC and must specify the entire VPC address prefix range for the zone as the destination CIDR. To find the VPC address prefix range for each zone in the VPC, run ibmcloud is vpc-address-prefixes <vpc_name_or_id>.

Add the following rules to your custom security group.

Outbound security group rules to add for version 4.14
Rule type | Protocol | Destination IP or CIDR | Destination Port
Outbound | TCP | Entire VPC address prefix range | 443

To make these rules more restrictive, you can set the destination to the security group used by the VPE Gateway or you can specify the exact VPE Gateway reserved IP address. Note that these IP addresses can change if all cluster workers in a VPC are removed.
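
For example, you might add the outbound rule from the preceding table to a custom security group with a CLI command similar to the following, where my-custom-sg is a placeholder for your security group and 10.245.0.0/18 is a placeholder for the VPC address prefix range of one zone. Repeat the command for the address prefix range of each zone in the VPC.

ibmcloud is security-group-rule-add my-custom-sg outbound tcp --port-min 443 --port-max 443 --remote 10.245.0.0/18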

I have custom ACLs. What do I change?

If the VPC network ACLs that apply to your cluster workers have been customized to allow only certain egress and ingress traffic, make sure that the following ACL rules, or equivalent rules, are implemented to allow connections to and from the VPE Gateway for Registry. If the rules are not already implemented, add them. Each of these rules must be created for each zone in the VPC and must specify the entire VPC address prefix range for the zone as the source (for outbound rules) or destination (for inbound rules) CIDR. To find the VPC address prefix range for each zone in the VPC, run ibmcloud is vpc-address-prefixes <vpc_name_or_id>. The priority for each rule should be higher than any rule that would otherwise deny this traffic, such as a rule that denies all traffic.

Add the following rules to your custom ACLs.

Outbound and inbound ACL rules to add for version 4.14
Rule type | Protocol | Source IP or CIDR | Source Port | Destination IP or CIDR | Destination Port
Outbound/Allow | TCP | Entire VPC address prefix range | Any | Entire VPC address prefix range | 443
Inbound/Allow | TCP | Entire VPC address prefix range | 443 | Entire VPC address prefix range | Any
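
For example, you might add the ACL rules from the preceding table with CLI commands similar to the following, where my-acl is a placeholder for your network ACL and 10.245.0.0/18 is a placeholder for the VPC address prefix range of one zone. Repeat the commands for the address prefix range of each zone in the VPC.

ibmcloud is network-acl-rule-add my-acl allow outbound tcp 10.245.0.0/18 10.245.0.0/18 --destination-port-min 443 --destination-port-max 443
ibmcloud is network-acl-rule-add my-acl allow inbound tcp 10.245.0.0/18 10.245.0.0/18 --source-port-min 443 --source-port-max 443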

I have custom network policies. What do I change?

If you use Calico policies to restrict outbound connections from cluster workers, you must add the following policy rule to allow connections to the VPE Gateway for Registry. This policy must be created for each zone in the VPC and must specify the entire VPC address prefix range for the zone as the destination CIDR. To find the VPC address prefix range for each zone in the VPC, run ibmcloud is vpc-address-prefixes <vpc_name_or_id>.

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-vpe-gateway-registry
spec:
  egress:
  - action: Allow
    destination:
      nets:
      - <entire-vpc-address-prefix-range> # example: 10.245.0.0/16
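      # add one entry for the address prefix range of each zone in the VPC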
      ports:
      - 443
    protocol: TCP
    source: {}
  order: 500
  selector: ibm.role == 'worker_private'
  types:
  - Egress
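
After you replace the placeholder address prefix ranges, you can apply the policy with the calicoctl CLI, for example as follows, where allow-vpe-gateway-registry.yaml is the file that contains the policy.

calicoctl apply -f allow-vpe-gateway-registry.yaml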