IBM Cloud Docs

Migrating VPC worker nodes to RHCOS

Complete the following steps to migrate your VPC worker nodes to RHCOS.

The information on this page applies to VPC clusters only. It does not apply to Satellite or Classic clusters.

Beginning with cluster version 4.18:

  • Red Hat Enterprise Linux CoreOS (RHCOS) is the default operating system for VPC clusters.
  • RHEL worker nodes are deprecated for VPC clusters.
  • Support for RHEL worker nodes on VPC ends with the release of version 4.21.

Migrate your VPC clusters to use RHCOS worker nodes as soon as possible.

RHEL deprecation timeline

  • 4.18 release (23 May 2025): Beginning with cluster version 4.18, Red Hat Enterprise Linux CoreOS (RHCOS) is the default operating system and RHEL worker nodes are deprecated. RHEL workers remain available in version 4.18 only to complete the migration to RHCOS workers.
  • 4.21 release: Cluster version 4.21 supports only RHCOS worker nodes. Migrate your RHEL 9 worker nodes to RHCOS before updating to version 4.21.

Complete the following steps to migrate your worker nodes to RHCOS.

To migrate to RHCOS, you must provision a new RHCOS worker pool, then delete the previous RHEL worker pool. The new worker pool must reside in the same zones as the previous worker pool.

Step 1: Upgrade your cluster master

Run the following command to update the master.

ibmcloud ks cluster master update --cluster <clusterNameOrID> --version 4.18_openshift
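
Before you create the new worker pool, you can confirm that the master update finished. The following is a minimal check, assuming the ibmcloud CLI with the oc plug-in is installed and logged in; the exact field names in the output can vary by CLI version.

```shell
# Check the cluster's master status and version after the update.
# <clusterNameOrID> is a placeholder for your cluster.
ibmcloud oc cluster get --cluster <clusterNameOrID> | grep -iE 'version|master status'
```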

Step 2: Create a new RHCOS worker pool

  • Make sure to specify RHCOS as the --operating-system of the new pool.
  • Make sure that the number of nodes specified with the --size-per-zone option matches the number of workers per zone for the RHEL worker pool. To list a worker pool's zones and the number of workers per zone, run ibmcloud oc worker-pool get --worker-pool WORKER_POOL --cluster CLUSTER.
  • Make sure to include the --entitlement ocp_entitled option if you have a Cloud Pak entitlement.
  1. Run the ibmcloud oc worker-pool create command to create a new worker pool.

    Example command to create an RHCOS worker pool. For more information, see the CLI reference for the worker-pool create vpc-gen2 command and Adding worker nodes in VPC clusters.

    ibmcloud oc worker-pool create vpc-gen2 --name <worker_pool_name> --cluster <cluster_name_or_ID> --flavor <flavor> --size-per-zone <number_of_workers_per_zone> --operating-system RHCOS [--entitlement ocp_entitled]
    
  2. Verify that the worker pool is created and note the worker pool ID.

    ibmcloud oc worker-pool ls --cluster <cluster_name_or_ID>
    

    Example output

    Name            ID                              Flavor        OS        Workers 
    my_workerpool   aaaaa1a11a1aa1aaaaa111aa11      bx2.4x16      RHCOS     0 
    
  3. Add one or more zones to your worker pool. When you add a zone, the number of worker nodes that you specified with the --size-per-zone option is added to the zone, and these worker nodes run the RHCOS operating system. It is recommended that the zones you add to the RHCOS worker pool match the zones of the RHEL worker pool that you are replacing. To view the zones attached to a worker pool, run ibmcloud oc worker-pool zones --worker-pool WORKER_POOL --cluster CLUSTER. If you add zones that do not match those of the RHEL worker pool, make sure that your workloads are not impacted by moving to a new zone. Note that File and Block storage are not supported across zones.
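
The steps above can be sketched end to end as follows. This is a hedged example: all angle-bracket values are placeholders, and the zone add vpc-gen2 command requires the ID of the VPC subnet in each zone.

```shell
# 1. Inspect the RHEL pool you are replacing to note its zones and size per zone.
ibmcloud oc worker-pool get --worker-pool <rhel_pool> --cluster <cluster>

# 2. Create the replacement RHCOS pool with a matching size per zone.
ibmcloud oc worker-pool create vpc-gen2 --name <rhcos_pool> --cluster <cluster> \
  --flavor <flavor> --size-per-zone <workers_per_zone> --operating-system RHCOS

# 3. Attach each zone that the RHEL pool uses (repeat once per zone).
ibmcloud oc zone add vpc-gen2 --zone <zone> --subnet-id <subnet_id> \
  --cluster <cluster> --worker-pool <rhcos_pool>
```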

Step 3: Add worker nodes to your RHCOS worker pool

See Adding a zone to a worker pool in a VPC cluster.

Step 4: Migrate your workloads

If you have software-defined storage (SDS) solutions like OpenShift Data Foundation or Portworx, update your storage configurations to include the new worker nodes and verify your workloads before removing your RHEL worker nodes.

For more information about rescheduling workloads, see Safely Drain a Node in the Kubernetes docs or Understanding how to evacuate pods on nodes in the Red Hat OpenShift docs.

  • Migrate per pod by cordoning the node and deleting individual pods.

    oc adm cordon no/<nodeName>
    oc delete po -n <namespace> <podName>
    
  • Migrate per node by draining nodes. For more information, see Safely drain a node.

  • Migrate per worker pool by deleting your entire RHEL worker pool.

    ibmcloud ks worker-pool rm --cluster <clusterNameOrID> --worker-pool <workerPoolNameOrID>
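
As one possible per-node approach, the cordon-and-drain loop below sketches how the RHEL nodes could be drained in sequence. It assumes the oc CLI is logged in to the cluster and that the nodes carry the ibm-cloud.kubernetes.io/worker-pool-name label that IBM Cloud applies to worker nodes; verify the label on your nodes with oc get nodes --show-labels first.

```shell
# Drain every node in the RHEL worker pool, one node at a time.
# POOL is a placeholder for the RHEL worker pool name.
POOL=<rhel_pool_name>
for node in $(oc get nodes -l "ibm-cloud.kubernetes.io/worker-pool-name=${POOL}" -o name); do
  oc adm cordon "$node"
  oc adm drain "$node" --ignore-daemonsets --delete-emptydir-data --timeout=5m
done
```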
    

Step 5: Remove the RHEL worker nodes

Remove the worker pool that contains the RHEL workers.

Consider scaling down your RHEL worker pool and keeping it for several days before you remove it. This way, you can easily scale the worker pool back up if your workload experiences disruptions during the migration process. When you have determined that your workload is stable and functions normally, you can safely remove the RHEL worker pool.
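
One way to keep the pool around while taking its workers out of service is to resize it to zero, assuming your CLI version permits a size of 0 (placeholders as before).

```shell
# Scale the RHEL worker pool down to zero workers without deleting the pool.
ibmcloud oc worker-pool resize --cluster <cluster> --worker-pool <rhel_pool> --size-per-zone 0
```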

  1. List your worker pools and note the name of the worker pool you want to remove.
    ibmcloud oc worker-pool ls --cluster CLUSTER
    
  2. Run the command to remove the worker pool.
    ibmcloud oc worker-pool rm --worker-pool WORKER_POOL --cluster CLUSTER
    

Optional Step 6: Uninstall and reinstall the Object Storage plug-in

If you use the COS plug-in in your cluster, you must uninstall and reinstall it after migrating from RHEL to RHCOS because the kube-driver path differs between the two operating systems. Otherwise, you might see an error similar to Error: failed to mkdir /usr/libexec/kubernetes: mkdir /usr/libexec/kubernetes: read-only file system.
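
If the plug-in was installed with Helm, the reinstall might look like the sketch below. The release name, namespace, and the ibm-helm repository alias are assumptions; check helm list -A for the actual release in your cluster, and note that helm ibmc install requires IBM's ibmc Helm plug-in.

```shell
# Find the Object Storage plug-in release and its namespace.
helm list -A | grep object-storage

# Uninstall it, then reinstall from the IBM Helm repository.
helm uninstall ibm-object-storage-plugin -n <namespace>
helm repo update
helm ibmc install ibm-object-storage-plugin ibm-helm/ibm-object-storage-plugin --set license=true
```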