IBM Cloud Docs
Installing OpenShift Data Foundation on a private cluster

OpenShift Data Foundation is a highly available storage solution that you can use to manage persistent storage for your containerized workloads in Red Hat® OpenShift® on IBM Cloud® clusters.

This is an experimental feature that is available for evaluation and testing purposes and might change without notice.

In standard OpenShift Data Foundation configurations, the operators and drivers pull images from public container registries such as registry.redhat.io. However, in private-only, air-gapped clusters that have no access to the public internet, you must first mirror the ODF images to IBM Cloud Container Registry, and then configure your OpenShift Data Foundation deployment to pull those images from your private registry.

Because this approach involves manually mirroring images from registry.redhat.io to IBM Cloud Container Registry, you are responsible for repeating the mirroring process whenever patch updates or security fixes become available for OpenShift Data Foundation.

Prerequisites

Before you install OpenShift Data Foundation in your cluster, make sure that your environment meets the following prerequisites.

  1. Create a Red Hat account if you do not already have one. For more information on creating a Red Hat account, see Create a Red Hat login.
  2. Create or have access to a private cluster for OpenShift Data Foundation. If you already have a private cluster, make sure that it meets the following requirements.
    • Your cluster version must be at least version 4.11.
    • Your worker node operating system must be RHEL 8.
    • 1 Virtual Private Cloud (VPC) with 3 subnets (1 per zone) with no public gateway attached.
    • 1 Red Hat OpenShift on IBM Cloud cluster with at least 3 worker nodes spread evenly across 3 zones. The worker nodes must have a flavor of at least 16 vCPUs and 64 GB of memory (16x64).
  3. An IBM Cloud Container Registry instance with at least one namespace in the same region as your cluster. If you don't have an instance of IBM Cloud Container Registry, see Getting started with Container Registry to create one.
  4. Optional: If you plan to use Hyper Protect Crypto Services or Key Protect for encryption, create a virtual private endpoint gateway that allows access to your KMS instance. Make sure to bind at least 1 IP address from each subnet in your VPC to the VPE.

Create an additional subnet in your VPC and attach a Public Gateway

In addition to the 3 required subnets, create another subnet and attach a public gateway to it.

From the Subnets for VPC console, create an additional subnet in your VPC. Note that this subnet must be separate from the subnets your worker nodes are in and must have a public gateway attached.

Create a bastion host

From the Virtual Servers for VPC console, create a virtual server in the subnet that you created in the previous step. This virtual server is used as a bastion host to connect to your private cluster. The operating system for your bastion host must be at least Ubuntu 20.04 or RHEL 8.

Reserve a floating IP and bind it to your bastion host

From the Floating IPs console, reserve a floating IP in the zone where the subnet that you created earlier is located and bind it to your bastion host.

Install the CLI tools

  1. From the Red Hat OpenShift downloads page, download the OpenShift command-line interface (oc) and the oc-mirror plug-in.

  2. Copy the oc and oc-mirror tar files to your bastion host.

    scp oc.tar.gz oc-mirror.tar.gz root@BASTION-HOST-IP:/root
    
  3. Log in to your bastion host. For more information, see Connecting to your instance.

    ssh -i PATH-TO-KEY-FILE root@BASTION-HOST-IP
    
  4. Unpack each of the tar files into /usr/local/bin.

    tar -C /usr/local/bin -xvzf oc.tar.gz
    
    tar -C /usr/local/bin -xvzf oc-mirror.tar.gz
    
  5. Install the IBM Cloud CLI tools.

    curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
    
  6. Install the container-service and container-registry plug-ins.

    ibmcloud plugin install container-service
    
    ibmcloud plugin install container-registry
    
  7. Install Podman.

Log in to your cluster and disable the default OperatorHub sources

In a restricted network environment, you must have administrator access to disable the default catalogs. You can then configure OperatorHub to use local catalog sources.

While you are logged in to your bastion host, complete the following steps.

  1. Access your Red Hat OpenShift cluster.

  2. Disable the default remote OperatorHub sources.

    oc patch operatorhub cluster --type json -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
    

Log in to your container registries

  1. Log in to registry.redhat.io. If you don't have a Red Hat account, follow the steps to create one.

    podman login registry.redhat.io
    
  2. Log in to IBM Cloud Container Registry with the username iamapikey.

    podman login us.icr.io -u iamapikey -p IAM-API-KEY
    

Create a namespace in IBM Cloud Container Registry

  1. Set the IBM Cloud Container Registry region in your CLI. This region must be the same region your cluster is in.
    ibmcloud cr region-set us-south
    
  2. Create a namespace in IBM Cloud Container Registry. This namespace is used for the OpenShift Data Foundation images.
    ibmcloud cr namespace-add NAMESPACE
    

Mirror the Operator index to IBM Cloud Container Registry

  1. Copy the following ImageSetConfiguration and save it as a file called imageset.yaml.

    apiVersion: mirror.openshift.io/v1alpha2
    kind: ImageSetConfiguration
    storageConfig:
      registry:
        imageURL: us.icr.io/NAMESPACE/redhat-operator-index 
        skipTLS: false
    mirror:
      operators:
        - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
          packages:
            - name: local-storage-operator
            - name: ocs-operator
            - name: mcg-operator
            - name: odf-operator
            - name: odf-csi-addons-operator
    
  2. Mirror the OpenShift Data Foundation images from registry.redhat.io to your IBM Cloud Container Registry namespace.

    Before you run oc-mirror, make sure that the umask on your bastion host is set to 0022.

    oc-mirror --config=imageset.yaml docker://us.icr.io/NAMESPACE --dest-skip-tls
    

Create a secret to pull images from IBM Cloud Container Registry

  1. Find and record your unique Red Hat registry pull secret. For more information on how to find your Red Hat registry pull secret, see Red Hat Container Registry Authentication.

  2. Rename your Red Hat registry pull secret file to auth.json.

  3. Encode your IAM API key in base64.

    printf "iamapikey:IAM-API-KEY" | base64
    
  4. Add the following section to your auth.json file.

    {"auths": {"us.icr.io": {"auth": "BASE64-VALUE","email": "IBM-EMAIL"}}}
    
  5. Create the secret in the openshift-marketplace namespace.

    oc create secret generic odf-secret -n openshift-marketplace --from-file=.dockerconfigjson=auth.json --type=kubernetes.io/dockerconfigjson
    

Update the catalog source in your cluster

  1. After mirroring completes, oc-mirror creates a results directory called oc-mirror-workspace on your bastion host.

  2. Change directories into the oc-mirror-workspace directory.

    cd oc-mirror-workspace
    
  3. Look for a results-XXX directory and cd into it.

    ls
    
    cd results-XXX
    
  4. Look for the catalogSource-redhat-operator-index.yaml file.

    ls
    
  5. Edit the catalog source. Change the name to redhat-operators, add the odf-secret, and update the image to point to your registry.

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: redhat-operators # Make sure the name is redhat-operators
      namespace: openshift-marketplace
    spec:
      image: us.icr.io/NAMESPACE/redhat/redhat-operator-index:v4.10 # Add your registry
      sourceType: grpc
      displayName: Red Hat Operators
      publisher: Red Hat
      updateStrategy:
        registryPoll:
          interval: 10m0s
      secrets: # Add the odf-secret
        - "odf-secret" 
    
  6. Create the catalog source in your cluster.

    oc create -f catalogSource-redhat-operator-index.yaml
    
  7. Verify that the pods and packagemanifest are created in your cluster.

    oc get pods,packagemanifest -n openshift-marketplace
    

Update your image pull secret

  1. Extract the global pull secret to a file called .dockerconfigjson.

    oc extract secret/pull-secret -n openshift-config --to=.
    

    Example output

    .dockerconfigjson
    
  2. Print the contents of your auth.json file.

    cat auth.json
    
  3. Add the us.icr.io section from auth.json to your .dockerconfigjson file.

    {"auths": {"us.icr.io": {"auth": "BASE64-VALUE","email": "IBM-EMAIL"}}}
    
  4. Update the pull secret in the openshift-config namespace to use your .dockerconfigjson.

    oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson
    

Replace each worker node to pick up configuration changes

  1. Get a list of nodes in your cluster.
    ibmcloud oc worker ls -c CLUSTER
    
  2. Run the ibmcloud oc worker replace command to replace each worker node in your cluster.
    ibmcloud oc worker replace -c CLUSTER --worker WORKER-NODE
    

Update the registries.conf file on each node

After replacing each worker node, start a debug pod on each node and update the registries.conf file.

  1. List your worker nodes with oc get nodes.

    oc get nodes
    
  2. Start a debug pod on one of the nodes.

    oc debug node/NODE-NAME
    
  3. Change the root directory to /host so that you can access the host binaries.

    chroot /host
    
  4. Open the registries.conf file.

    vi /etc/containers/registries.conf
    
  5. Append the following image mappings to the registries.conf file.

    [[registry]]
      location = "registry.redhat.io/odf4"
      insecure = false
      blocked = false
      mirror-by-digest-only = false
      prefix = ""
    
      [[registry.mirror]]
      location = "us.icr.io/NAMESPACE/odf4"
      insecure = false
    
    [[registry]]
      location = "registry.redhat.io/openshift4"
      insecure = false
      blocked = false
      mirror-by-digest-only = false
      prefix = ""
    
      [[registry.mirror]]
      location = "us.icr.io/NAMESPACE/openshift4"
      insecure = false
    
    [[registry]]
      location = "registry.redhat.io/ocs4"
      insecure = false
      blocked = false
      mirror-by-digest-only = false
      prefix = ""
    
      [[registry.mirror]]
      location = "us.icr.io/NAMESPACE/ocs4"
      insecure = false
    
    [[registry]]
      location = "registry.redhat.io/rhceph"
      insecure = false
      blocked = false
      mirror-by-digest-only = false
      prefix = ""
    
      [[registry.mirror]]
      location = "us.icr.io/NAMESPACE/rhceph"
      insecure = false
    
    [[registry]]
      location = "registry.redhat.io/rhel8"
      insecure = false
      blocked = false
      mirror-by-digest-only = false
      prefix = ""
    
      [[registry.mirror]]
      location = "us.icr.io/NAMESPACE/rhel8"
      insecure = false
    
  6. For each of the registry mirrors that you added in the previous step (openshift4, ocs4, rhceph, rhel8), remove the duplicate entry in registries.conf that has an armada-master mirror location.

    Example rhel8 registry to remove from registries.conf.

    [[registry]]
      location = "registry.redhat.io/rhel8/postgresql-12"
      insecure = false
      blocked = false
      mirror-by-digest-only = false
      prefix = ""
    
      [[registry.mirror]]
      location = "us.icr.io/armada-master/rhel8-postgresql-12"
      insecure = false
    
  7. Repeat the previous steps to update the registries.conf file on each worker node.
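
Because the five [[registry]] stanzas in step 5 differ only in the repository name, you can generate them with a short script and paste the result into registries.conf on each node. A sketch; NAMESPACE is a placeholder for your IBM Cloud Container Registry namespace:

```shell
# Write the mirror stanzas for all five repositories to mirrors.conf.
NAMESPACE="my-namespace"   # placeholder: your registry namespace

for repo in odf4 openshift4 ocs4 rhceph rhel8; do
cat <<EOF
[[registry]]
  location = "registry.redhat.io/${repo}"
  insecure = false
  blocked = false
  mirror-by-digest-only = false
  prefix = ""

  [[registry.mirror]]
  location = "us.icr.io/${NAMESPACE}/${repo}"
  insecure = false

EOF
done > mirrors.conf
```

Append the contents of mirrors.conf to /etc/containers/registries.conf on each worker node.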

Reboot each worker node

  1. Reboot each worker node in your cluster one at a time.
    ibmcloud oc worker reboot -c CLUSTER -w WORKER
    
  2. Wait for each node to reach the Ready status before rebooting the next.

Install the OpenShift Data Foundation add-on from the console

To install ODF in your cluster, complete the following steps.

  1. Before you enable the add-on, review the change log for the latest version information. Note that the add-on supports n+1 cluster versions. For example, you can deploy version 4.10.0 of the add-on to an OCP 4.9 or 4.11 cluster. If you have a cluster version other than the default, you must install the add-on from the CLI and specify the --version option.
  2. Review the parameter reference.
  3. From the Red Hat OpenShift clusters console, select the cluster where you want to install the add-on.
  4. On the cluster Overview page, find the OpenShift Data Foundation card and click Install. The Install ODF panel opens.
  5. In the Install ODF panel, enter the configuration parameters that you want to use for your ODF deployment.
  6. Select either Essentials or Advanced as your billing plan.
  7. If you want to automatically discover the available storage devices on your worker nodes and use them in ODF, select Local disk discovery.
  8. In the Worker nodes field, enter the node names of the worker nodes where you want to deploy ODF. You must enter at least 3 worker node names. To find your node names, run the oc get nodes command in your cluster. Node names must be comma-separated with no spaces between names. For example: 10.240.0.24,10.240.0.26,10.240.0.25. Leave this field blank to deploy ODF on all worker nodes.
  9. In the Number of OSD disks required field, enter the number of OSD disks (app storage) to provision on each worker node.
  10. If you are re-enabling the add-on to upgrade the add-on version, select the Upgrade ODF option.
  11. If you want to encrypt the volumes used by the ODF system pods, select Enable cluster encryption.
  12. If you want to enable encryption on the OSD volumes (app storage), select Enable volume encryption.
    1. In the Instance name field, enter the name of your Hyper Protect Crypto Services instance. For example: Hyper-Protect-Crypto-Services-eugb.
    2. In the Instance ID field, enter your Hyper Protect Crypto Services instance ID. For example: d11a1a43-aa0a-40a3-aaa9-5aaa63147aaa.
    3. In the Secret name field, enter the name of the secret that you created by using your Hyper Protect Crypto Services credentials. For example: ibm-hpcs-secret.
    4. In the Base URL field, enter the public endpoint of your Hyper Protect Crypto Services instance. For example: https://api.eu-gb.hs-crypto.cloud.ibm.com:8389.
    5. In the Token URL field, enter https://iam.cloud.ibm.com/identity/token.

Verify OpenShift Data Foundation is running

  1. List the pods in the openshift-storage namespace and verify that they are in a Running state.

    oc get pods -n openshift-storage
    
  2. List the available storage classes.

    oc get sc