Creating confidential containers

This is an experimental feature that is available for evaluation and testing purposes and might change without notice.

Learn how to install and use confidential containers, which are also known as Kata Containers or OpenShift Sandboxed Containers, in a Red Hat OpenShift on IBM Cloud cluster.

What are confidential containers?

A confidential container provides a secure runtime environment for sensitive workloads, but allows you to continue to work within existing workflows.

The IBM Cloud implementation of confidential containers leverages peer pods to extend the functionality of Red Hat OpenShift pods into a separate VSI from the worker node. This extension creates a trusted execution environment beyond traditional Kubernetes and OpenShift.

Learn more:

  • Review the reasons you might use confidential containers.
  • Check out the FAQ for confidential containers.

Figure: Confidential containers architecture

Prerequisites

  • When you create or choose which Red Hat OpenShift on IBM Cloud cluster to use, the cluster must meet the requirements for confidential containers: a VPC cluster that runs a supported OpenShift version (see the version list in Step 3) in a region that offers TDX-capable VSIs.

  • If necessary, enable OperatorHub. Sometimes OperatorHub is disabled in a cluster for security reasons.

Step 1: Installing the operator

Install the OpenShift Sandboxed Containers Operator to manage the lifecycle of confidential containers in clusters.

  1. Open the cluster dashboard.

  2. Click OpenShift web console > Operators > OperatorHub.

  3. Search for OpenShift sandboxed containers Operator and click the tile.

  4. Click Install to get the supported and stable version of the OpenShift Sandboxed Containers Operator, version 1.10.3. Refer to Red Hat's Operator Update Information Checker for supported OpenShift versions.

  5. In the Install Operator window, you can keep the default selections and click Install.

  6. Wait for the installation to complete. Click the View installed Operators in Namespace openshift-sandboxed-containers-operator link and wait for the status to be Succeeded. While you wait, you can complete the next step to set up the CLI.

Step 2: Setting up the CLI

Before you begin, either complete these steps to set up the CLI or use the IBM Cloud Shell to run the commands.

  1. Install the IBM Cloud command line.

  2. Install the ks and the oc CLI tools.
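
    For example, the ks plug-in can be installed with the IBM Cloud plug-in manager; a sketch that assumes the plug-in's catalog name, container-service. The oc tool is downloaded separately from Red Hat.

    ibmcloud plugin install container-service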

  3. Log in to the IBM Cloud CLI.

    ibmcloud login --apikey <API_KEY> -g <RESOURCE_GROUP>
    
  4. List the clusters in the account and copy the ID of the cluster you want to use for the next step.

    ibmcloud ks cluster ls
    
  5. Run the config command.

    ibmcloud ks cluster config --cluster CLUSTER_ID --admin --endpoint link
    

    In your home directory, a .kube folder is created and information is stored for communicating with that cluster.

  6. Confirm that the oc commands run properly by viewing the details of the worker nodes in the cluster.

    oc get nodes
    
  7. Set the project to the operator's namespace so that you do not have to include the namespace in later commands.

    oc project openshift-sandboxed-containers-operator
    
  8. Optional: Explore the namespace.

    oc get all
    

    For example, in the list of pods, the controller manager that is named pod/controller-manager-<id> manages the microservices within the operator.

  9. Install the is CLI tool.
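
    For example, with the IBM Cloud plug-in manager (a sketch; vpc-infrastructure is the catalog name of the plug-in that provides the is commands):

    ibmcloud plugin install vpc-infrastructure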

Step 3: Importing the peer pod image

The OpenShift Sandboxed Containers Operator launches a special operating system inside the peer pod, and you must import that operating system image into your IBM Cloud account. This operating system is required to deploy a workload to a confidential container.

The peer pod image contains a full Red Hat Enterprise Linux (RHEL) 9.6 operating system with the software required to instantiate a container in a Confidential Virtual Machine (CVM).

All configurations and installed packages in the operating system keep the default Red Hat values. However, IBM Cloud VSIs require cloud-init to work, so the build scripts prevent cloud-init from being uninstalled after the podvm is built, which is a key difference from the source image.

Before you begin:

Validate version compatibility. The image is supported for the following versions.

  • OpenShift Sandboxed Containers Operator version 1.10.3
  • OpenShift versions 4.19, 4.18, 4.17, and 4.16 clusters

To import the peer pod image:

  1. To import the image from the CLI, run the image-create command. Otherwise, skip to the next step to import the image from the console.

    ibmcloud is image-create "IMAGE_NAME" --file cos://us-south/podvm-image/rhel9-podvm-latest.qcow2 --os-name red-9-amd64
    
  2. To import the image from the console instead, open the compute images.

  3. Click the Create + icon, choose a region with TDX-capable VSIs, and complete the required fields.

    a. For Image source, select Cloud Object Storage.

    b. Select the Locate by image file URL tab and for the Image URL, enter cos://us-south/podvm-image/rhel9-podvm-latest.qcow2.

    c. For the Operating system, select Red Hat Enterprise Linux > red-9-amd64.

    d. Optional: To create another confidential container from the API with the same details later, click the Get sample API call button and copy the Curl command.

    e. Click Create custom image.

  4. When the image is added to the Images list, click the image name and select the IDs tab. Then, note the Image ID to use later.

  5. Wait for the image's status to be Available.

    ibmcloud is image IMAGE_NAME 
    
  6. Repeat these steps when a new version of the image is available.

Step 4: Creating an API key or trusted profile

Confidential containers require a credential to instantiate the peer pod through kata-remote when a secure workload is launched. This credential must be either a valid API key or trusted profile with permissions to create a VSI in your account.

If you are testing out confidential containers, you can use an API key. If you are using Secrets Manager, you must set up a trusted profile.

  • API Key from the UI

    1. From the IBM Cloud dashboard, click Manage > Access (IAM) > API keys.

    2. Click Create.

    3. Save this key securely because it cannot be retrieved from this page later.

  • API Key from the CLI

    Run the following command, and save the output.

    ibmcloud iam api-key-create <key_name>
    
  • Trusted Profile

    1. Open the trusted profiles dashboard.

    2. Create a trusted profile and grant the profile the necessary permissions to create virtual servers from OpenShift.

      a. Create a trusted profile.

      ibmcloud iam trusted-profile-create <NAME> [--description <DESCRIPTION>]
      

      b. Allow the resources in openshift-sandboxed-containers-operator to use the trusted profile.

      ibmcloud iam trusted-profile-rule-create <NAME or ID of the trusted profile> --name <RULE_NAME> --type Profile-CR --conditions claim:namespace,operator:EQUALS,value:openshift-sandboxed-containers-operator --cr-type ROKS_SA
      

      c. Allow access to the VPC Infrastructure Services (is).

      To allow access for every resource in the account:

      ibmcloud iam trusted-profile-policy-create <NAME or ID of the trusted profile>  --roles Editor,Writer --service-name is
      

      To allow access for a specific resource group:

      ibmcloud iam trusted-profile-policy-create <NAME or ID of the trusted profile> --roles Viewer [--resource-group-id <resource group>]
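
      To verify the profile and its access policies, you can run commands like this sketch:

      ibmcloud iam trusted-profile <NAME or ID of the trusted profile>
      ibmcloud iam trusted-profile-policies <NAME or ID of the trusted profile>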
      

Step 5: Creating an SSH key (Optional)

In test clusters, you might find it helpful to have an SSH key ready to troubleshoot why something isn't starting and to view logs. In production clusters, you might not want the SSH functionality enabled.

  1. Click Infrastructure > Compute > SSH Keys.

  2. Create an SSH key and note the SSH key ID.
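
    Alternatively, you can create the key from the CLI. This sketch assumes an existing public key file and a hypothetical key name:

    ibmcloud is key-create my-osc-ssh-key @~/.ssh/id_ed25519.pub
    ibmcloud is keys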

Step 6: Configuring confidential containers

After the Operator is installed, create ConfigMaps to allow Kata to handle workloads in the IBM Cloud account.

  1. Create a directory to store the files.

    mkdir <directory-name>
    
  2. Switch to the directory.

    cd <directory-name>
    
  3. Copy the following environment variables for the API key, trusted profile ID, cluster name, PodVM image ID, SSH key ID, and VPC ID (optional).

    Optional: You can store them in a shell script in the new directory to set them again later. Example: <directory-name>/env-vars.sh

    a. Gather the values for the following variables and update the values in the script.

    • For the CLUSTER_NAME, open the details for the cluster in the clusters list and copy the name.
    • Optional: For the VPC_ID, in the Cluster details section of the same page, you can click the vpc name to open the details for the VPC and copy the VPC ID field.
    • For the PODVM_IMAGE_ID, use the image ID you saved for the peer pod image.
    • If you are using an API key, you can remove the IBMCLOUD_TRUSTED_PROFILE_ID line.
    • If you are using a trusted profile, you can remove the IBMCLOUD_API_KEY line.
    • If you did not set an SSH key, you can remove the SSH_KEY_ID line.
    export IBMCLOUD_API_KEY=<your API key>
    export IBMCLOUD_TRUSTED_PROFILE_ID="<your Trusted Profile ID>"
    export CLUSTER_NAME=<cluster-name-region-flavor>
    export PODVM_IMAGE_ID=<PodVM image ID provided by IBM or a custom-built image>
    export SSH_KEY_ID=<SSH key ID to be used by the peer pod VSI>
    export VPC_ID=<Optional: the VPC that your OpenShift cluster is in>
    

    b. If you stored the variables in a shell script, run it. Example:

    sh env-vars.sh
    
  4. Run the command to create the feature-gates.yaml ConfigMap.

    cat > feature-gates.yaml <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: osc-feature-gates
      namespace: openshift-sandboxed-containers-operator
    data:
      deploymentMode: "DaemonSetFallback" # or DaemonSet to force it
      confidential: "true"
      layeredImageDeployment: "false"
    EOF
    
  5. Apply the ConfigMap.

    oc apply -f feature-gates.yaml
    
  6. Run the command to create the peer-pods-secret.yaml. Remove any optional environment variables from the stringData section that you did not set.

    cat > peer-pods-secret.yaml <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: peer-pods-secret
      namespace: openshift-sandboxed-containers-operator
    type: Opaque
    stringData:
      # either IBMCLOUD_API_KEY or IBMCLOUD_IAM_PROFILE_ID must be set
      # if you specify both, the IBMCLOUD_API_KEY is used
      # IBMCLOUD_IAM_ENDPOINT is optional
      IBMCLOUD_API_KEY: "$IBMCLOUD_API_KEY"
      IBMCLOUD_IAM_ENDPOINT: "https://iam.cloud.ibm.com/identity/token"
      IBMCLOUD_IAM_PROFILE_ID: "$IBMCLOUD_TRUSTED_PROFILE_ID"
    EOF
    
  7. Apply the secret to the cluster.

    oc apply -f peer-pods-secret.yaml
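
    To confirm which credential keys were stored without printing their values, you can run a check like this sketch:

    oc get secret peer-pods-secret -o json | jq '.data | keys'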
    
  8. Run the command to create the peer-pods-cm.yaml ConfigMap. Remove any optional environment variables from the data section that you did not set.

    cat > peer-pods-cm.yaml <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: peer-pods-cm
      namespace: openshift-sandboxed-containers-operator
    data:
      CLOUD_PROVIDER: "ibmcloud"
      IBMCLOUD_PODVM_IMAGE_ID: "$PODVM_IMAGE_ID"
      IBMCLOUD_PODVM_INSTANCE_PROFILE_LIST: "bx3dc-2x10"
      IBMCLOUD_PODVM_INSTANCE_PROFILE_NAME: "bx3dc-2x10"
      IBMCLOUD_RESOURCE_GROUP_ID: "$(ibmcloud is vpc "$VPC_ID" -json | jq -r .resource_group.id)"
      IBMCLOUD_SSH_KEY_ID: "$SSH_KEY_ID"
      IBMCLOUD_VPC_ENDPOINT: "https://us-east.iaas.cloud.ibm.com/v1"
      IBMCLOUD_VPC_ID: "$VPC_ID"
      IBMCLOUD_VPC_SG_ID: "$(ibmcloud ks security-group ls --cluster $CLUSTER_NAME -json | jq -r '.[] | select(.type == "cluster") | .id')"
      CLOUD_CONFIG_VERIFY: "false"
      CRI_RUNTIME_ENDPOINT: "/run/cri-runtime/containerd.sock"
      ENABLE_CLOUD_PROVIDER_EXTERNAL_PLUGIN: "false"
      VXLAN_PORT: ""
      TUNNEL_TYPE: ""
      INITDATA: ""
    EOF
    
  9. Apply the ConfigMap.

    oc apply -f peer-pods-cm.yaml
    
  10. Run the command to create the kata-runtime-settings.yaml KataConfig.

    cat > kata-runtime-settings.yaml <<EOF
    apiVersion: kataconfiguration.openshift.io/v1
    kind: KataConfig
    metadata:
      name: kata-runtime-settings
      namespace: openshift-sandboxed-containers-operator
    spec:
      enablePeerPods: true
      logLevel: info
      #checkNodeEligibility: true
      #kataConfigPoolSelector:
      #  matchLabels:
      #    <label_key>: '<label_value>'
    EOF
    
  11. Apply the KataConfig.

    oc apply -f kata-runtime-settings.yaml
    
  12. As Kata is installed and the daemonsets are started, you can monitor the progress.

    • In the web console, you can open the installed operator in the openshift-sandboxed-containers-operator project to see that the KataConfig installation is in progress.

    • You can run the following command to watch the labels update with the current state of the installation.

      oc get nodes --output yaml | egrep "kata-ds-rpm-install|ibm-cloud.kubernetes.io/worker-id"
      

      Possible states:

      • waiting_to_install: The Kata installation is queued on the node.
      • installing: The Kata installation is in progress.
      • installed: Kata is installed successfully on the node.
      • waiting_for_reboot: The node must be rebooted to complete the installation or uninstallation.
      • waiting_to_uninstall: The Kata uninstallation is queued on the node.
      • uninstalling: The Kata uninstallation is in progress.
      • uninstalled: Kata is uninstalled successfully from the node.
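
    • You can also inspect the KataConfig resource directly; a sketch:

      oc describe kataconfig kata-runtime-settings
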
  13. When the labels are updated and in the waiting_for_reboot state, reboot each worker node one at a time.

When you run oc get nodes and each worker node is in the installed state, the installation is complete.
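
You can also confirm that the kata-remote runtime class that peer pods use was created; a sketch:

    oc get runtimeclass kata-remote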

Step 7: Configuring a trust authority

Attestation is a critical part of confidential containers: you must validate supply chain security and verify that the code running in the container wasn't modified. You can leverage an Intel TDX chip and the key-broker-service protocol. The podvm image already includes working TDX driver code and a kbs_client. However, you must configure INITDATA with the trustee details.

  1. Select a trustee. There are many trustee options for confidential containers.

  2. If you selected a VM-hosted trustee for development purposes, complete these configuration steps.

    a. Insert the trustee IP address into the following script and run it to set the INITDATA variable.

    export KBS_SERVICE_ENDPOINT="https://REPLACE_WITH_TRUSTEE_IP:8080"
    export INITDATA=$(cat <<EOF | gzip | base64 -w0
    algorithm = "sha256"
    version = "0.1.0"
    
    [data]
    "aa.toml" = '''
    [token_configs]
    [token_configs.coco_as]
    url = "$KBS_SERVICE_ENDPOINT"
    
    [token_configs.kbs]
    url = "$KBS_SERVICE_ENDPOINT"
    '''
    
    "cdh.toml"  = '''
    socket = 'unix:///run/confidential-containers/cdh.sock'
    credentials = []
    
    [kbc]
    name = "cc_kbc"
    url = "$KBS_SERVICE_ENDPOINT"
    '''
    EOF
    )
    

    b. Verify the $INITDATA environment variable.

    echo $INITDATA
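
    Because the value is gzip-compressed and base64-encoded, you can also decode it to confirm the TOML contents; a sketch:

    echo "$INITDATA" | base64 -d | gunzip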
    

    c. Add the value for the variable to the peer-pods-cm.yaml ConfigMap in the openshift-sandboxed-containers-operator namespace.
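
    For example, you can merge the value into the ConfigMap with oc patch; a sketch:

    oc patch configmap peer-pods-cm -n openshift-sandboxed-containers-operator --type merge -p "{\"data\":{\"INITDATA\":\"$INITDATA\"}}"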

    d. Restart the osc-caa-ds daemonset in the openshift-sandboxed-containers-operator namespace. This Cloud API Adapter daemonset is used to communicate with IBM Cloud.

    oc rollout restart daemonset.apps/osc-caa-ds
    

    e. Run the following command to view the pods. For each osc-caa-ds-<id> pod, look at the Age of each pod to verify that the pod was restarted.

    oc get pods
    

    If a pod did not restart, delete the pod to re-create it.

    oc delete pod/osc-caa-ds-<id>
    

    View the pods again.

    oc get pods
    

    f. Repeat these steps for each workload entry of INITDATA.

    g. Alternatively, you can apply the INITDATA value to an individual pod as an annotation so that only that container is configured to use the trustee. This annotation can be helpful when you test new trustees or verify that changes to INITDATA don't break any confidential containers.

    Example annotation:

    apiVersion: v1
    kind: Pod
    metadata:
      name: mypod
      annotations:
        io.katacontainers.config.hypervisor.cc_init_data: $INITDATA
    spec:
      runtimeClassName: kata-remote
    

Step 8: Running a confidential container workload

After all labels are updated to installed, deploy a workload by using the kata-remote runtime class name in a pod.yaml file. You can use the Hello World example as a test workload in a confidential container.

  1. Create and apply the pod definition.

    oc apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      labels:
        app: helloworld
        version: v1
      name: helloworld
    spec:
      containers:
      - name: helloworld
        image: docker.io/istio/examples-helloworld-v1:1.0
        ports:
        - containerPort: 5000
      runtimeClassName: kata-remote
    EOF
    
  2. Monitor the deployment in the Virtual Servers list. When the VSI is created, it displays the Running state. If the VSI appears to be stuck in the Starting state, check the logs for issues.

    a. Get the pod names.

    oc get pods
    

    b. Get the logs for one of the Cloud API Adapter pods and look for errors.

    oc logs osc-caa-ds-<id>
    
  3. Verify the pod by running the following command.

    oc describe pod/helloworld
    
  4. To check attestation, exec into the container by running the following command.

    oc exec -it helloworld -- bash
    

    Then, run the following curl command to get information from the trustee.

    curl http://127.0.0.1:8006/cdh/resource/default/kbsres1/key1
    

    When you are finished, you can exit the container.

    exit
    
  5. If there are issues, review the logs in the openshift-sandboxed-containers-operator namespace.

    • Controller managed pod logs:

      oc logs pod/controller-manager-<UNIQUE_ID>
      
    • Cloud API Adapter pod logs:

      oc logs pod/osc-caa-ds-<UNIQUE_ID>
      
    • Application logs: the location depends on where your workload writes its logs.

Your confidential containers setup is now complete! Still need help? Check out the troubleshooting documentation.

Removing workloads and tools

Completing these steps in the wrong order could leave resources behind that you are billed for, such as a VSI.

Removing workloads

  1. Delete the workloads from the cluster that use confidential containers.

    a. Show all pods.

    oc get pods -A -o json | jq '.items[] | select(.spec.runtimeClassName == "kata-remote") | "\(.metadata.namespace)/\(.metadata.name)"'
    

    b. Delete the pods, which deletes the VSIs deployed for them.

    oc delete -f pod.yaml
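
    If you no longer have the original manifest, you can delete each pod by name instead; a sketch:

    oc delete pod <name> -n <namespace>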
    

    If you changed the previous ConfigMaps to invalid configurations, or if the credentials were removed, the software cannot complete the API calls to remove the resources, and you must remove them manually. Use manual removal only in this scenario because it might require you to create a new OpenShift cluster or replace workers.

  2. Delete the Kata configuration. Deleting the kata-runtime-settings.yaml resources removes Kata from the workers, which you can watch as the labels are updated.
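
    For example, assuming you still have the kata-runtime-settings.yaml file from Step 6:

    oc delete -f kata-runtime-settings.yaml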

    a. Monitor the node labels until they are in the waiting_for_reboot state.

    b. Reboot the workers one at a time to finish uninstalling Kata on each worker node.

    c. If other workloads are running on this cluster, cordon the worker, drain it, and then restart it.
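
    For example, where <node> is a worker name from oc get nodes (a sketch):

    oc adm cordon <node>
    oc adm drain <node> --ignore-daemonsets --delete-emptydir-data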

    d. After rebooting, wait until the kata-runtime-settings.yaml deletion is finished before you continue to the next step. Some processes must finish uninstalling after the reboot.

    Do not continue if kata-runtime-settings.yaml resources fail to delete.

  3. Delete the ConfigMaps.

Uninstalling the operator

After you have removed the workloads, you can uninstall the OpenShift Sandboxed Containers Operator.

  1. From OperatorHub, uninstall the operator.

  2. Confirm there are no remaining resources in the openshift-sandboxed-containers-operator namespace.
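
    For example, mirroring the earlier exploration step (a sketch):

    oc get all -n openshift-sandboxed-containers-operator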

  3. Delete the namespace.
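
    For example (a sketch):

    oc delete project openshift-sandboxed-containers-operator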