This is an experimental feature that is available for evaluation and testing purposes and might change without notice.
Creating confidential containers
Learn how to install and use confidential containers, which are also known as Kata Containers or OpenShift Sandboxed Containers, in a Red Hat OpenShift on IBM Cloud cluster.
What are confidential containers?
A confidential container provides a secure runtime environment for sensitive workloads while allowing you to continue to work within existing workflows.
The IBM Cloud implementation of confidential containers uses peer pods to extend the functionality of Red Hat OpenShift pods into a VSI that is separate from the worker node. This extension creates a trusted execution environment beyond traditional Kubernetes and OpenShift.
Learn more:
- Review the reasons you might use confidential containers.
- Check out the FAQ for confidential containers.
Prerequisites
- When you create or choose which Red Hat OpenShift on IBM Cloud cluster to use, the cluster must meet the following requirements:
  - The cluster must be in a region that supports TDX Virtual Server Instances (VSIs).
  - The cluster must have a public interface, or you must be able to connect to its environment through a VPN.
  - To permit the cluster to talk to any VSI that is created with confidential containers, the cluster must have a security group that is named kube-CLUSTER_ID. (See the sketch after this list for one way to check.)
- If necessary, enable OperatorHub. Sometimes OperatorHub is disabled in a cluster for security reasons.
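To confirm that the security group exists, you can list the cluster's security groups from the CLI. This is a minimal sketch that reuses the ibmcloud ks security-group ls command that appears later in this topic; replace CLUSTER_ID with your cluster ID. The output should include a group named kube-CLUSTER_ID.
# Print the cluster-type security group for the cluster.
ibmcloud ks security-group ls --cluster CLUSTER_ID -json | jq '.[] | select(.type == "cluster")'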
Step 1: Installing the operator
Install the OpenShift Sandboxed Containers Operator to manage the lifecycle of confidential containers in clusters.
- Open the cluster dashboard.
- Click OpenShift web console > Operators > OperatorHub.
- Search for OpenShift sandboxed containers Operator and click the tile.
- Click Install to get the supported and stable version of the OpenShift Sandboxed Containers Operator, version 1.10.3. Refer to Red Hat's Operator Update Information Checker for supported OpenShift versions.
- In the Install Operator window, you can keep the default selections and click Install.
- Wait for the installation to complete. Click the View installed Operators in Namespace openshift-sandboxed-containers-operator link and wait for the status to be Succeeded. While you wait, you can complete the next step to set up the CLI.
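After you set up the CLI in the next step, you can also check the installation from the command line. This is a minimal sketch that uses the standard oc command for listing ClusterServiceVersions; the exact CSV name varies by operator version.
# The PHASE column reports Succeeded when the operator installation is complete.
oc get csv -n openshift-sandboxed-containers-operator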
Step 2: Setting up the CLI
Before you begin, you can either complete these steps to set up the CLI or use the IBM Cloud Shell to run commands.
- Install the IBM Cloud CLI.
- Log in to the IBM Cloud CLI.
ibmcloud login --apikey <API_KEY> -g <RESOURCE_GROUP>
- List the clusters in the account and copy the ID of the cluster that you want to use for the next step.
ibmcloud ks cluster ls
- Run the config command.
ibmcloud ks cluster config --cluster CLUSTER_ID --admin --endpoint link
In your home directory, a .kube folder is created and information is stored for communicating with that cluster.
- Confirm that the oc commands run properly by viewing the details of the worker nodes in the cluster.
oc get nodes
- Set the namespace project so that you do not have to include the namespace in later commands.
oc project openshift-sandboxed-containers-operator
- Optional: Explore the namespace.
oc get all
For example, in the list of pods, the controller manager that is named pod/controller-manager-<id> manages the microservices within the operator.
Step 3: Importing the peer pod image
The OpenShift Sandboxed Containers Operator launches a special operating system inside the peer pod that must be imported into your IBM Cloud account. This operating system is required to deploy a workload to a confidential container.
The peer pod image contains a full Red Hat Enterprise Linux (RHEL) 9.6 operating system with the software required to instantiate a container in a Confidential Virtual Machine (CVM).
All configurations and installed packages in the operating system remain the default Red Hat values. However, IBM Cloud VSIs require cloud-init to work. Therefore, the build scripts prevent cloud-init from being uninstalled when the podvm build finishes, which is a key difference from the source image.
Before you begin:
Validate version compatibility. The image is supported for the following versions.
- OpenShift Sandboxed Containers Operator version 1.10.3
- OpenShift versions 4.19, 4.18, 4.17, and 4.16 clusters
To import the peer pod image:
- Run the image-create command.
ibmcloud is image-create "IMAGE_NAME" --file cos://us-south/podvm-image/rhel9-podvm-latest.qcow2 --os-name red-9-amd64
Alternatively, you can import the image from the console:
- Open the compute images.
- Click the Create + icon, choose a region with TDX-capable VSIs, and complete the required fields.
a. For Image source, select Cloud Object Storage.
b. Select the Locate by image file URL tab and for the Image URL, enter cos://us-south/podvm-image/rhel9-podvm-latest.qcow2.
c. For the Operating system, select Red Hat Enterprise Linux > red-9-amd64.
d. Optional: To create another confidential container from the API with the same details later, click the Get sample API call button and copy the Curl command.
e. Click Create custom image.
- When the image is added to the Images list, click the image name and select the IDs tab. Then, note the Image ID to use later.
- Wait for the image's status to be Available. (See the polling sketch after this list.)
ibmcloud is image IMAGE_NAME
- Repeat these steps when a new version of the image is available.
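If you script the import, you can poll until the image is ready instead of checking the status manually. This is a minimal sketch that reuses the ibmcloud is image command from the previous step together with the -json and jq pattern used later in this topic; replace IMAGE_NAME with your image name.
# Poll every 30 seconds until the custom image reports the available status.
while [ "$(ibmcloud is image IMAGE_NAME -json | jq -r .status)" != "available" ]; do
  echo "Waiting for the image to become available..."
  sleep 30
done
echo "Image is available."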
Step 4: Creating an API key or trusted profile
Confidential containers require a credential to instantiate the peer pod through kata-remote when a secure workload is launched. This credential must be either a valid API key or trusted profile with permissions to create a VSI in your account.
If you are testing out confidential containers, you can use an API key. If you are using Secrets Manager, you must set up a trusted profile.
- API Key from the UI
  - From the IBM Cloud dashboard, click Manage > Access (IAM) > API keys.
  - Click Create.
  - Save this key securely because it cannot be retrieved from this page later.
- API Key from the CLI
  Run the following command, and save the output.
  ibmcloud iam api-key-create <key_name>
- Trusted Profile
  - Open the trusted profiles dashboard.
  - Create a trusted profile and grant the profile the necessary permissions to create virtual servers from OpenShift.
    a. Create a trusted profile.
    ibmcloud iam trusted-profile-create <NAME> [--description <DESCRIPTION>]
    b. Allow the resources in openshift-sandboxed-containers-operator to use the trusted profile.
    ibmcloud iam trusted-profile-rule-create <NAME or ID of the trusted profile> --name <RULE_NAME> --type Profile-CR --conditions claim:namespace,operator:EQUALS,value:openshift-sandboxed-containers-operator --cr-type ROKS_SA
    c. Allow access to the VPC Infrastructure Services (is).
    To allow access for every resource in the account:
    ibmcloud iam trusted-profile-policy-create <NAME or ID of the trusted profile> --roles Editor,Writer --service-name is
    To allow access for a specific resource group:
    ibmcloud iam trusted-profile-policy-create <NAME or ID of the trusted profile> --roles Viewer [--resource-group-id <resource group>]
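To verify the trusted profile setup, you can list the profiles in the account. A minimal sketch using the IAM CLI; the profile name is the one you chose in the create step.
# List all trusted profiles and confirm that the new profile exists.
ibmcloud iam trusted-profiles
# Show the details of the new profile, including its ID for later steps.
ibmcloud iam trusted-profile <NAME>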
Step 5: Creating an SSH key (Optional)
In test clusters, you might find it helpful to have an SSH key ready to troubleshoot why something isn't starting and to view logs. In production clusters, you might not want the SSH functionality enabled.
- Click Infrastructure > Compute > SSH Keys.
- Create an SSH key and note the SSH key ID.
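Alternatively, you can create the SSH key and capture its ID from the CLI. This is a minimal sketch; it assumes that you have an existing public key at ~/.ssh/id_rsa.pub, and the key name my-podvm-debug-key is only an example.
# Create a VPC SSH key from an existing public key file and print its ID.
ibmcloud is key-create my-podvm-debug-key @~/.ssh/id_rsa.pub -json | jq -r .id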
Step 6: Configuring confidential containers
After the Operator is installed, create ConfigMaps to allow Kata to handle workloads in the IBM Cloud account.
- Create a directory to store the files.
mkdir <directory-name>
- Switch to the directory.
cd <directory-name>
- Copy the following environment variables for the API key, trusted profile ID, cluster name, PodVM image ID, SSH key ID, and VPC ID (optional). Optional: You can store them in a shell script in the new directory to set them again later. Example: <directory-name>/env-vars.sh
a. Gather the values for the following variables and update the values in the script.
  - For the CLUSTER_NAME, open the details for the cluster in the clusters list and copy the name.
  - Optional: For the VPC_ID, in the Cluster details section of the same page, you can click the VPC name to open the details for the VPC and copy the VPC ID field.
  - For the PODVM_IMAGE_ID, use the image ID that you saved for the peer pod image.
  - If you are using an API key, you can remove the IBMCLOUD_TRUSTED_PROFILE_ID line.
  - If you are using a trusted profile, you can remove the IBMCLOUD_API_KEY line.
  - If you did not set an SSH key, you can remove the SSH_KEY_ID line.
export IBMCLOUD_API_KEY=<your API key>
export IBMCLOUD_TRUSTED_PROFILE_ID="<your Trusted Profile ID>"
export CLUSTER_NAME=<cluster-name-region-flavor>
export PODVM_IMAGE_ID=<PodVM image ID provided by IBM or a custom-built image>
export SSH_KEY_ID=<SSH key ID to be used by the peer pod VSI>
export VPC_ID=<Optional: the VPC that your OpenShift cluster is in>
b. If you stored the variables in a shell script, run it. Example:
sh env-vars.sh
- Run the command to create the feature-gates.yaml ConfigMap.
cat > feature-gates.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: osc-feature-gates
  namespace: openshift-sandboxed-containers-operator
data:
  deploymentMode: "DaemonSetFallback" # or DaemonSet to force it
  confidential: "true"
  layeredImageDeployment: "false"
EOF
- Apply the ConfigMap.
oc apply -f feature-gates.yaml
- Run the command to create the peer-pods-secret.yaml. Remove any optional environment variables from the stringData section that you did not set.
cat > peer-pods-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: peer-pods-secret
  namespace: openshift-sandboxed-containers-operator
type: Opaque
stringData:
  # either IBMCLOUD_API_KEY or IBMCLOUD_IAM_PROFILE_ID must be set
  # if you specify both the IBMCLOUD_API_KEY will be used
  # IBMCLOUD_IAM_ENDPOINT is optional
  IBMCLOUD_API_KEY: "$IBMCLOUD_API_KEY"
  IBMCLOUD_IAM_ENDPOINT: "https://iam.cloud.ibm.com/identity/token"
  IBMCLOUD_IAM_PROFILE_ID: "$IBMCLOUD_TRUSTED_PROFILE_ID"
EOF
- Apply the secret to the cluster.
oc apply -f peer-pods-secret.yaml
- Run the command to create the peer-pods-cm.yaml ConfigMap. Remove any optional environment variables from the data section that you did not set.
cat > peer-pods-cm.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "ibmcloud"
  IBMCLOUD_PODVM_IMAGE_ID: "$PODVM_IMAGE_ID"
  IBMCLOUD_PODVM_INSTANCE_PROFILE_LIST: "bx3dc-2x10"
  IBMCLOUD_PODVM_INSTANCE_PROFILE_NAME: "bx3dc-2x10"
  IBMCLOUD_RESOURCE_GROUP_ID: "$(ibmcloud is vpc "$VPC_ID" -json | jq -r .resource_group.id)"
  IBMCLOUD_SSH_KEY_ID: "$SSH_KEY_ID"
  IBMCLOUD_VPC_ENDPOINT: "https://us-east.iaas.cloud.ibm.com/v1"
  IBMCLOUD_VPC_ID: "$VPC_ID"
  IBMCLOUD_VPC_SG_ID: "$(ibmcloud ks security-group ls --cluster $CLUSTER_NAME -json | jq -r '.[] | select(.type == "cluster") | .id')"
  CLOUD_CONFIG_VERIFY: "false"
  CRI_RUNTIME_ENDPOINT: "/run/cri-runtime/containerd.sock"
  ENABLE_CLOUD_PROVIDER_EXTERNAL_PLUGIN: "false"
  VXLAN_PORT: ""
  TUNNEL_TYPE: ""
  INITDATA: ""
EOF
- Apply the ConfigMap.
oc apply -f peer-pods-cm.yaml
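Because the heredoc expands the environment variables and the embedded CLI calls when the file is created, it is worth confirming that the rendered values are correct. A minimal sketch:
# Print the ConfigMap as stored in the cluster and check that the image ID,
# VPC ID, and security group ID were substituted correctly.
oc get configmap peer-pods-cm -o yaml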
- Run the command to create the kata-runtime-settings.yaml KataConfig.
cat > kata-runtime-settings.yaml <<EOF
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: kata-runtime-settings
  namespace: openshift-sandboxed-containers-operator
spec:
  enablePeerPods: true
  logLevel: info
  #checkNodeEligibility: true
  #kataConfigPoolSelector:
  #  matchLabels:
  #    <label_key>: '<label_value>'
EOF
- Apply the KataConfig.
oc apply -f kata-runtime-settings.yaml
- As Kata is installed and the daemonsets are started, you can monitor the progress.
  - You can look in the OperatorHub openshift-sandboxed-containers-operator project to see that the KataConfig is in progress.
  - You can run the following command to watch the labels update with the current state of the installation.
oc get nodes --output yaml | egrep "kata-ds-rpm-install|ibm-cloud.kubernetes.io/worker-id"
Possible states:
  - waiting_to_install: The Kata installation is queued on the node.
  - installing: The Kata installation is in progress.
  - installed: Kata is installed successfully on the node.
  - waiting_for_reboot: The node must be rebooted to complete the installation or uninstallation.
  - waiting_to_uninstall: The Kata uninstallation is queued on the node.
  - uninstalling: The Kata uninstallation is in progress.
  - uninstalled: Kata is uninstalled successfully from the node.
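You can also inspect the KataConfig resource directly for the overall installation status. A minimal sketch, assuming the resource name from the earlier step:
# Show the full KataConfig, including its status conditions.
oc get kataconfig kata-runtime-settings -o yaml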
- When the labels are updated and in the waiting_for_reboot state, reboot each worker node one at a time (see the sketch that follows for one way to do this).
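One way to reboot the workers is with the clusters CLI. This is a minimal sketch; it assumes the CLUSTER_NAME variable from Step 6 and that a rolling reboot is acceptable for your workloads.
# List the workers to get their IDs.
ibmcloud ks worker ls --cluster $CLUSTER_NAME
# Reboot one worker, wait for its node to return to Ready, then repeat for the next one.
ibmcloud ks worker reboot --cluster $CLUSTER_NAME --worker <WORKER_ID>
oc get nodes --watch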
When you run oc get nodes and each worker node is in the installed state, the installation is complete.
Step 7: Configuring a trust authority
Attestation is a critical part of confidential containers: it validates supply chain code security by verifying that the code running in the container wasn't modified. To perform attestation, you can leverage an Intel TDX chip and the key-broker-service (KBS) protocol.
The podvm image already includes working TDX driver code and a kbs_client. However, you must configure the INITDATA with the trustee details.
- Select a trustee. There are many options for a trustee in confidential containers.
  - For development purposes, you might launch a VM and run a trustee on it.
  - For a production-level service, you might use Intel Trust Authority.
If you selected the VM for a trustee for development purposes, complete these configuration steps.
a. Insert the trustee IP address into the following script and run it to set the
INITDATAvariable.export KBS_SERVICE_ENDPOINT="https://REPLACE_WITH_TRUSTEE_IP:8080" export INITDATA=$(cat <<EOF | gzip | base64 -w0 algorithm = "sha256" version = "0.1.0" [data] "aa.toml" = ''' [token_configs] [token_configs.coco_as] url = "$KBS_SERVICE_ENDPOINT" [token_configs.kbs] url = "$KBS_SERVICE_ENDPOINT" ''' "cdh.toml" = ''' socket = 'unix:///run/confidential-containers/cdh.sock' credentials = [] [kbc] name = "cc_kbc" url = "$KBS_SERVICE_ENDPOINT" ''' EOF )b. Verify the
$INITDATAenvironment variable.echo $INITDATAc. Add the value for the variable to the
peer-pods-cm.yamlConfigMap in theopenshift-sandboxed-containers-operatornamespace.d. Restart the
osc-caa-dsdaemonset in theopenshift-sandboxed-containers-operatornamespace. This Cloud API Adapter daemonset is used to communicate with IBM Cloud.oc rollout restart daemonset.apps/osc-caa-dse. Run the following command to view the pods. For each
osc-caa-ds-<id>pod, look at the Age of each pod to verify that the pod was restarted.oc get podsIf a pod did not restart, delete the pod to re-create it.
oc delete pod/osc-caa-ds-<id>View the pods again.
oc get podsf. Repeat these steps for each workload entry of
INITDATA.g. The
INITDATAvalue can be applied to an individual container as an annotation and the container that starts is configured to use the trustee. This annotation can be helpful when testing new trustees or making sure that changes toINITDATAdon’t break any confidential containers.Example annotation:
apiVersion: v1 kind: Pod metadata: name: mypod annotations: io.katacontainers.config.hypervisor.cc_init_data: $INITDATA spec: runtimeClassName: kata-remote
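For step c, one way to add the INITDATA value to the live ConfigMap without editing the file by hand is the oc set data command. This is a minimal sketch; it assumes that the $INITDATA variable is still set in your shell.
# Patch the ConfigMap with the generated INITDATA value, then confirm the change.
oc set data configmap/peer-pods-cm INITDATA="$INITDATA"
oc get configmap peer-pods-cm -o yaml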
Step 8: Running a confidential container workload
After all labels are updated to installed, deploy a workload by using the kata-remote runtime class name in a pod.yaml file. You can use the Hello World example as a test workload in a confidential container.
- Create a pod.yaml file and apply it. The following command applies the definition directly from stdin.
oc apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: helloworld
    version: v1
  name: helloworld
spec:
  containers:
  - name: helloworld
    image: docker.io/istio/examples-helloworld-v1:1.0
    ports:
    - containerPort: 5000
  runtimeClassName: kata-remote
EOF
- Monitor the deployment in the Virtual Servers list. When the VSI is created, it displays the Running state. If the VSI appears to be stuck in the Starting state, check the logs for issues.
a. Get the pod names.
oc get pods
b. Get the logs for one of the Cloud API Adapter pods and look for errors.
oc logs osc-caa-ds-<id>
- Verify the pod by running the following command.
oc describe pod/helloworld
- To check attestation, exec into the container by running the following command.
oc exec -it helloworld -- bash
Then, run the following curl command to get information from the trustee.
curl http://127.0.0.1:8006/cdh/resource/default/kbsres1/key1
When you are finished, you can exit the container.
exit
- If there are issues, review the logs in the openshift-sandboxed-containers-operator namespace.
  - Controller manager pod logs:
oc logs pod/controller-manager-<UNIQUE_ID>
  - Cloud API Adapter pod logs:
oc logs pod/osc-caa-ds-<UNIQUE_ID>
  - Application logs: The location depends on where the application writes its logs.
Your confidential containers setup is now complete! Still need help? Check out the troubleshooting topic.
Removing workloads and tools
Completing these steps in the wrong order could leave resources behind that you are billed for, such as a VSI.
Removing workloads
- Delete the workloads from the cluster that use confidential containers.
a. Show all pods that use the kata-remote runtime class.
oc get pods -A -o json | jq '.items[] | select(.spec.runtimeClassName == "kata-remote") | "\(.metadata.namespace)/\(.metadata.name)"'
b. Delete the pods, which also deletes the VSIs deployed for them (see the sketch after this step for one way to delete them all).
oc delete -f pod.yaml
If you changed the previous ConfigMaps to invalid configurations or the credentials were removed, the software cannot complete the API calls to remove the resources, and you must remove them manually. Use manual removal only in this scenario because it might require you to create a new OpenShift cluster or replace workers.
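To remove every confidential container workload at once, you can combine the jq filter from step a with a delete loop. This is a minimal sketch; it assumes that the credentials are still valid so that the VSIs are cleaned up automatically.
# Delete all pods that run with the kata-remote runtime class, in every namespace.
oc get pods -A -o json \
  | jq -r '.items[] | select(.spec.runtimeClassName == "kata-remote") | "\(.metadata.namespace) \(.metadata.name)"' \
  | while read -r ns name; do
      oc delete pod "$name" -n "$ns"
    done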
- Delete the Kata configuration. Deleting the kata-runtime-settings.yaml resources removes Kata from the workers, which you can watch as the labels are updated.
a. Monitor the node labels until they are in the waiting_for_reboot state.
b. Reboot the workers one at a time to finish uninstalling Kata on each worker node.
c. If other workloads are running on this cluster, cordon the worker, drain it, and then restart it.
d. Wait until the kata-runtime-settings.yaml deletion is finished after rebooting before you continue to the next step. Some processes must finish uninstalling after the reboot.
Do not continue if the kata-runtime-settings.yaml resources fail to delete.
- Delete the ConfigMaps.
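If you kept the YAML files from Step 6, you can delete the Kata configuration (the previous step) and the ConfigMaps with the same files that you applied. A minimal sketch, run from the directory that contains the files:
# Delete the KataConfig first and wait for the uninstall to finish,
# then remove the ConfigMaps and the secret.
oc delete -f kata-runtime-settings.yaml
oc delete -f peer-pods-cm.yaml
oc delete -f peer-pods-secret.yaml
oc delete -f feature-gates.yaml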
Uninstalling the operator
After you have removed the workloads, you can uninstall the OpenShift Sandboxed Containers Operator.
- From OperatorHub, uninstall the operator.
- Confirm there are no remaining resources in the openshift-sandboxed-containers-operator namespace.
- Delete the namespace.
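A minimal sketch of these final checks from the CLI:
# Confirm that nothing is left in the namespace, then delete it.
oc get all -n openshift-sandboxed-containers-operator
oc delete namespace openshift-sandboxed-containers-operator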