Setting up an image registry
Red Hat® OpenShift® on IBM Cloud® clusters include an internal registry to build, deploy, and manage container images locally. If you want a private registry that manages and controls access to images across your enterprise, you can also set up your cluster to use IBM Cloud® Container Registry.
Choosing an image registry solution
Your container images must be stored in a container registry that your cluster can access to deploy apps into your cluster. You can choose to use the built-in registry of your Red Hat OpenShift cluster, a private registry with access restricted to select users, or a public registry. Review the following table to choose the best option for your use case.
- Internal Red Hat OpenShift Container Registry (OCR)

  Your cluster is set up with the internal Red Hat OpenShift Container Registry so that Red Hat OpenShift can automatically build, deploy, and manage your application lifecycle from within the cluster. Images are stored in a backing IBM Cloud classic file storage device that is provisioned at cluster creation time. If you need more storage, you can resize the device. Use cases:
  - Red Hat OpenShift-native image stream, build, and app deployment process on a per-cluster basis.
  - Sharing images across all projects in the cluster, with access that is controlled through RBAC roles.
  - Integrating the internal registry with other Red Hat products, such as CloudForms, for extended features like vulnerability scanning.
  - Exposing the internal registry with a route so that users can pull images from the registry over the public network.
  - Setting up the internal registry to pull images from, or push images to, a private registry such as IBM Cloud Container Registry.

  For more information, see Using the internal registry.
- Private registry

  Private registries are a good choice to protect your images from unauthorized users. Private registries must be set up by the cluster administrator to make sure that access, storage quotas, image trust, and other features work as intended. By default, your Red Hat OpenShift clusters are integrated with the private IBM Cloud Container Registry through image pull secrets in the default project. IBM Cloud Container Registry is a highly available, multi-tenant private registry where you can store your own images. You can also pull IBM-provided images from the global icr.io registry, and licensed software from the entitled registry. With IBM Cloud Container Registry, you can manage images for multiple clusters with integration with IBM Cloud IAM and billing. Advantages of using IBM Cloud Container Registry with the internal registry:
  - Local image caching for faster builds via the internal registry.
  - Deployments in other projects can see the image stream, so you don't have to copy pull secrets to each project.
  - Sharing images across multiple clusters without pushing images to multiple registries.
  - Automatic vulnerability scanning of images.
  - Controlling access through IBM Cloud IAM policies and separate regional registries.
  - Retaining images without requiring storage space in your cluster or an attached storage device. You can also set retention policies to limit the number of stored images so that they don't take up too much space.
  - VPC infrastructure: Using the private registry service endpoint so that clusters that use only a private cloud service endpoint can still access the registry.
  - Setting storage and image pull traffic quotas to better control image storage, usage, and billing.
  - Pulling licensed IBM content from the entitled registry.

  To get started, see the following topics:
  - Getting started with IBM Cloud Container Registry.
  - Importing images from IBM Cloud Container Registry into the internal registry image stream.
  - Using IBM Cloud Container Registry.
- Public registry

  Public registries such as Docker Hub are a way to share images across teams, companies, clusters, or cloud providers. Some public registries might also offer a private registry component. Use cases:
  - Pushing and pulling images over the public network.
  - Quickly testing a container across cloud providers.
  - Workloads that don't need enterprise-grade features such as vulnerability scanning or access management.

  For more information, see the public registry's documentation, such as Quay or Docker Hub.
Storing images in the internal registry
Red Hat OpenShift clusters have an internal registry by default. How the images in the internal registry are backed up varies depending on the infrastructure provider of your Red Hat OpenShift on IBM Cloud cluster.
- Classic clusters
- Your Red Hat OpenShift cluster is set up by default with an internal registry that uses File Storage for Classic as the backing storage. When you delete the cluster, the internal registry and its images are also deleted. If you want to persist your images, consider using a private registry such as IBM Cloud Container Registry, backing up your images to persistent storage such as Object Storage, or creating a separate, stand-alone Red Hat OpenShift container registry (OCR) cluster. For more information, see the Red Hat OpenShift docs.
- VPC clusters
- The internal registry of your Red Hat OpenShift cluster backs up your images to a bucket that is automatically created in an IBM Cloud Object Storage instance in your account. Any data that is stored in the object storage bucket remains even if you delete the cluster.
- Classic, VPC, or Satellite clusters
- You can optionally set up the internal registry to store data in the emptyDir of the worker node where the internal registry pod runs. Keep in mind that this data is not persistent: if the pod or worker node is restarted, the stored data is deleted and unrecoverable. You might store the images locally in the emptyDir to increase performance if you regularly build containers from large images.
Backing up your internal image registry to IBM Cloud Object Storage
Virtual Private Cloud
Your images in your Red Hat OpenShift cluster internal registry are automatically backed up to an IBM Cloud Object Storage bucket. Any data that is stored in the object storage bucket remains even if you delete the cluster.
However, if the bucket fails to be created when you create your cluster, you must manually create a bucket and set up your cluster to use it. In the meantime, the internal registry uses an emptyDir Kubernetes volume that stores your container images on the secondary disk of your worker node. emptyDir volumes are not considered persistent, highly available storage, and if you delete the pods that use the image, the image is automatically deleted.
To manually create a bucket for your internal registry, see Cluster create error about cloud object storage bucket.
Storing images in the internal registry in Classic clusters
By default, your Red Hat OpenShift cluster's internal registry uses an IBM Cloud File Storage for Classic volume to store the registry images. You can review the default size of the storage volume, or update the volume size.
Viewing volume details
To view volume details including the storage class and size, you can describe the persistent volume claim.
oc describe pvc -n openshift-image-registry image-registry-storage
Changing volume details
If your registry needs more storage for your images, you can resize the file storage volume. For more information, see Changing the size and IOPS of your existing storage device.
When you resize the volume in your IBM Cloud infrastructure account, the attached PVC description is not updated. Instead, you can log in to the openshift-image-registry pod that uses the registry-backing PVC to verify that the volume is resized.
Storing images in the worker node empty directory
You might store the internal registry images locally in the emptyDir
of the worker node, such as a bare metal worker node, to increase performance if you build containers from large images regularly.
Keep in mind that this data is not persistent, and if the pod or worker node is restarted, the stored data is deleted and unrecoverable.
- Access your Red Hat OpenShift cluster.
- Update the image registry operator configmap to set the storage to use the emptyDir of the worker node. Note that updating the configmap to use emptyDir does not remove the image registry's original PVC.
oc patch configs.imageregistry.operator.openshift.io/cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
- If the image registry operator management state is set to Unmanaged, such as in Satellite clusters, update the management state to Managed. Then, the operator updates the internal registry pod.
oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"managementState":"Managed"}}'
- Get the details of the internal registry pod so that you can verify your updates.
- Check that the image-registry pod is running, and that a node-ca pod runs on each worker node in the cluster.
oc get pods -n openshift-image-registry
Example output
NAME                                               READY   STATUS    RESTARTS   AGE
cluster-image-registry-operator-695bf78ffc-zvkhd   2/2     Running   0          33m
image-registry-6774598589-65cnx                    1/1     Running   0          112s
node-ca-gg66r                                      1/1     Running   0          113s
node-ca-n8jpq                                      1/1     Running   0          113s
node-ca-p2d7j                                      1/1     Running   0          113s
- Get the public IP address of the node that the image-registry pod runs on.
oc describe pod -n openshift-image-registry <image-registry-pod> | grep Node
Example output
Node: 169.xx.xxx.xxx/169.xx.xxx.xxx
If the worker node IP address is private, run ibmcloud oc worker ls -c <cluster> | grep <private_IP> and note the corresponding public IP address.
- Get the UID of the image-registry pod from the metadata.uid section in the pod YAML (not the UID of the replica set in the metadata.ownerReferences.uid section).
oc get pod -n openshift-image-registry <image-registry-pod> -o yaml
Example output
apiVersion: v1
kind: Pod
metadata:
  uid: e8d7718d-b0bd-47e2-9aaa-05f3a608fd9b
...
- Verify that the internal registry stores data in the emptyDir of the worker node.
  - Access the registry directly from the cluster, by using the worker node that you previously retrieved, and follow the steps to push a test image to the internal registry. To complete these steps in the Red Hat OpenShift documentation, you need the podman CLI tool. Your worker nodes might not have this CLI tool by default. See the Podman installation guide for the available RHEL versions.
  - Navigate to the internal registry pod folder that saves to the emptyDir. For <pod_uid>, use the pod UID that you retrieved earlier.
cd /var/lib/kubelet/pods/<pod_uid>/volumes/kubernetes.io~empty-dir/registry-storage/docker/registry/v2/repositories/openshift
  - Verify that your image is in the repository directory.
ls
Example output
<myimage>
nginx
...
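The pod name and UID lookups above can also be scripted. The following sketch runs against sample `oc get pods` output so that it is self-contained; the pod names and UID are hypothetical stand-ins for the values that you retrieve from your own cluster.

```shell
# Sample `oc get pods -n openshift-image-registry` output; in a real cluster,
# capture the live command output instead. Pod names are hypothetical.
pods='NAME                                               READY   STATUS    RESTARTS   AGE
cluster-image-registry-operator-695bf78ffc-zvkhd   2/2     Running   0          33m
image-registry-6774598589-65cnx                    1/1     Running   0          112s
node-ca-gg66r                                      1/1     Running   0          113s'

# Extract the image-registry pod name (first field of the matching row).
pod=$(printf '%s\n' "$pods" | awk '$1 ~ /^image-registry-/ {print $1; exit}')
echo "$pod"

# In a real cluster, you could read the UID directly:
#   uid=$(oc get pod -n openshift-image-registry "$pod" -o jsonpath='{.metadata.uid}')
uid="e8d7718d-b0bd-47e2-9aaa-05f3a608fd9b"

# The emptyDir path on the worker node where the registry stores its data:
echo "/var/lib/kubelet/pods/$uid/volumes/kubernetes.io~empty-dir/registry-storage"
```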
Removing the internal image registry
Virtual Private Cloud
If you don't want to use the internal image registry, you can complete the following steps to remove it.
- Save a copy of your internal registry configurations.
oc get configs.imageregistry.operator.openshift.io cluster -o yaml > configs.yaml
- Run the following patch command to change the management state of the image registry to Removed.
kubectl patch configs.imageregistry.operator.openshift.io cluster -p '{"spec":{"managementState":"Removed"}}' --type='merge'
- After you change the management state, the image registry service and deployment are removed from the openshift-image-registry namespace in your cluster. You can run the following commands to verify that they were removed. Note that only the image registry deployment and service are removed; the image registry operator deployment and service remain.
oc get deployment -n openshift-image-registry
oc get svc -n openshift-image-registry
Setting up a secure external route for the internal registry
By default, your Red Hat OpenShift cluster has an internal registry that is available through a service with an internal IP address. If you want to make the internal registry available on the public network, you can set up a secure re-encrypt route. For example, you might set up your cluster's internal registry to act as a public registry for deployments in other projects or clusters.
Before you begin:
- Confirm that you have the Manager IBM Cloud IAM service access role for the cluster.
- Make sure that your cluster has public network connectivity to expose the internal registry with a public route.
- Install Docker on your local machine.
- Access your Red Hat OpenShift cluster.
To use the internal registry, set up a public route to access the registry. Then, create an image pull secret that includes the credentials to access the registry so that deployments in other projects can pull images from this registry.
- From the openshift-image-registry project, make sure that the image-registry service exists for the internal registry.
oc get svc -n openshift-image-registry
Example output
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
image-registry            ClusterIP   172.21.xxx.xxx   <none>        5000/TCP    36d
image-registry-operator   ClusterIP   None             <none>        60000/TCP   36d
- Create a secured route for the image-registry service that uses reencrypt TLS termination. With re-encryption, the router terminates the TLS connection with a certificate, and then re-encrypts the connection to the internal registry with a different certificate. With this approach, the full path of the connection between the user and the internal registry is encrypted. To provide your own custom domain name, include the --hostname option.
oc create route reencrypt --service=image-registry -n openshift-image-registry
- Retrieve the hostname (HOST/PORT) and the PORT that were assigned to the image-registry route.
oc get route image-registry -n openshift-image-registry
Example output
NAME             HOST/PORT                                                                                              PATH   SERVICES         PORT       TERMINATION   WILDCARD
image-registry   image-registry-openshift-image-registry.<cluster_name>-<ID_string>.<region>.containers.appdomain.cloud        image-registry   5000-tcp   reencrypt     None
- Edit the route to set the load balancing strategy to source so that the same client IP address reaches the same server, as in a passthrough route setup. You can set the strategy by adding the annotation haproxy.router.openshift.io/balance: source in the metadata.annotations section. You can edit the configuration file from the Red Hat OpenShift Application Console or on the command line by running the following command.
oc edit route image-registry -n openshift-image-registry
Add the annotation.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/balance: source
...
- If corporate network policies prevent access from your local system to public endpoints through proxies or firewalls, allow access to the route subdomain that you created for the internal registry.
- Log in to the internal registry by using the route as the hostname.
docker login -u $(oc whoami) -p $(oc whoami -t) image-registry-openshift-image-registry.<cluster_name>-<ID_string>.<region>.containers.appdomain.cloud
- Now that you are logged in, try pushing a sample hello-world app to the internal registry.
  - Pull the hello-world image from Docker Hub, or build an image on your local machine.
docker pull hello-world
  - Tag the local image with the hostname of your internal registry, the project that you want to deploy the image to, and the image name and tag.
docker tag hello-world:latest image-registry-openshift-image-registry.<cluster_name>-<ID_string>.<region>.containers.appdomain.cloud/<project>/<image_name>:<tag>
  - Push the image to your cluster's internal registry.
docker push image-registry-openshift-image-registry.<cluster_name>-<ID_string>.<region>.containers.appdomain.cloud/<project>/<image_name>:<tag>
  - Verify that the image was added to the Red Hat OpenShift image stream.
oc get imagestream
Example output
NAME          DOCKER REPO                                                             TAGS     UPDATED
hello-world   image-registry.openshift-image-registry.svc:5000/default/hello-world   latest   7 hours ago
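The tag and push commands above all use the same fully qualified reference. A minimal sketch of how that reference is composed, using a hypothetical route hostname and the sample hello-world image (all values are placeholders for your own route, project, and image):

```shell
# Placeholders only; substitute your route hostname, project, image, and tag.
route="image-registry-openshift-image-registry.mycluster-abc123-0000.us-south.containers.appdomain.cloud"
project="default"
image="hello-world"
tag="latest"

# The reference has the shape <route>/<project>/<image>:<tag>.
ref="$route/$project/$image:$tag"
echo "$ref"
# Then, in a real session: docker tag hello-world:latest "$ref" && docker push "$ref"
```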
- To enable deployments in your project to pull images from the internal registry, create an image pull secret in your project that holds the credentials to access your internal registry. Then, add the image pull secret to the default service account for each project.
- List the image pull secrets that the default service account uses, and note the secret that begins with default-dockercfg.
oc describe sa default
Example output
...
Image pull secrets:  all-icr-io
                     default-dockercfg-mpcn4
...
- Get the encoded secret information from the data field of the configuration file.
oc get secret <default-dockercfg-name> -o yaml
Example output
apiVersion: v1
data:
  .dockercfg: ey...=
- Decode the value of the data field. On Linux, use base64 -d instead of base64 -D.
echo "<ey...=>" | base64 -D
Example output
{"172.21.xxx.xxx:5000":{"username":"serviceaccount","password":"eyJ...
- Create a new image pull secret for the internal registry.
  - secret_name: Give your image pull secret a name, such as internal-registry.
  - --namespace: Enter the project to create the image pull secret in, such as default.
  - --docker-server: Instead of the internal service IP address (172.21.xxx.xxx:5000), enter the hostname of the image-registry route with the port (image-registry-openshift-image-registry.<cluster_name>-<ID_string>.<region>.containers.appdomain.cloud:5000).
  - --docker-username: Copy the "username" from the previous image pull secret, such as serviceaccount.
  - --docker-password: Copy the "password" from the previous image pull secret.
  - --docker-email: If you have one, enter your Docker email address. If you don't, enter a fictional email address, such as a@b.c. This email is required to create a Kubernetes secret, but is not used after creation.
oc create secret docker-registry internal-registry --namespace default --docker-server image-registry-openshift-image-registry.<cluster_name>-<ID_string>.<region>.containers.appdomain.cloud:5000 --docker-username serviceaccount --docker-password <eyJ...> --docker-email a@b.c
- Add the image pull secret to the default service account of your project.
oc patch -n <namespace_name> serviceaccount/default --type='json' -p='[{"op":"add","path":"/imagePullSecrets/-","value":{"name":"<image_pull_secret_name>"}}]'
- Repeat these steps for each project from which you want to pull images from the internal registry.
- Now that you set up the internal registry with an accessible route, you can log in to the registry, push images, and pull images. For more information, see the Red Hat OpenShift documentation.
Importing images from IBM Cloud Container Registry into the internal registry image stream
By default, your Red Hat OpenShift on IBM Cloud cluster is set up to pull images from the remote, private IBM Cloud Container Registry icr.io
domains in the default
project. You can import an image from IBM Cloud Container
Registry into the internal registry of your Red Hat OpenShift cluster by tagging the image as an image stream.
With this setup, you can deploy apps from the image by using the local cache of the internal registry, which can make your app deployments build faster. Also, deployments in other projects can refer to the image stream so that you don't have
to create image pull secret credentials to IBM Cloud Container Registry in each project.
If you update your image in IBM Cloud Container Registry, the image is not pulled automatically into the internal registry of your Red Hat OpenShift cluster. Instead, configure periodic importing, or repeat these steps to tag the image. Depending on the image pull policy that you use in your deployment, you might also need to restart your deployment.
Want to learn more about how builds, image streams, and the internal registry work together? Read the Red Hat OpenShift docs, or check out this blog on managing container images.
- Switch to the default project to pull your image into the image stream. The default project is already set up with credentials to access the icr.io registries.
oc project default
- List the available images in your IBM Cloud Container Registry. Note the Repository and Tag of the image that you want to pull into the internal registry of your Red Hat OpenShift cluster.
ibmcloud cr images
- Tag the image to pull it from your IBM Cloud Container Registry namespace into the internal registry as an image stream. For more information, see the Red Hat OpenShift documentation or run oc tag --help.
oc tag <region>.icr.io/<namespace>/<image>:<tag> default/<image>:<tag> --reference-policy=local [--scheduled]
  - <region>.icr.io/<namespace>/<image>:<tag>: Use the Repository and Tag information that you previously retrieved to fill out the IBM Cloud Container Registry region, namespace, image, and tag name of the image that you want to pull.
  - default/<image>:<tag>: Enter the information for the internal image stream that you create from the tagged IBM Cloud Container Registry image. You create this image stream in the default project, which is also where the image stream is created if you don't specify a project. The values for <image>:<tag> typically match the values that you previously retrieved.
  - --reference-policy=local: Set this value to local so that a copy of the image from IBM Cloud Container Registry is imported into the local cache of the internal registry and made available to the cluster's projects as an image stream. If you don't include this value, the image stream refers back to IBM Cloud Container Registry when you use it in your deployments and therefore requires credentials in the project.
  - --scheduled: Include this optional flag to set up periodic importing of the image from IBM Cloud Container Registry into the internal registry. The default frequency is 15 minutes. For more information, see the Red Hat OpenShift documentation.
- Verify that your image stream is created.
oc get imagestreams
- Verify that the image stream successfully pulled the image from IBM Cloud Container Registry. In the output, check that the latest tagged-from image matches your <region>.icr.io/<namespace>/<image>@<digest> image.
oc describe is/<imagestream>
Example output
Name:                  <imagestream>
Namespace:             default
Created:               2 days ago
Labels:                <none>
Annotations:           openshift.io/image.dockerRepositoryCheck=2020-03-31T09:41:36Z
Image Repository:      image-registry.openshift-image-registry.svc:5000/default/ant1
Image Lookup:          local=false
Unique Images:         1
Tags:                  1

latest
  tagged from <region>.icr.io/<namespace>/<image>:<tag>
    * <region>.icr.io/<namespace>/<image>@<digest>
      2 days ago
Now, your developers can use the image stream in an app deployment. The image successfully builds from the locally pulled image in the internal registry. You don't need to set up an image pull secret in the project to IBM Cloud Container Registry, because the image stream is local to the cluster.
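As a worked illustration of the tag command's shape, the following sketch composes a full oc tag invocation from its parts. The region, namespace, image, and tag values are hypothetical placeholders, and the command is echoed rather than run against a cluster.

```shell
# Placeholders only; substitute the Repository and Tag values that you
# retrieved with `ibmcloud cr images`.
region_registry="us.icr.io"
namespace="mynamespace"
image="myapp"
tag="1.0"

# Compose the import command: source image in IBM Cloud Container Registry,
# destination image stream in the default project, imported into the local
# cache (--reference-policy=local) and re-imported periodically (--scheduled).
cmd="oc tag $region_registry/$namespace/$image:$tag default/$image:$tag --reference-policy=local --scheduled"
echo "$cmd"   # run this command against your cluster to create the image stream
```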
Setting up builds in the internal registry to push images to IBM Cloud Container Registry
When you create a build in your Red Hat OpenShift on IBM Cloud cluster, you can set up the internal registry to push the image to your external repository in IBM Cloud Container Registry. By default, the image pull secret in the default
project of your cluster only has
read access to pull images from IBM Cloud Container Registry. To push images, you must add a secret with write access.
- Switch to the default project.
oc project default
- Follow the steps to set up an IBM Cloud IAM API key with the Reader and Writer service access roles to pull images from and push images to your icr.io registries.
Keep in mind that any user with access to the project can use this secret to push images to your private registry. You might want to set up logging and monitoring tools so that you can observe who does what actions in your cluster.
- Repeat the previous step for each icr.io region that you want to push images to.
- Add the secret to the build service account, and refer to the secrets in the build configuration file. For more information, see the Red Hat OpenShift documentation.
- Add the secret to the build service account by linking the secret that you just created to the builder service account that all builds in the cluster use.
oc secrets link builder <secret_name>
- List the build configurations and note the ones that you want to give push and pull access to IBM Cloud Container Registry.
oc get bc
- Set the image push secret for the build configuration to use the secret that you just created with Writer service access to IBM Cloud Container Registry.
oc set build-secret --push bc/<build_config_name> <secret_name>
- Set the image pull secret for the build configuration to pull from the registry that holds the initial build image. For example, you can use the secret that you just created with Reader service access to IBM Cloud Container Registry if the source image is in an IBM Cloud Container Registry repository.
oc set build-secret --pull bc/<build_config_name> <secret_name>
- After you update the build service account and build configuration file to push to IBM Cloud Container Registry, restart your build.
oc start-build <build_name>
- Get the name of your build pod, such as <build>-2-build.
oc get pods
- Check the logs of the build and note where the image was pushed.
oc logs <build_pod>
Example of a successful image push log:
...
Successfully pushed <region>.icr.io/<namespace>/<build_name>@sha256:<hash>
Push successful
- Check the images in your private registry to confirm that the image was created.
ibmcloud cr image list
Example output
Repository                                 Tag      Digest     Namespace     Created         Size     Security status
<region>.icr.io/<namespace>/<build_name>   latest   <digest>   <namespace>   2 minutes ago   182 MB   33 Issues
Your Red Hat OpenShift build can now pull images from and push images to IBM Cloud Container Registry.
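If you automate builds, you might want to capture the digest that the build log reports. A small self-contained sketch against a sample log line; the registry path and digest value are fabricated, not output from a real build.

```shell
# Sample build log excerpt (fabricated values).
log="Successfully pushed us.icr.io/mynamespace/mybuild@sha256:0f3a4e1bc9d2aa77
Push successful"

# Pull out the first sha256 digest that the log mentions.
digest=$(printf '%s\n' "$log" | grep -o 'sha256:[0-9a-f]*' | head -n 1)
echo "$digest"
```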
Using IBM Cloud Container Registry
By default, your Red Hat OpenShift on IBM Cloud cluster is set up to pull images from the remote, private IBM Cloud Container Registry icr.io
domains in the default
project. If you want to use images that are stored
in IBM Cloud Container Registry for other projects, you can pull the image to the internal registry in an image stream, or create image pull secrets for each global and regional registry in each project.
To import images into the internal registry: See Importing images from IBM Cloud Container Registry into the internal registry image stream.
To pull images directly from the external IBM Cloud Container Registry: See the following topics.
- Understanding how to authorize your cluster to pull images from a registry.
- Copying the all-icr-io secret from the default project to the project that you want to pull images from.
- Creating your own image pull secret.
- Adding the image pull secret to your deployment configuration or to the project service account.
Understanding how to authorize your cluster to pull images from a private registry
To pull images from a registry, your Red Hat OpenShift on IBM Cloud cluster uses a special type of Kubernetes secret, an imagePullSecret
. This image pull secret stores the credentials to access a container registry.
The container registry can be:
- A private namespace in your own IBM Cloud Container Registry.
- A private namespace in IBM Cloud Container Registry that belongs to a different IBM Cloud account.
- Any other private registry such as Docker.
However, by default, your cluster is set up to pull images from only your account's namespaces in IBM Cloud Container Registry, and deploy containers from these images to the default
Red Hat OpenShift project in your cluster. If
you need to pull images in other projects of the cluster or from other container registries, then you must set up your own image pull secrets.
Default image pull secret setup
Generally, your Red Hat OpenShift on IBM Cloud cluster is set up to pull images from all IBM Cloud Container Registry icr.io
domains from the default
Red Hat OpenShift project only. Review the following FAQs to learn
more about how to pull images in other Red Hat OpenShift projects or accounts, restrict pull access, or why your cluster might not have the default image pull secrets.
- How is my cluster set up to pull images from the default Red Hat OpenShift project?
- When you create a cluster, the cluster has an IBM Cloud IAM service ID that is given an IAM Reader service access role policy to IBM Cloud Container Registry. The service ID credentials are impersonated in a non-expiring API key that is stored in image pull secrets in your cluster. The image pull secrets are added to the default Kubernetes namespace and to the list of secrets in the default service account for this Red Hat OpenShift project. By using image pull secrets, your deployments can pull images (read-only access) from the global and regional IBM Cloud Container Registry to deploy containers in the default Red Hat OpenShift project.
- The global registry securely stores public images that are provided by IBM. You can refer to these public images across your deployments instead of having different references for images that are stored in each regional registry.
- The regional registry securely stores your own private Docker images.
- What if I don't have image pull secrets in the default Red Hat OpenShift project?
- You can check the image pull secrets by logging in to your cluster and running oc get secrets -n default | grep "icr-io". If no icr secrets are listed, the person who created the cluster might not have had the required permissions to IBM Cloud Container Registry in IAM. See Updating existing clusters to use the API key image pull secret.
- Can I restrict pull access to a certain regional registry?
- Yes, you can edit the existing IAM policy of the service ID to restrict the Reader service access role to that regional registry or to a registry resource such as a namespace. Before you can customize registry IAM policies, you must enable IBM Cloud IAM policies for IBM Cloud Container Registry.
Want to make your registry credentials even more secure? Ask your cluster admin to enable a key management service provider in your cluster to encrypt Kubernetes secrets, such as the image pull secret that stores your registry credentials.
- Can I pull images in a Red Hat OpenShift project other than default?
- Not by default. With the default cluster setup, you can deploy containers from any image that is stored in your IBM Cloud Container Registry namespace into the default Red Hat OpenShift project of your cluster. To use these images in any other Red Hat OpenShift projects or other IBM Cloud accounts, you can copy or create your own image pull secrets.
- Yes, create an API key in the IBM Cloud account that you want to use. Then, in each project of each cluster that you want to pull images from the IBM Cloud account, create a secret that holds the API key. For more information, follow along with this example that uses an authorized service ID API key.
To use a non-IBM Cloud registry such as Docker, see Accessing images that are stored in other private registries.
- Does the API key need to be for a service ID? What happens if I reach the limit of service IDs for my account?
- The default cluster setup creates a service ID to store IBM Cloud IAM API key credentials in the image pull secret. However, you can also create an API key for an individual user and store those credentials in an image pull secret. If you reach the IAM limit for service IDs, your cluster is created without the service ID and image pull secret, and can't pull images from the icr.io registry domains by default. You must create your own image pull secret, and use an API key for an individual user, such as a functional ID, not an IBM Cloud IAM service ID.
- I see image pull secrets for the regional registry domains and all registry domains. Which one do I use?
- Previously, Red Hat OpenShift on IBM Cloud created separate image pull secrets for each regional, public icr.io registry domain. Now, all the public and private icr.io registry domains for all regions are stored in a single all-icr-io image pull secret that is automatically created in the default Kubernetes project of your cluster.
For workloads in other Kubernetes namespaces in the cluster to pull container images from a private registry, you can now copy only the all-icr-io
image pull secret to that Kubernetes project. Then, specify the all-icr-io
secret in your service account or deployment. You don't need to copy the image pull secret that matches the regional registry of your image anymore. Also, keep in mind that you don't need image pull secrets for public registries, which don't
require authentication.
- After I copy or create an image pull secret in another Red Hat OpenShift project, am I done?
- Not quite. Your containers must be authorized to pull images by using the secret that you created. You can add the image pull secret to the service account for the namespace, or refer to the secret in each deployment. For instructions, see Using the image pull secret to deploy containers.
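The service account and deployment options can be sketched as follows. This minimal Deployment references the copied all-icr-io secret directly; the my-project, myapp, and us.icr.io names are assumptions for illustration, not values from your cluster.

```yaml
# Sketch only: my-project, myapp, and the us.icr.io image path are assumed names.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: my-project
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      # Reference the copied image pull secret so this workload can
      # authenticate to the private icr.io registry domains.
      imagePullSecrets:
      - name: all-icr-io
      containers:
      - name: myapp
        image: us.icr.io/<registry_namespace>/<image>:<tag>
```

If you instead add the secret to the project's default service account, you can omit the imagePullSecrets field from each deployment.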
Private network connection to icr.io registries
When you set up your IBM Cloud account to use service endpoints, you can use a private network connection to push images to and pull images from IBM Cloud Container Registry.
What do I need to do to set up my cluster to use the private connection to icr.io registries?
- Enable a Virtual Router Function (VRF) for your IBM Cloud infrastructure account so that you can use the IBM Cloud Container Registry private cloud service endpoint. To enable VRF, see Enabling VRF. To check whether a VRF is already enabled, use the ibmcloud account show command.
- Enable your IBM Cloud account to use service endpoints.
IBM Cloud Container Registry automatically uses the private cloud service endpoint. You don't need to enable the private cloud service endpoint for your Red Hat OpenShift on IBM Cloud clusters.
Updating existing clusters to use the API key image pull secret
New Red Hat OpenShift on IBM Cloud clusters store an API key in image pull secrets to authorize access to IBM Cloud Container Registry. With these image pull secrets, you can deploy containers from images that are stored in the icr.io registry domains. If your cluster was not created with the secrets, you can add the image pull secrets to your cluster.
Before you begin
-
Make sure that you have the following permissions: IBM Cloud IAM Operator or Administrator platform access role for Red Hat OpenShift on IBM Cloud. The account owner can give you the role by running the following command.
ibmcloud iam user-policy-create EMAIL --service-name containers-kubernetes --roles "Administrator,Operator"
-
IBM Cloud IAM Administrator platform access role for IBM Cloud Container Registry, across all regions and resource groups. The policy can't be scoped to a particular region or resource group. The account owner can give you the role by running the following command.
ibmcloud iam user-policy-create <your_user_email> --service-name container-registry --roles Administrator
-
If your account restricts service ID creation, add the Service ID creator role to Identity and Access Management in the console (iam-identity in the API or CLI).
-
If your account restricts API key creation, add the User API key creator role to Identity and Access Management in the console (iam-identity in the API or CLI).
Updating your image pull secret
To update your cluster image pull secret in the default Kubernetes namespace, complete the following steps.
-
Get your cluster ID.
ibmcloud oc cluster ls
-
Run the following command to create a service ID for the cluster and assign the service ID an IAM Reader service access role for IBM Cloud Container Registry. The command also creates an API key to impersonate the service ID credentials and stores the API key in a Kubernetes image pull secret in the cluster. The image pull secret is in the default Red Hat OpenShift project.
ibmcloud oc cluster pull-secret apply --cluster <cluster_name_or_ID>
When you run this command, the creation of IAM credentials and image pull secrets is initiated and can take some time to complete. You can't deploy containers that pull an image from the IBM Cloud Container Registry icr.io domains until the image pull secrets are created.
-
Verify that the image pull secrets are created in your cluster.
oc get secrets | grep icr-io
Example output
all-icr-io kubernetes.io/dockerconfigjson 1 16d
-
Update your container deployments to pull images from the icr.io domain name.
-
Optional: If you have a firewall, make sure you allow outbound network traffic to the registry subnets for the domains that you use.
-
Complete your setup by using one of the following options.
- To pull images in Red Hat OpenShift projects other than default or from other IBM Cloud accounts, copy or create another image pull secret.
- To restrict the image pull secret access to particular registry resources such as namespaces or regions:
- Make sure that IBM Cloud IAM policies for IBM Cloud Container Registry are enabled.
- Edit the IBM Cloud IAM policies for the service ID, or create another image pull secret.
Using an image pull secret to access images in external private registries
Set up your own image pull secret in your cluster to deploy containers to Red Hat OpenShift projects other than default, to use images that are stored in other IBM Cloud accounts, or to use images that are stored in external private registries. Further, you might create your own image pull secret to apply IAM access policies that restrict permissions to specific registry image namespaces or actions (such as push or pull).
After you create the image pull secret, your containers must use the secret to be authorized to pull an image from the registry. You can add the image pull secret to the service account for the project, or refer to the secret in each deployment. For instructions, see Using the image pull secret to deploy containers.
Image pull secrets are valid only for the Red Hat OpenShift projects that they were created for. Repeat these steps for every namespace where you want to deploy containers.
Before you begin:
- Set up a namespace in IBM Cloud Container Registry and push images to this namespace.
- Create a cluster.
- Access your Red Hat OpenShift cluster.
To use your own image pull secret, choose among the following options:
- Copy the image pull secret from the default Red Hat OpenShift project to other projects in your cluster.
- Create new IAM API key credentials and store them in an image pull secret to access images in other IBM Cloud accounts or to apply IAM policies that restrict access to certain registry domains or namespaces.
- Create an image pull secret to access images in external private registries.
If you already created an image pull secret in your project that you want to use in your deployment, see Deploying containers by using the created imagePullSecret.
Copying an existing image pull secret
You can copy an image pull secret, such as the one that is automatically created for the default Red Hat OpenShift project, to other projects in your cluster. If you want to use different IBM Cloud IAM API key credentials for the project, such as to restrict access to specific projects or to pull images from other IBM Cloud accounts, create an image pull secret instead.
-
List available Red Hat OpenShift projects in your cluster, or create a project to use.
oc get projects
Example output
default            Active
ibm-cert-store     Active
ibm-system         Active
kube-public        Active
kube-system        Active
To create a project:
oc new-project <project_name>
-
List the existing image pull secrets in the default Red Hat OpenShift project for IBM Cloud Container Registry.
oc get secrets -n default | grep icr-io
Example output
all-icr-io kubernetes.io/dockerconfigjson 1 16d
-
Copy the all-icr-io image pull secret from the default project to the project of your choice. The copied secret keeps the all-icr-io name in the new project.
oc get secret all-icr-io -n default -o yaml | sed 's/default/<new-project>/g' | oc create -n <new-project> -f -
-
Verify that the secrets are created successfully.
oc get secrets -n <project_name> | grep icr-io
-
To deploy containers, add the image pull secret to each deployment or to the service account of the project so that any deployment in the project can pull images from the registry.
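The sed pipeline in the copy step above can be sketched locally without a cluster. In this hypothetical example (the file name and the my-project target are assumptions), the substitution rewrites the namespace field of a minimal exported manifest while the secret name and data stay intact:

```shell
# Minimal secret manifest standing in for the output of
# `oc get secret all-icr-io -n default -o yaml` (hypothetical content).
cat <<'EOF' > /tmp/all-icr-io.yaml
apiVersion: v1
kind: Secret
metadata:
  name: all-icr-io
  namespace: default
type: kubernetes.io/dockerconfigjson
EOF

# The same substitution that the documented pipeline applies: every
# occurrence of "default" becomes the target project name.
sed 's/default/my-project/g' /tmp/all-icr-io.yaml
```

Note that the substitution is global; if your secret data happened to contain the string default, it would be rewritten too, so review the output before piping it to oc create.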
Creating an image pull secret with different IAM API key credentials
You can assign IBM Cloud IAM access policies to users or a service ID to restrict permissions to specific registry image namespaces or actions (such as push or pull). Then, create an API key and store these registry credentials in an image pull secret for your cluster.
For example, to access images in other IBM Cloud accounts, create an API key that stores the IBM Cloud Container Registry credentials of a user or service ID in that account. Then, in your cluster's account, save the API key credentials in an image pull secret for each cluster and cluster project.
The following steps create an API key that stores the credentials of an IBM Cloud IAM service ID. Instead of using a service ID, you might want to create an API key for a user ID that has an IBM Cloud IAM service access policy to IBM Cloud Container Registry. However, make sure that the user is a functional ID, or that you have a plan in case the user leaves, so that the cluster can still access the registry.
-
List available Red Hat OpenShift projects in your cluster, or create a project where you want to deploy containers from your registry images.
oc get projects
Example output
default            Active
ibm-cert-store     Active
ibm-system         Active
kube-public        Active
kube-system        Active
To create a project:
oc new-project <project_name>
-
Create an IBM Cloud IAM service ID for your cluster that is used for the IAM policies and API key credentials in the image pull secret. Be sure to give the service ID a description that helps you retrieve the service ID later, such as including both the cluster and project name.
ibmcloud iam service-id-create <cluster_name>-<project>-id --description "Service ID for IBM Cloud Container Registry in Red Hat OpenShift on IBM Cloud cluster <cluster_name> project <project>"
-
Create a custom IBM Cloud IAM policy for your cluster service ID that grants access to IBM Cloud Container Registry.
ibmcloud iam service-policy-create <cluster_service_ID> --roles <service_access_role> --service-name container-registry [--region <IAM_region>] [--resource-type namespace --resource <registry_namespace>]
<cluster_service_ID>
- Required. Replace with the <cluster_name>-<kube_namespace>-id service ID that you previously created for your Kubernetes cluster.
--service-name container-registry
- Required. Enter container-registry so that the IAM policy is for IBM Cloud Container Registry.
--roles <service_access_role>
- Required. Enter the service access role for IBM Cloud Container Registry that you want to scope the service ID access to. Possible values are Reader, Writer, and Manager.
--region <IAM_region>
- Optional. If you want to scope the access policy to certain IAM regions, enter the regions in a comma-separated list. Possible values are global and the local registry regions.
--resource-type namespace --resource <registry_namespace>
- Optional. If you want to limit access to only images in certain IBM Cloud Container Registry namespaces, enter namespace for the resource type and specify the <registry_namespace>. To list registry namespaces, run ibmcloud cr namespaces.
-
Create an API key for the service ID. Name the API key similar to your service ID, and include the service ID that you previously created, <cluster_name>-<kube_namespace>-id. Be sure to give the API key a description that helps you retrieve the key later.
ibmcloud iam service-api-key-create <cluster_name>-<project>-key <cluster_name>-<project>-id --description "API key for service ID <service_id> in Red Hat OpenShift on IBM Cloud cluster <cluster_name> project <project>"
-
Retrieve your API Key value from the output of the previous command.
Please preserve the API key! It can't be retrieved after it's created.
Name          <cluster_name>-<kube_namespace>-key
Description   key_for_registry_for_serviceid_for_kubernetes_cluster_multizone_namespace_test
Bound To      crn:v1:bluemix:public:iam-identity::a/1bb222bb2b33333ddd3d3333ee4ee444::serviceid:ServiceId-ff55555f-5fff-6666-g6g6-777777h7h7hh
Created At    2019-02-01T19:06+0000
API Key       i-8i88ii8jjjj9jjj99kkkkkkkkk_k9-llllll11mmm1
Locked        false
UUID          ApiKey-222nn2n2-o3o3-3o3o-4p44-oo444o44o4o4
-
Create an image pull secret to store the API key credentials in the cluster project. Repeat this step for each project of each cluster for each icr.io domain that you want to pull images from.
oc --namespace <project> create secret docker-registry <secret_name> --docker-server=<registry_URL> --docker-username=iamapikey --docker-password=<api_key_value> --docker-email=<docker_email>
--namespace <project>
- Required. Specify the Red Hat OpenShift project of your cluster that you used for the service ID name.
<secret_name>
- Required. Enter a name for your image pull secret.
--docker-server <registry_URL>
- Required. Set the URL to the image registry where your registry namespace is set up. For available domains, see Local regions.
--docker-username iamapikey
- Required. Enter the username to log in to your private registry. If you use IBM Cloud Container Registry, enter iamapikey.
--docker-password <api_key_value>
- Required. Enter the value of the API key that you previously retrieved.
--docker-email <docker_email>
- Required. If you have one, enter your Docker email address. If you don't, enter a fictional email address, such as a@b.c. This email is required to create a Kubernetes secret, but is not used after creation.
-
Verify that the secret was created successfully. Replace <project> with the project where you created the image pull secret.
oc get secrets --namespace <project>
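For reference, the image pull secret that these steps create stores the registry credentials as a .dockerconfigjson value. Assuming the hypothetical us.icr.io regional domain, the decoded value looks roughly like the following, where auth is the base64 encoding of iamapikey:<api_key_value>:

```json
{
  "auths": {
    "us.icr.io": {
      "username": "iamapikey",
      "password": "<api_key_value>",
      "email": "a@b.c",
      "auth": "<base64 of iamapikey:<api_key_value>>"
    }
  }
}
```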
Accessing images that are stored in other private registries
If you already have a private registry, you must store the registry credentials in a Kubernetes image pull secret and reference this secret from your configuration file.
Before you begin:
To create an image pull secret:
-
Create the Kubernetes secret to store your private registry credentials.
oc --namespace <project> create secret docker-registry <secret_name> --docker-server=<registry_URL> --docker-username=<docker_username> --docker-password=<docker_password> --docker-email=<docker_email>
--namespace <project>
- Required. The Red Hat OpenShift project of your cluster where you want to use the secret and deploy containers to. To list available projects in your cluster, run oc get projects.
<secret_name>
- Required. The name that you want to use for your image pull secret.
--docker-server <registry_URL>
- Required. The URL to the registry where your private images are stored.
--docker-username <docker_username>
- Required. The username to log in to your private registry.
--docker-password <token_value>
- Required. The password to log in to your private registry, such as a token value.
--docker-email <docker_email>
- Required. If you have one, enter your Docker email address. If you don't have one, enter a fictional email address, such as a@b.c. This email is required to create a Kubernetes secret, but is not used after creation.
-
Verify that the secret was created successfully. Replace <project> with the name of the project where you created the image pull secret.
oc get secrets --namespace <project>
Using the image pull secret to deploy containers
You can define an image pull secret in your pod deployment or store the image pull secret in your Kubernetes service account so that it is available for all deployments that don't specify a Kubernetes service account in the project.
To plan how image pull secrets are used in your cluster, choose between the following options.
- Referring to the image pull secret in your pod deployment: Use this option if you don't want to grant access to your registry for all pods in your project by default. Developers can include the image pull secret in each pod deployment that must access your registry.
- Storing the image pull secret in the Kubernetes service account: Use this option to grant access to images in your registry for all deployments in the selected Red Hat OpenShift projects. To store an image pull secret in the Kubernetes service account, use the following steps.
Storing the image pull secret in the Kubernetes service account for the selected project
Every Red Hat OpenShift project has a Kubernetes service account that is named default. Within the project, you can add the image pull secret to this service account to grant access for pods to pull images from your registry. Deployments that don't specify a service account automatically use the default service account for this Red Hat OpenShift project.
-
Check if an image pull secret already exists for your default service account.
oc describe serviceaccount default -n <project_name>
When <none> is displayed in the Image pull secrets entry, no image pull secret exists.
-
Add the image pull secret to your default service account.
-
Example command to add the image pull secret when no image pull secret is defined.
oc patch -n <project_name> serviceaccount/default -p '{"imagePullSecrets":[{"name": "<image_pull_secret_name>"}]}'
-
Example command to add the image pull secret when an image pull secret is already defined.
oc patch -n <project_name> serviceaccount/default --type='json' -p='[{"op":"add","path":"/imagePullSecrets/-","value":{"name":"<image_pull_secret_name>"}}]'
-
-
Verify that your image pull secret was added to your default service account.
oc describe serviceaccount default -n <project_name>
Example output
Name:                default
Namespace:           <namespace_name>
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <image_pull_secret_name>
Mountable secrets:   default-token-sh2dx
Tokens:              default-token-sh2dx
Events:              <none>
If the Image pull secrets entry says <secret> (not found), verify that the image pull secret exists in the same project as your service account by running oc get secrets -n <project_name>.
-
Create a pod configuration file that is named mypod.yaml to deploy a container from an image in your registry.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod-container
    image: <region>.icr.io/<project>/<image>:<tag>
-
Create the pod in the cluster by applying the mypod.yaml configuration file.
oc apply -f mypod.yaml
Setting up a cluster to pull entitled software
You can set up your Red Hat OpenShift on IBM Cloud cluster to pull entitled software, which is a collection of protected container images that are packaged in Helm charts that you are licensed to use by IBM. Entitled software is stored in a special IBM Cloud Container Registry cp.icr.io domain. To access this domain, you must create an image pull secret with an entitlement key for your cluster and add this image pull secret to the Kubernetes service account of each project where you want to deploy this entitled software.
Before you begin: Access your Red Hat OpenShift cluster.
-
Get the entitlement key for your entitled software library.
- Log in to MyIBM.com and scroll to the Container software library section. Click View library.
- From the Access your container software > Entitlement keys page, click Copy key. This key authorizes access to all the entitled software in your container software library.
-
In the project that you want to deploy your entitled containers, create an image pull secret so that you can access the cp.icr.io entitled registry. Use the entitlement key that you previously retrieved as the --docker-password value. For more information, see Accessing images that are stored in other private registries.
oc create secret docker-registry entitled-cp-icr-io --docker-server=cp.icr.io --docker-username=cp --docker-password=<entitlement_key> --docker-email=<docker_email> -n <project>
-
Add the image pull secret to the service account of the namespace so that any container in the project can use the entitlement key to pull entitled images. For more information, see Using the image pull secret to deploy containers.
oc patch -n <project> serviceaccount/default --type='json' -p='[{"op":"add","path":"/imagePullSecrets/-","value":{"name":"entitled-cp-icr-io"}}]'
-
Create a pod in the project that builds a container from an image in the entitled registry.
oc run <pod_name> --image=cp.icr.io/<image_name> -n <project> --generator=run-pod/v1
-
Check that your container was able to successfully build from the entitled image by verifying that the pod is in a Running status.
oc get pod <pod_name> -n <project>
Wondering what to do next? You can set up the entitled Helm chart repository, where Helm charts that incorporate entitled software are stored. If you already have Helm installed in your cluster, run helm repo add entitled https://raw.githubusercontent.com/IBM/charts/master/repo/entitled.
Adding a private registry to the global pull secret
With OpenShift Container Platform, you can set up a global image pull secret that each worker node in the cluster can use to pull images from a private registry.
By default, your Red Hat OpenShift on IBM Cloud cluster has a global image pull secret for the following registries, so that default Red Hat OpenShift components can be deployed.
cloud.openshift.com
quay.io
registry.connect.redhat.com
registry.redhat.io
Do not replace the global pull secret with a pull secret that does not have credentials to the default Red Hat registries. If you do, the default Red Hat OpenShift components that are installed in your cluster, such as the OperatorHub, might fail because they can't pull images from these registries.
Before you begin:
- Download the jq JSON processor command line package. You use jq to combine the JSON value of the default global pull secret with the private registry pull secret that you want to add.
- Access your Red Hat OpenShift cluster.
To add private registries, edit the global pull-secret in the openshift-config project.
-
Create a secret value that holds the credentials to access your private registry and store the decoded secret value in a JSON file. When you create the secret value, the credentials are automatically encoded to base64. By using the --dry-run option, only the secret value is created and no secret object is created in your cluster. The decoded secret value is then stored in a JSON file to later use in your global pull secret.
oc create secret docker-registry <secret_name> --docker-server=<registry_URL> --docker-username=<docker_username> --docker-password=<docker_password> --docker-email=<docker_email> --dry-run=true --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode > myregistryconfigjson
--namespace <project>
- Required. The Red Hat OpenShift project of your cluster where you want to use the secret and deploy containers to. To list available projects in your cluster, run oc get ns.
<secret_name>
- Required. The name that you want to use for your image pull secret.
--docker-server <registry_URL>
- Required. The URL to the registry where your private images are stored.
--docker-username <docker_username>
- Required. The username to log in to your private registry.
--docker-password <token_value>
- Required. The password to log in to your private registry, such as a token value.
--docker-email <docker_email>
- Required. If you have one, enter your Docker email address. If you don't have one, enter a fictional email address, such as a@b.c. This email is required to create a Kubernetes secret, but is not used after creation.
--dry-run=true
- Include this option to create the secret value only, and not create and store the secret object in your cluster.
--output="jsonpath={.data.\.dockerconfigjson}"
- Get only the .dockerconfigjson value from the data section of the Kubernetes secret.
| base64 --decode > myregistryconfigjson
- Download the decoded secret data to a local myregistryconfigjson file.
-
Retrieve the decoded secret value of the default global pull secret and store the value in a dockerconfigjson file.
oc get secret pull-secret -n openshift-config --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode > dockerconfigjson
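The jsonpath expression in this step extracts a base64-encoded string, which is why the pipeline ends in base64 --decode. As a self-contained sketch, with an encoded string that is a stand-in for real secret data, not actual credentials:

```shell
# 'eyJhdXRocyI6e319' is the base64 encoding of the minimal
# dockerconfigjson document {"auths":{}}; decoding recovers the JSON.
printf 'eyJhdXRocyI6e319' | base64 --decode
```

This prints {"auths":{}}; against a real cluster, the decoded output is the full auths map shown in the example output later in these steps.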
-
Combine the downloaded private registry pull secret myregistryconfigjson file with the default global pull secret dockerconfigjson file.
jq -s '.[0] * .[1]' dockerconfigjson myregistryconfigjson > dockerconfigjson-merged
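To illustrate the merge semantics with hypothetical contents: if dockerconfigjson held only a quay.io entry and myregistryconfigjson held only a <private_registry> entry, jq's * operator merges the two objects recursively, so the resulting dockerconfigjson-merged file combines both entries under auths:

```json
{
  "auths": {
    "quay.io": {
      "auth": "<encoded_string>",
      "email": "email@example.com"
    },
    "<private_registry>": {
      "auth": "<encoded_string>",
      "email": "email@example.com"
    }
  }
}
```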
-
Update the global pull secret with the combined dockerconfigjson-merged file.
oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=dockerconfigjson-merged
-
Verify that the global pull secret is updated. Check that your private registry and each of the default Red Hat registries are in the output of the following command.
oc get secret pull-secret -n openshift-config --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
Example output
{
  "auths": {
    "cloud.openshift.com": {
      "auth": "<encoded_string>",
      "email": "email@example.com"
    },
    "quay.io": {
      "auth": "<encoded_string>",
      "email": "email@example.com"
    },
    "registry.connect.redhat.com": {
      "auth": "<encoded_string>",
      "email": "email@example.com"
    },
    "registry.redhat.io": {
      "auth": "<encoded_string>",
      "email": "email@example.com"
    },
    "<private_registry>": {
      "username": "iamapikey",
      "password": "<encoded_string>",
      "email": "email@example.com",
      "auth": "<encoded_string>"
    }
  }
}
-
To pick up the global configuration changes, reload all the worker nodes in your cluster.
-
Note the ID of the worker nodes in your cluster.
ibmcloud oc worker ls -c <cluster_name_or_ID>
-
For Classic clusters: Reload each worker node. You can reload multiple worker nodes by including multiple -w options, but make sure to leave enough worker nodes running at the same time to avoid an outage for your apps.
ibmcloud oc worker reload -c <cluster_name_or_ID> -w <workerID_1> -w <workerID_2>
-
For VPC clusters: Replace each worker node. Before you begin, make sure that your cluster has enough other worker nodes so that your pods can be rescheduled and continue to run.
ibmcloud oc worker replace --cluster <cluster_name_or_ID> --worker <worker_node_ID>
-
-
After the worker nodes are back in a healthy state, verify that the global pull secret is updated on a worker node.
-
Start a debugging pod to log in to a worker node. Use the Private IP that you retrieved earlier for the <node_name>.
oc debug node/<node_name>
-
Change the root directory to the host so that you can view files on the worker node.
chroot /host
-
Verify that the Docker configuration file has the registry credentials that match the global pull secret that you set.
vi /.docker/config.json
-
Updating the global pull secret
Complete the following steps to update the global pull secret in your Red Hat OpenShift on IBM Cloud cluster.
- Create a secret that has the credentials for the registry that you want to use.
Example:
oc create secret docker-registry docker-auth-secret \
  --docker-server=REGISTRY \
  --docker-username=USERNAME \
  --docker-password=PASSWORD \
  --namespace kube-system
Example that passes an entitlement key in the create secret command:
oc create secret docker-registry docker-auth-secret \
  --docker-server=REGISTRY \
  --docker-username=cp \
  --docker-password=ENTITLEMENT-KEY \
  --namespace kube-system
- Create a DaemonSet to apply the secret across all worker nodes.
cat << EOF | oc create -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: update-docker-config
  namespace: kube-system
  labels:
    app: update-docker-config
spec:
  selector:
    matchLabels:
      name: update-docker-config
  template:
    metadata:
      labels:
        name: update-docker-config
    spec:
      initContainers:
      - command: ["/bin/sh", "-c"]
        args:
        - >
          echo "Checking if RHEL or RHCOS host";
          [[ -s /docker-config/.docker/config.json ]] && CONFIG_PATH=/docker-config/.docker || CONFIG_PATH=/docker-config/root/.docker;
          echo "Backing up or restoring config.json";
          [[ -s \$CONFIG_PATH/config.json ]] && cp \$CONFIG_PATH/config.json \$CONFIG_PATH/config.json.bak || cp \$CONFIG_PATH/config.json.bak \$CONFIG_PATH/config.json;
          echo "Merging secret with config.json";
          /host/usr/bin/jq -s '.[0] * .[1]' \$CONFIG_PATH/config.json /auth/.dockerconfigjson > \$CONFIG_PATH/config.tmp;
          mv \$CONFIG_PATH/config.tmp \$CONFIG_PATH/config.json;
          echo "Sending signal to reload crio config";
          pidof crio;
          kill -1 \$(pidof crio)
        image: icr.io/ibm/alpine:latest
        imagePullPolicy: IfNotPresent
        name: updater
        resources: {}
        securityContext:
          privileged: true
        volumeMounts:
        - name: docker-auth-secret
          mountPath: /auth
        - name: docker
          mountPath: /docker-config
        - name: bin
          mountPath: /host/usr/bin
        - name: lib64
          mountPath: /lib64
      containers:
      - resources:
          requests:
            cpu: 0.01
        image: icr.io/ibm/alpine:latest
        name: sleepforever
        command: ["/bin/sh", "-c"]
        args:
        - >
          while true; do sleep 100000; done
      hostPID: true
      volumes:
      - name: docker-auth-secret
        secret:
          secretName: docker-auth-secret
      - name: docker
        hostPath:
          path: /
      - name: bin
        hostPath:
          path: /usr/bin
      - name: lib64
        hostPath:
          path: /lib64
          type: Directory
EOF
- Verify that the pods are running.
oc get daemonset -n kube-system update-docker-config
Updating an IBM Cloud Kubernetes Service containerd custom registry configuration
With Kubernetes version 1.22 or later, you can use containerd configuration files on worker nodes to configure pulling from a container registry. You can use a daemonset to update the configurations across all nodes in a cluster, which prevents configurations from being wiped when worker nodes reload or when new workers are added.
Example daemonset to update a containerd custom registry configuration
Use the example YAML file to define a daemonset that runs on all worker nodes to set or update a containerd registry host configuration and mount to the corresponding containerd registry path.
The example sets the following registry host configuration for Docker Hub. This registry host configuration is already provided and automatically configured during the worker provisioning phase. The init container initializes hosts.toml on every worker node after deployment and after worker nodes reload or restart.
server = "https://docker.io"
[host."https://registry-1.docker.io"]
capabilities = ["pull", "resolve"]
Example YAML file:
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
name: containerd-dockerhub-registry-config
name: containerd-dockerhub-registry-config
namespace: kube-system
spec:
selector:
matchLabels:
name: containerd-dockerhub-registry-config
template:
metadata:
labels:
name: containerd-dockerhub-registry-config
spec:
initContainers:
- image: alpine:3.13.6
name: containerd-dockerhub-registry-config
command:
- /bin/sh
- -c
- |
#!/bin/sh
set -uo pipefail
cat << EOF > /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://docker.io"
[host."https://registry-1.docker.io"]
capabilities = ["pull", "resolve"]
EOF
volumeMounts:
- mountPath: /etc/containerd/certs.d/docker.io/
name: dockerhub-registry-config
containers:
- name: pause
image: "us.icr.io/armada-master/pause:3.5"
imagePullPolicy: IfNotPresent
volumes:
- name: dockerhub-registry-config
hostPath:
path: /etc/containerd/certs.d/docker.io/
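The same mechanism works for registries other than Docker Hub. As a sketch, a hosts.toml written to /etc/containerd/certs.d/registry.example.com/hosts.toml (the registry and mirror host names here are assumptions) could point pulls at an internal mirror first:

```toml
# Hypothetical example: containerd tries mirror.example.com for pull and
# resolve operations, and falls back to the server when the mirror is
# unavailable.
server = "https://registry.example.com"

[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
```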
For more information on updating a containerd registry host configuration, see the containerd documentation.