Architecture and dependencies of the service
Review sample architectures, components, and dependencies for your Red Hat® OpenShift® on IBM Cloud® clusters.
In Red Hat OpenShift on IBM Cloud, your clusters comprise an IBM-managed master that secures components such as the API server and etcd, and customer-managed worker nodes that you configure to run your app workloads, as well as Red Hat OpenShift-provided default components. The default components within the cluster, such as the Red Hat OpenShift web console or OperatorHub, vary with the Red Hat OpenShift version of your cluster.
Classic Red Hat OpenShift architecture
Review the architecture diagram and then scroll through the following tables for a description of master and worker node components in Red Hat OpenShift on IBM Cloud clusters that run on classic infrastructure. For more information about the OpenShift Container Platform architecture, see the Red Hat OpenShift docs.
When you run oc get nodes, you might notice that the ROLES of your worker nodes are marked as both master,worker. These nodes are worker nodes in IBM Cloud, and don't include the master components that are managed by IBM. Instead, these nodes are marked as master because they run OpenShift Container Platform components that are required to set up and manage default resources within the cluster, such as the OperatorHub and internal registry.
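For example, running the command on a three-node cluster produces output similar to the following; the node names, ages, and versions are illustrative only.

oc get nodes

NAME           STATUS   ROLES           AGE   VERSION
10.176.48.67   Ready    master,worker   12d   v1.29.x
10.176.48.79   Ready    master,worker   12d   v1.29.x
10.176.48.85   Ready    master,worker   12d   v1.29.x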
Red Hat OpenShift master components
Review the following components in the IBM-managed master of your Red Hat OpenShift on IBM Cloud cluster.
You can't modify these components. IBM manages the components and automatically updates them during master patch updates.
In OpenShift Container Platform 4, many components are configured by a corresponding operator for ease of management. The following table discusses these operators and components together, to focus on the main functionality the component provides to the cluster.
- Single tenancy
- The master and all master components are dedicated only to you, and are not shared with other IBM customers.
- Replicas
- Master components, including the Red Hat OpenShift API server and etcd data store, have three replicas and, if located in a multizone metro, are spread across zones for even higher availability. The master components are backed up every 8 hours.
cloud-controller-manager
- The cloud controller manager manages cloud provider-specific components such as the IBM Cloud load balancer.
cluster-health
- The cluster health component monitors the health of the cluster and integrates with IBM Cloud monitoring and metrics for the service.
cluster-policy-controller
- The cluster-policy-controller maintains policy resources that are required to create pods within the cluster.
cluster-version-operator
- The cluster version operator (CVO) installs and updates other operators that run in the cluster. For more information, see the GitHub project.
control-plane-operator
- The control plane operator manages the installation and update of control plane components in the master.
etcd, etcd-molecule, etcd-operator
- etcd is a highly available key value store that stores the state of all Kubernetes resources of a cluster, such as services, deployments, and pods. Data in etcd is backed up every 8 hours to an encrypted storage instance that IBM manages.
kube-controller-manager, openshift-controller-manager
- The Kubernetes controller watches the state of objects within the cluster, such as the replica set of a workload. When the state of an object changes, for example if a pod in a replica set goes down, the controller manager initiates correcting actions to achieve the required state. The Red Hat OpenShift controller performs the same function for objects that are specific to the Red Hat OpenShift API, such as projects.
kube-scheduler
- The Kubernetes scheduler watches for newly created pods and decides where to deploy them based on capacity, performance needs, policy constraints, anti-affinity specifications, and workload requirements. If no worker node can be found that matches the requirements, the pod is not deployed in the cluster.
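Because the scheduler honors anti-affinity specifications, you can ask it to spread replicas of an app across separate worker nodes. The following is a minimal sketch; the app name, labels, and image are hypothetical placeholders.

oc apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        podAntiAffinity:
          # Require that no two replicas land on the same worker node.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: myapp
            topologyKey: kubernetes.io/hostname
      containers:
      - name: myapp
        image: icr.io/mynamespace/myapp:latest  # hypothetical image
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
EOF

If no worker node satisfies the anti-affinity rule, the extra pod remains in Pending state, as described above.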
manifests-bootstrapper
- The manifests-bootstrapper job sets up the master with the required certificates to join as the master node of the cluster.
oauth-openshift
- The built-in OAuth server is automatically set up to integrate with IBM Cloud Identity and Access Management (IAM). You can't add other supported identity providers to the cluster. For more information about how to authenticate with the cluster through IAM, see Accessing Red Hat OpenShift clusters.
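For example, one common way to authenticate through IAM from a CLI is to log in with an IBM Cloud API key; the placeholders are hypothetical and the full set of options is described in Accessing Red Hat OpenShift clusters.

# Look up the master URL (server endpoint) of your cluster.
ibmcloud oc cluster get --cluster <cluster_name_or_ID>

# Log in to the cluster by using an IBM Cloud IAM API key.
oc login -u apikey -p <IBM_CLOUD_API_KEY> --server=<master_URL>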
openshift-apiserver, openshift-apiserver-operator, kube-apiserver
- The API server is the main entry point for all cluster management requests from the worker node to the master. The API server validates and processes requests that change the state of Kubernetes objects, such as pods or services, and Red Hat OpenShift objects, such as projects or users. Then, the API server stores this state in the etcd data store.
konnectivity-server, konnectivity-operator
- The Konnectivity server works with the Konnectivity agent to securely connect the master to the worker node. This connection supports apiserver proxy calls to your pods and services; oc exec, attach, and logs calls to the kubelet; and mutating and validating webhooks.
- Admission controllers
- Admission controllers are implemented for specific features in Red Hat OpenShift on IBM Cloud clusters. With admission controllers, you can set up policies in your cluster that determine whether a particular action in the cluster is allowed or not. In the policy, you can specify conditions when a user can't perform an action, even if this action is part of the general permissions that you assigned the user by using RBAC. Therefore, admission controllers can provide an extra layer of security for your cluster before an API request is processed by the Red Hat OpenShift API server. When you create a Red Hat OpenShift cluster, the following Kubernetes admission controllers are automatically installed in the given order in the Red Hat OpenShift master, which can't be changed by the user:
NamespaceLifecycle
LimitRanger
ServiceAccount
DefaultStorageClass
ResourceQuota
StorageObjectInUseProtection
PersistentVolumeClaimResize
Priority
BuildByStrategy
OriginPodNodeEnvironment
PodNodeSelector
ExternalIPRanger
NodeRestriction
SecurityContextConstraint
SCCExecRestrictions
PersistentVolumeLabel
OwnerReferencesPermissionEnforcement
PodTolerationRestriction
openshift.io/JenkinsBootstrapper
openshift.io/BuildConfigSecretInjector
openshift.io/ImageLimitRange
openshift.io/RestrictedEndpointsAdmission
openshift.io/ImagePolicy
openshift.io/IngressAdmission
openshift.io/ClusterResourceQuota
MutatingAdmissionWebhook
ValidatingAdmissionWebhook
- You can install your own admission controllers in the cluster or choose from the optional admission controllers that Red Hat OpenShift on IBM Cloud provides. For example, for container image security enforcement, you can install Portieris to block container deployments from unsigned images.
- If you manually installed admission controllers and you don't want to use them anymore, make sure to remove them entirely. If admission controllers aren't entirely removed, they might block all actions that you want to perform on the cluster.
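For example, you can list the webhook-based admission controllers that are currently registered in your cluster before you decide to remove any that you installed manually.

oc get mutatingwebhookconfigurations
oc get validatingwebhookconfigurations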
Red Hat OpenShift worker node components
Review the following components in the customer-managed worker nodes of your Red Hat OpenShift on IBM Cloud cluster.
These components run on your worker nodes so that you can use them with the workloads that you deploy to your cluster. For example, your apps might use an operator from the OperatorHub that runs a container from an image in the internal registry. You are responsible for your usage of these components, but IBM provides updates for them in the worker node patch updates that you choose to apply.
In OpenShift Container Platform, many components are configured by a corresponding operator for ease of management. The following table discusses these operators and components together, to focus on the main functionality the component provides to the cluster.
- Single tenancy
- The worker nodes and all worker node components are dedicated only to you, and are not shared with other IBM customers. However, if you use worker node virtual machines, the underlying hardware might be shared with other IBM customers depending on the level of hardware isolation that you choose.
- Operating System
- For a list of supported operating systems by cluster version, see the Version information.
- CRI-O container runtime
- Your worker nodes are installed with CRI-O as the container runtime interface. For more information, see Container runtime.
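For example, the wide output of oc get nodes includes a CONTAINER-RUNTIME column, which reports a value such as cri-o://<version> for each worker node.

oc get nodes -o wide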
- Projects
- Red Hat OpenShift organizes your resources into projects, which are Kubernetes namespaces with annotations. Compared to community Kubernetes clusters, your cluster includes many more components in these projects to run Red Hat OpenShift features such as the catalog. Select components of these projects are described in the following rows. For more information, see Working with projects.
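For example, you can create a project for your workloads and then list the projects that you have access to; the project name is a placeholder.

oc new-project my-project
oc get projects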
calico-system, tigera-operator
- Calico manages network policies for your cluster, and includes a few components to manage container network connectivity, IP address assignment, and network traffic control. The Tigera operator installs and manages the lifecycle of Calico components.
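As a sketch of the kind of policy that Calico enforces, the following Kubernetes NetworkPolicy allows ingress to pods with the label app: myapp only from pods in the same project that have the label role: frontend; the labels are hypothetical placeholders.

oc apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
EOF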
default
- This project is used if you don't specify a project or create a project for your Kubernetes resources.
ibm-system
- This project includes the ibm-cloud-provider-ip deployment that works with keepalived to provide health checking and Layer 4 load balancing for requests to app pods.
kube-system
- This project includes many components that are used to run Kubernetes on the worker node; see the example after this list.
- ibm-master-proxy: The ibm-master-proxy is a daemon set that forwards requests from the worker node to the IP addresses of the highly available master replicas. In single zone clusters, the master has three replicas on separate hosts. For clusters that are in a multizone-capable zone, the master has three replicas that are spread across zones. A highly available load balancer forwards requests to the master domain name to the master replicas.
- kubelet: The kubelet is a worker node agent that runs on every worker node and is responsible for monitoring the health of pods that run on the worker node and for watching the events that the API server sends. Based on the events, the kubelet creates or removes pods, ensures liveness and readiness probes, and reports back the status of the pods to the API server.
- vpn: The Konnectivity agent works with the Konnectivity server to securely connect the master to the worker node. This connection supports apiserver proxy calls to your pods and services, and oc exec, attach, and logs calls to the kubelet.
- Other components: The kube-system project also includes components to manage IBM-provided resources such as storage plug-ins for file and block storage, the Ingress application load balancer (ALB), and keepalived.
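For example, you can view these components by listing the pods and daemon sets that run in the kube-system project.

oc get pods -n kube-system
oc get daemonsets -n kube-system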
openshift-cloud-credential-operator
- The cloud credential operator manages a controller for Red Hat OpenShift components that request cloud provider credentials. The controller ensures that only the credentials that are required for the operation are used, and not any elevated permissions like admin. For more information, see the GitHub project.
openshift-cluster-node-tuning-operator
- IBM manages the node tuning operator, which runs a daemon set on each worker node in the cluster to tune worker nodes.
openshift-cluster-samples-operator
- The samples operator manages select image streams and templates that come with the Red Hat OpenShift cluster by default. You can deploy these templates from the Developer perspective in the Red Hat OpenShift web console.
openshift-cluster-storage-operator
- The cluster storage operator makes sure that a default storage class is set.
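For example, you can check which storage class is marked as the default for your cluster; the default class is annotated with (default) in the output.

oc get storageclass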
openshift-console, openshift-console-operator
- The Red Hat OpenShift web console is a user-friendly, web-based interface that you can use to manage the Red Hat OpenShift and Kubernetes resources that run in your cluster. You can also use the console to display an oc login token to authenticate to your cluster from a CLI. For more information, see Navigating the Red Hat OpenShift console.
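For example, you can display the console URL and your current login token from the CLI, and then reuse the token in an oc login command on another machine; the server URL is a placeholder.

oc whoami --show-console
oc whoami --show-token
oc login --token=<token> --server=<master_URL>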
openshift-dns, openshift-dns-operator
- The DNS project includes the components to validate incoming network traffic against the iptables rules that are set up on the worker node, and proxies requests that are allowed to enter or leave the cluster.
openshift-image-registry
- Red Hat OpenShift provides an internal container image registry that you can use to locally manage and view images through the console. Alternatively, you can set up the private IBM Cloud Container Registry or import images from IBM Cloud Container Registry to the internal registry. The internal registry comes with a File Storage for Classic volume in your IBM Cloud infrastructure account to store the registry images. The file storage volume is provisioned through the image-registry-storage persistent volume claim (PVC).
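For example, you can verify the registry storage volume by checking the persistent volume claim in the openshift-image-registry project.

oc get pvc image-registry-storage -n openshift-image-registry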
openshift-ingress, openshift-ingress-operator
- Red Hat OpenShift uses routes to directly expose an app's service on a hostname so that external clients can reach the service. To create routes, the cluster uses the Ingress operator. You can also use Ingress to expose apps externally and customize routing. Ingress consists of three components: the Ingress operator, Ingress controller, and Route resources. The Ingress controller maps the service to the hostname. By default, the Ingress controller includes two replicas. Make sure that your cluster has at least two worker nodes so that the Ingress controller can run on separate compute hosts for higher availability.
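For example, for a service that already exists in your project, you can create a route and then check the hostname that the Ingress controller assigns; the service name is a placeholder.

oc expose service my-app
oc get routes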
openshift-marketplace
- The marketplace includes the OperatorHub that comes with the Red Hat OpenShift cluster by default. The OperatorHub includes operators from Red Hat and third-party providers. Keep in mind that these operators are provided by the community, might not integrate with your cluster, and are not supported by IBM. You can enable operators from the OperatorHub in the Red Hat OpenShift web console.
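For example, you can list the operators that the catalog sources in the OperatorHub make available to your cluster.

oc get packagemanifests -n openshift-marketplace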
openshift-monitoring
- OpenShift Container Platform includes a built-in monitoring stack for your cluster that provides metrics and alert management capabilities. For a comparison of the built-in monitoring stack and other options such as IBM Cloud Monitoring, see Understanding options for logging and monitoring.
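For example, you can review the monitoring stack components, such as the Prometheus and Alertmanager pods, in the openshift-monitoring project.

oc get pods -n openshift-monitoring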
openshift-multus
- OpenShift Container Platform uses the Multus container network interface (CNI) plug-in to allow multiple pod networks. However, you can't configure the cluster to use multiple pod networks. Red Hat OpenShift on IBM Cloud clusters support only Calico, which is set up for your cluster by default. If enabled, Service Mesh uses the Multus plug-in.
openshift-network-operator
- The cluster network operator (CNO) manages the cluster network components that are set up by default, such as the CNI pod network provider plug-in and DNS operator.
openshift-operator-lifecycle-manager
- The operator lifecycle manager (OLM) manages the lifecycle of all operators and the catalog that run in the cluster, including the operators for the default components and any custom operators that you add.
openshift-service-ca, openshift-service-ca-operator
- The certificate authority (CA) operator runs certificate signing and injects certificates into API server resources and configmaps in the cluster. For more information, see the GitHub project.
VPC cluster service architecture
The following architectural overviews are specific to the VPC infrastructure provider. For an architectural overview for the classic infrastructure provider, see Classic cluster service architecture.
Review the architecture diagrams and then scroll through the following table for a description of master and worker node components in Red Hat OpenShift on IBM Cloud clusters that run on virtual private cloud (VPC) compute infrastructure.
Cluster with public and private cloud service endpoints
The following diagram shows the components of your cluster and how they interact when both the public and private cloud service endpoints are enabled. Because both service endpoints are enabled, your VPC creates a public load balancer for each service for inbound traffic.
Cluster with private cloud service endpoint only
The following diagram shows the components of your cluster and how they interact when only the private cloud service endpoint is enabled. Because only the private cloud service endpoint is enabled, your VPC creates a private load balancer for each service for inbound traffic.
VPC master and worker node components
Masters and worker nodes include the same components as described in the Classic cluster architecture for clusters. For more information about the OpenShift Container Platform architecture, see the Red Hat OpenShift docs.
- Master
- Master components, including the API server and etcd, have three replicas and are spread across zones for even higher availability. Masters include the same components as described in the Classic cluster architecture for clusters. The master and all the master components are dedicated only to you, and are not shared with other IBM customers.
- Worker node
- With Red Hat OpenShift on IBM Cloud, the virtual machines that your cluster manages are instances that are called worker nodes. These worker node virtual machines and all the worker node components are dedicated to you only and are not shared with other IBM customers. However, the underlying hardware is shared with other IBM customers. You manage the worker nodes through the automation tools that are provided by Red Hat OpenShift on IBM Cloud, such as the API, CLI, or console. Unlike classic clusters, you don't see VPC compute worker nodes in your infrastructure portal or a separate infrastructure bill, but instead you manage all maintenance and billing activity for the worker nodes from Red Hat OpenShift on IBM Cloud.
- Worker nodes include the same components as described in the Classic cluster architecture.
- When you run oc get nodes, you might notice that the ROLES of your worker nodes are marked as both master,worker. These nodes are worker nodes in IBM Cloud, and don't include the master components that are managed by IBM. Instead, these nodes are marked as master because they run OpenShift Container Platform components that are required to set up and manage default resources within the cluster, such as the OperatorHub and internal registry.
- Cluster networking
- Your worker nodes are created in a VPC subnet in the zone that you specify. Communication between the master and worker nodes is over the private network. If you create a cluster with the public and private cloud service endpoints enabled, authenticated external users can communicate with the master over the public network, such as to run oc commands. If you create a cluster with only the private cloud service endpoint enabled, authenticated external users can communicate with the master over the private network only. You can set up your cluster to communicate with resources in on-premises networks, other VPCs, or classic infrastructure by setting up a VPC VPN, IBM Cloud Direct Link, or IBM Cloud Transit Gateway on the private network.
- App networking
- Virtual Private Cloud load balancers are automatically created in your VPC outside the cluster for any networking services that you create in your cluster. For example, a VPC load balancer exposes the router services in your cluster by default. Or, you can create a Kubernetes LoadBalancer service for your apps, and a VPC load balancer is automatically generated (see the example that follows). VPC load balancers are multizone and route requests for your app through the private node ports that are automatically opened on your worker nodes. If the public and private cloud service endpoints are enabled, the routers and VPC load balancers are created as public by default. If only the private cloud service endpoint is enabled, the routers and VPC load balancers are created as private by default. For more information, see Public or Private app networking for VPC clusters. Calico is used as the cluster networking policy fabric.
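As a sketch of this behavior, creating a Kubernetes LoadBalancer service similar to the following causes a VPC load balancer to be provisioned outside the cluster; the app name, label, and ports are hypothetical placeholders.

oc create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp-vpc-lb
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
EOF

# The EXTERNAL-IP column shows the hostname or address of the generated VPC load balancer.
oc get service myapp-vpc-lb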
- Storage
- You can set up IBM Cloud Object Storage and Cloud Databases only.