Understanding IBM Cloud Kubernetes Service

Learn more about IBM Cloud® Kubernetes Service, its capabilities, and the options that are available to you to customize the cluster to your needs.

IBM Cloud Kubernetes Service is a managed offering to create your own Kubernetes cluster of compute hosts to deploy and manage containerized apps on IBM Cloud. As a certified Kubernetes provider, IBM Cloud Kubernetes Service provides intelligent scheduling, self-healing, horizontal scaling, service discovery and load balancing, automated rollouts and rollbacks, and secret and configuration management for your apps. Combined with an intuitive user experience, built-in security and isolation, and advanced tools to secure, manage, and monitor your cluster workloads, you can rapidly deliver highly available and secure containerized apps in the public cloud.

Review frequently asked questions and key technologies that IBM Cloud Kubernetes Service uses.

What is Kubernetes?

Kubernetes is an open source platform for managing containerized workloads and services across multiple hosts, and offers management tools for deploying, automating, monitoring, and scaling containerized apps with minimal to no manual intervention.

Image: Kubernetes certification badge, indicating that IBM Cloud Kubernetes Service (formerly IBM Cloud Container Service) is a Certified Kubernetes offering.

The Kubernetes open source project combines experience from running containerized infrastructure with production workloads, open source contributions, and Docker container management tools. The Kubernetes infrastructure provides an isolated and secure app platform for managing containers that is portable, extensible, and self-healing in case of failover. For more information, see What is Kubernetes?.

Learn more about the key concepts of Kubernetes as illustrated in the following image.

Figure: Example deployment and namespaces, illustrating the key concepts of Kubernetes.

Account

Your account refers to your IBM Cloud account.

Cluster, worker pool, and worker node

A Kubernetes cluster consists of a master and one or more compute hosts that are called worker nodes. Worker nodes are organized into worker pools of the same flavor, or profile of CPU, memory, operating system, attached disks, and other properties. The worker nodes correspond to the Kubernetes Node resource, and are managed by a Kubernetes master that centrally controls and monitors all Kubernetes resources in the cluster. So when you deploy the resources for a containerized app, the Kubernetes master decides which worker node to deploy those resources on, accounting for the deployment requirements and available capacity in the cluster. Kubernetes resources include services, deployments, and pods.

Namespace

Kubernetes namespaces are a way to divide your cluster resources into separate areas where you can deploy apps and restrict access, such as when you want to share the cluster with multiple teams. For example, system resources that are configured for you are kept in separate namespaces like kube-system or ibm-system. If you don't designate a namespace when you create a Kubernetes resource, the resource is automatically created in the default namespace.
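
As a sketch, the following manifest creates a namespace and a pod inside it; the team-a name and the nginx image are illustrative placeholders, not a prescribed layout:

```yaml
# Hypothetical namespace for one team's workloads.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# A pod created in that namespace instead of "default".
apiVersion: v1
kind: Pod
metadata:
  name: hello
  namespace: team-a
spec:
  containers:
  - name: hello
    image: nginx
```

If the metadata.namespace field were omitted from the pod, it would be created in the default namespace.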

Service

A service is a Kubernetes resource that groups a set of pods and provides network connectivity to these pods without exposing the actual private IP address of each pod. You can use a service to make your app available within your cluster or to the public internet.
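
A minimal service manifest might look like the following sketch; the label selector and port numbers are illustrative. Because the type field is omitted, the service defaults to ClusterIP and the app is reachable only inside the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # groups all pods that carry this label
  ports:
  - port: 80           # port that the service exposes inside the cluster
    targetPort: 8080   # port that the app listens on in each pod
```

To make the app reachable from outside the cluster, you can instead set the type field to NodePort or LoadBalancer.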

Deployment

A deployment is a Kubernetes resource where you specify information about other resources or capabilities that are required to run your app, such as services, persistent storage, or annotations. You document a deployment in a configuration YAML file, and then apply it to the cluster. The Kubernetes master configures the resources and deploys containers into pods on the worker nodes with available capacity.

Define update strategies for your app, including the number of pods that you want to add during a rolling update and the number of pods that can be unavailable at a time. When you perform a rolling update, the deployment checks whether the update is working and stops the rollout when failures are detected.

A deployment is just one type of workload controller that you can use to manage pods. For help choosing among your options, see What type of Kubernetes objects can I make for my app?. For more information about deployments, see the Kubernetes documentation.
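
A deployment manifest that also defines the rolling update behavior described above might look like the following sketch; the name, image path, and counts are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod is added during an update
      maxUnavailable: 1  # at most one pod can be unavailable at a time
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        # Hypothetical image path in a private registry.
        image: us.icr.io/my-namespace/my-app:1.0
        ports:
        - containerPort: 8080
```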

Pod

Every containerized app that is deployed into a cluster is deployed, run, and managed by a Kubernetes resource called a pod. Pods represent small deployable units in a Kubernetes cluster and are used to group the containers that must be treated as a single unit. Usually, each container is deployed in its own pod. However, an app might require a container and other helper containers to be deployed into one pod so that those containers can be addressed by using the same private IP address.
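
For example, a pod that runs an app container alongside a helper container might look like the following sketch; all names and images are hypothetical. Because both containers share the pod's network, they can reach each other over localhost and are addressed by the same private IP address:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-with-helper
spec:
  containers:
  - name: app                 # the main app container
    image: us.icr.io/my-namespace/my-app:1.0
    ports:
    - containerPort: 8080
  - name: log-forwarder       # hypothetical helper (sidecar) container
    image: us.icr.io/my-namespace/forwarder:1.0
```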

App

An app might refer to a complete app or a component of an app. You might deploy components of an app in separate pods or separate worker nodes. For more information, see Planning app deployments and Developing Kubernetes-native apps.

To dive deeper into Kubernetes, see the Kubernetes documentation.

What are containers?

Containers provide a standard way to package your application's code, configurations, and dependencies into a single unit that can run as a resource-isolated process on a compute server. To run your app on IBM Cloud, you must first containerize your app by creating a container image that you store in a container registry.

Review the following terms to get more familiar with the concepts.

Container
A container is an app that is packaged with all its dependencies so that the app can be moved between environments and run without changes. Unlike virtual machines, containers don't virtualize a device, its operating system, or the underlying hardware. Only the app code, runtime, system tools, libraries, and settings are packaged inside the container. Containers run as isolated processes on compute hosts and share the host operating system and its hardware resources. This approach makes a container more lightweight, portable, and efficient than a virtual machine.
Image
A container image is a package that includes the files, configuration settings, and libraries to run a container. An image is built from a text file called a Dockerfile. Dockerfiles define how to build the image and which artifacts to include in it. The artifacts that are included in a container consist of the app code, configuration settings, and any dependencies.
Registry
An image registry is a place to store, retrieve, and share container images. Registries can be either publicly available to anyone or privately available to a small group of users. For enterprise applications, use a private registry like IBM Cloud Container Registry to protect your images from being used by unauthorized users.
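
As an illustration, a minimal Dockerfile for a hypothetical Node.js app might look like the following; the base image, file names, and port are assumptions, not a prescribed layout:

```dockerfile
# Base image provides the runtime and system tools.
FROM node:20-alpine
WORKDIR /app
# Install only production dependencies first to use the build cache.
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the app code and configuration settings into the image.
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

Building this file produces an image that packages the app code, its dependencies, and configuration settings, which you can then push to a registry.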

What compute host infrastructure does IBM Cloud Kubernetes Service offer?

With IBM Cloud® Kubernetes Service, you can create a cluster by using infrastructure from the following providers. All the worker nodes in a cluster must be from the same provider.

Infrastructure overview: VPC
Component Description
Overview Create clusters on virtual servers in your own Virtual Private Cloud (VPC).
Supported container platforms Red Hat OpenShift or Kubernetes
Compute and worker node resources Worker nodes are created as virtual machines by using either shared infrastructure or dedicated hosts. Unlike classic clusters, VPC cluster worker nodes on shared hardware don't appear in your infrastructure portal or a separate infrastructure bill. Instead, you manage all maintenance and billing activity for the worker nodes through IBM Cloud Kubernetes Service. Your worker node instances are connected to certain VPC instances that do reside in your infrastructure account, such as the VPC subnet or storage volumes. For dedicated hosts, the dedicated host price covers the vCPU, memory, and any instance storage to be used by any workers placed on the host. Note that all Intel® x86-64 servers have Hyper-Threading enabled by default. For more information, see Intel Hyper-Threading Technology.
Security Clusters on shared hardware run in an isolated environment in the public cloud. Clusters on dedicated hosts do not run in a shared environment; instead, only your clusters are present on your hosts. Network access control lists protect the subnets that provide the floating IPs for your worker nodes.
High availability The master includes three replicas for high availability. Further, if you create your cluster in a multizone metro, the master replicas are spread across zones and you can also spread your worker pools across zones.
Reservations Reservations aren't available for VPC.
Cluster administration VPC worker nodes can't be reloaded or updated. Instead, use the worker replace --update CLI command or API operation to replace worker nodes that are outdated or in a troubled state.
Cluster networking Unlike classic infrastructure, the worker nodes of your VPC cluster are attached to VPC subnets and assigned private IP addresses. The worker nodes are not connected to the public network, which instead is accessed through a public gateway, floating IP, or VPN gateway. For more information, see Overview of VPC networking in IBM Cloud Kubernetes Service.
Apps and container platform You can choose to create community Kubernetes or Red Hat OpenShift clusters to manage your containerized apps. Your app build processes don't differ because of the infrastructure provider, but how you expose the app does.
App networking All pods that are deployed to a worker node are assigned a private IP address in the 172.30.0.0/16 range and are routed between worker nodes on the worker node private IP address of the private VPC subnet. To expose the app on the public network, you can create a Kubernetes LoadBalancer service, which provisions a VPC load balancer and public hostname address for your worker nodes. For more information, see Exposing apps with VPC load balancers.
Storage You can choose from non-persistent and persistent storage solutions such as file, block, object, and software-defined storage. For more information, see Planning highly available persistent storage.
User access You can use IBM Cloud IAM access policies to authorize users to create infrastructure, manage your cluster, and access cluster resources. The cluster can be in a different resource group than the VPC.
Integrations VPC supports a select list of supported IBM Cloud services, add-ons, and third-party integrations. For a list, see Supported IBM Cloud and third-party integrations.
Locations and versions VPC clusters are available worldwide in multizone locations.
Service interface VPC clusters are supported by the next version (v2) of the IBM Cloud Kubernetes Service API, and you can manage your VPC clusters through the same CLI and console as classic clusters.
Service compliance See the VPC section in What standards does the service comply to?.
Service limitations See Service limitations. For VPC-specific limitations in IBM Cloud Kubernetes Service, see VPC cluster limitations. For general VPC infrastructure provider limitations, see Limitations.
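
The App networking row above mentions creating a Kubernetes LoadBalancer service to expose an app on the public network; in manifest form, that is a standard service with type set to LoadBalancer. The names and ports here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-public
spec:
  type: LoadBalancer   # in a VPC cluster, provisions a VPC load balancer
  selector:
    app: my-app        # routes traffic to pods that carry this label
  ports:
  - port: 80
    targetPort: 8080
```
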
Infrastructure overview: Satellite
Component Description
Overview Create clusters on your own hardware, IBM Cloud Classic or VPC, or on virtual servers in another cloud provider like AWS or Azure.
Supported container platforms Red Hat OpenShift
Compute and worker node resources Worker nodes can be virtual machines on either shared infrastructure or dedicated hosts, or even bare metal servers. You manage maintenance and billing activity for the worker node infrastructure through your host infrastructure provider, whether that is IBM Cloud, your own on-premises hardware, or another cloud provider. You also manage billing for the Satellite location itself through IBM Cloud. For more information about pricing, see What am I charged for when I use IBM Cloud Satellite?.
Security See Security and compliance.
High availability See About high availability and recovery.
Reservations Reservations aren't available for Satellite.
Cluster administration See Updating hosts that are assigned as worker nodes.
Cluster networking If you attach IBM Cloud Classic or VPC hosts to your location, refer to those descriptions.
Apps and container platform You can create Red Hat OpenShift clusters to manage your containerized apps. Your app build processes don't differ because of the infrastructure provider, but how you expose the app does. For more information, see Choosing an app exposure service.
App networking All pods that are deployed to a worker node are assigned a private IP address in the 172.30.0.0/16 range by default. You can avoid subnet conflicts with the network that you use to connect to your location by specifying a custom subnet CIDR that provides the private IP addresses for your pods. To expose an app, see Exposing apps in Satellite clusters.
Storage Bring your own storage drivers or deploy one of the supported storage templates. For more information, see Understanding Satellite storage.
User access You can use IBM Cloud IAM access policies to authorize users to create IBM Cloud infrastructure, manage your cluster, and access cluster resources. For more information, see Managing access overview. You can also further control access to your host infrastructure in policies provided by your infrastructure provider.
Integrations For cluster integrations, see Supported IBM Cloud and third-party integrations. For supported Satellite service integrations, see Supported Satellite IBM Cloud services.
Locations and versions Clusters are managed from one of the supported IBM Cloud locations. However, you can deploy worker nodes to your own location, an IBM Cloud data center, or another cloud provider. For more information, see Understanding locations and hosts.
Service interface Satellite clusters are supported by the global IBM Cloud Kubernetes Service API, the IBM Cloud Kubernetes Service CLI, and the Satellite CLI. You can also manage your clusters from the console.
Service compliance For clusters, see What standards does the service comply to?. For Satellite, see Security and compliance.
Service limitations See Limitations, default settings, and usage requirements.
Infrastructure overview: Classic
Component Description
Overview Create clusters in a classic compute, networking, and storage environment in IBM Cloud infrastructure.
Supported container platforms Red Hat OpenShift or Kubernetes
Compute and worker node resources Virtual, bare metal, and software-defined storage machines are available for your worker nodes. Your worker node instances reside in your IBM Cloud infrastructure account, but you can manage them through IBM Cloud Kubernetes Service. You own the worker node instances.
Security Built-in security features that help you protect your cluster infrastructure, isolate resources, and ensure security compliance. For more information, see the classic Network Infrastructure documentation.
High availability For both classic and VPC clusters, the master includes three replicas for high availability. Further, if you create your cluster in a multizone metro, the master replicas are spread across zones and you can also spread your worker pools across zones. For more information, see High availability for IBM Cloud Kubernetes Service.
Reservations Create a reservation with a contract for a 1- or 3-year term for classic worker nodes to lock in a reduced cost for the life of the contract. Typical savings range between 30% and 50% compared to regular worker node costs.
Cluster administration Classic clusters support the entire set of v1 API operations, such as resizing worker pools, reloading worker nodes, and updating masters and worker nodes across major, minor, and patch versions. When you delete a cluster, you can choose to remove any attached subnet or storage instances.
Cluster networking Your worker nodes are provisioned on private VLANs that provide private IP addresses to communicate on the private IBM Cloud infrastructure network. For communication on the public network, you can also provision the worker nodes on a public VLAN. Communication to the cluster master can be on the public or private cloud service endpoint. For more information, see Understanding VPC cluster network basics or Understanding Classic cluster network basics.
Apps and container platform You can choose to create community Kubernetes or Red Hat OpenShift clusters to manage your containerized apps. Your app build processes don't differ because of the infrastructure provider, but how you expose the app does. For more information, see Choosing an app exposure service.
App networking All pods that are deployed to a worker node are assigned a private IP address in the 172.30.0.0/16 range and are routed between worker nodes on the worker node private IP address of the private VLAN. To expose the app on the public network, your cluster must have worker nodes on the public VLAN. Then, you can create a NodePort, LoadBalancer (NLB), or Ingress (ALB) service. For more information, see Planning in-cluster and external networking for apps.
Storage You can choose from non-persistent and persistent storage solutions such as file, block, object, and software-defined storage. For more information, see Planning highly available persistent storage.
User access To create classic infrastructure clusters, you must set up infrastructure credentials for each region and resource group. To let users manage the cluster, use IBM Cloud IAM platform access roles. To grant users access to cluster resources, use IBM Cloud IAM service access roles, which correspond with Kubernetes RBAC roles.
Integrations You can extend your cluster and app capabilities with a variety of IBM Cloud services, add-ons, and third-party integrations. For a list, see Supported IBM Cloud and third-party integrations.
Locations and versions Classic clusters are available worldwide.
Service interface Classic clusters are fully supported in the Kubernetes Service v1 API, CLI, and console.
Service compliance See the classic section in What standards does the service comply to?.
Service limitations See Service limitations. Feature-specific limitations are documented by section.

What are the benefits of using the service?

Choice of container platform provider
Deploy clusters with Red Hat OpenShift or community Kubernetes installed as the container platform orchestrator.
Choose the developer experience that fits your company, or run workloads across both Red Hat OpenShift and community Kubernetes clusters.
Built-in integrations from the IBM Cloud console to the Kubernetes dashboard or Red Hat OpenShift web console.
Single view and management experience of all your Red Hat OpenShift or community Kubernetes clusters from IBM Cloud.
Single-tenant Kubernetes clusters with compute, network, and storage infrastructure isolation
Create your own customized infrastructure that meets the requirements of your organization.
Choose between infrastructure providers.
Provision a dedicated and secured Kubernetes master, worker nodes, virtual networks, and storage by using the resources provided by IBM Cloud infrastructure.
Fully managed Kubernetes master that is continuously monitored and updated by IBM to keep your cluster available.
Option to provision worker nodes as bare metal servers for compute-intensive workloads such as data, GPU, and AI.
Store persistent data, share data between Kubernetes pods, and restore data when needed with the integrated and secure volume service.
Benefit from full support for all native Kubernetes APIs.
Multizone clusters to increase high availability
Easily manage worker nodes of the same flavor (CPU, memory, virtual or physical) with worker pools.
Guard against zone failure by spreading worker nodes evenly across multiple zones and by using anti-affinity pod deployments for your apps.
Decrease your costs by using multizone clusters instead of duplicating the resources in a separate cluster.
Benefit from automatic load balancing across apps with the multizone load balancer (MZLB) that is set up automatically for you in each zone of the cluster.
Highly available masters
Reduce cluster downtime such as during master updates with highly available masters that are provisioned automatically when you create a cluster.
Spread your masters across zones in a multizone cluster to protect your cluster from zonal failures.
Image security compliance with Vulnerability Advisor
Set up your own repo in a secured Docker private image registry where images are stored and shared by all users in the organization.
Benefit from automatic scanning of images in your private IBM Cloud registry.
Review recommendations specific to the operating system used in the image to fix potential vulnerabilities.
Continuous monitoring of the cluster health
Use the cluster dashboard to quickly see and manage the health of your cluster, worker nodes, and container deployments.
Find detailed consumption metrics by using IBM Cloud® Monitoring and quickly expand your cluster to meet workload demands.
Review logging information by using IBM® Log Analysis to see detailed cluster activities.
Secure exposure of apps to the public
Choose between a public IP address, an IBM provided route, or your own custom domain to access services in your cluster from the internet.
IBM Cloud service integration
Add extra capabilities to your app through the integration of IBM Cloud services, such as Watson APIs, Blockchain, data services, or Internet of Things.

Comparison between Red Hat OpenShift and Kubernetes clusters

Both Red Hat OpenShift on IBM Cloud and IBM Cloud Kubernetes Service clusters are production-ready container platforms that are tailored for enterprise workloads. The following table compares and contrasts some common characteristics that can help you choose which container platform is best for your use case.

Characteristics of Kubernetes and Red Hat OpenShift clusters

  • Complete cluster management experience through the IBM Cloud Kubernetes Service automation tools (API, CLI, console): Kubernetes and Red Hat OpenShift
  • Worldwide availability in single and multizones: Kubernetes and Red Hat OpenShift
  • Consistent container orchestration across hybrid cloud providers: Kubernetes and Red Hat OpenShift
  • Access to IBM Cloud services such as AI: Kubernetes and Red Hat OpenShift
  • Software-defined storage Portworx solution available for multizone data use cases: Kubernetes and Red Hat OpenShift
  • Create a cluster in an IBM Virtual Private Cloud (VPC): Kubernetes and Red Hat OpenShift
  • Latest Kubernetes distribution: Kubernetes only
  • Scope IBM Cloud IAM access policies to access groups for service access roles that sync to cluster RBAC: Kubernetes only
  • Classic infrastructure cluster on only the private network: Kubernetes only
  • GPU bare metal worker nodes: Kubernetes and Red Hat OpenShift
  • Integrated IBM Cloud Paks and middleware: Red Hat OpenShift only
  • Built-in container image streams, builds, and tooling: Red Hat OpenShift only
  • Integrated CI/CD with Jenkins: Red Hat OpenShift only
  • Stricter app security context set up by default: Red Hat OpenShift only
  • Simplified Kubernetes developer experience, with an app console that is suited for beginners: Red Hat OpenShift only
  • Supported operating system: see Kubernetes version information or Red Hat OpenShift version information
  • Preferred external traffic networking: Ingress (Kubernetes) or Router (Red Hat OpenShift)
  • Secured routes encrypted with Hyper Protect Crypto Services: Red Hat OpenShift only

Related resources

Review the following resources to learn more about Kubernetes concepts and terminology.

  • Learn how Kubernetes and IBM Cloud Kubernetes Service work together by completing this course.