Understanding VPC cluster networking
When you create your cluster, you must choose a networking setup so that certain cluster components can communicate with each other and with networks or services outside of the cluster.
- Worker-to-worker communication: All worker nodes must be able to communicate with each other on the private network through VPC subnets.
- Worker-to-master and user-to-master communication: Your worker nodes and your authorized cluster users can communicate with the Kubernetes master securely over virtual private endpoints or cloud service endpoints.
- Worker communication to other services or networks: Allow your worker nodes to securely communicate with other IBM Cloud services, such as IBM Cloud® Container Registry, with on-premises networks, with other VPCs, and with classic infrastructure resources.
- External communication to apps that run on worker nodes: Allow public or private requests into the cluster as well as requests out of the cluster to a public endpoint.
When you're done with this page, try out the quiz.
Worker-to-worker communication using VPC subnets
Before you create a VPC cluster for the first time, you must create a VPC subnet in each zone where you want to deploy worker nodes. A VPC subnet is a specified private IP address range (CIDR block) and configures a group of worker nodes and pods as if they are attached to the same physical wire.
When you create a cluster, you specify an existing VPC subnet for each zone. Each worker node that you add to a cluster is deployed with a private IP address from the VPC subnet in that zone. After the worker node is provisioned, the worker node IP address persists after a reboot operation, but changes after replace and update operations.
Subnets provide a channel for connectivity among the worker nodes within the cluster. Additionally, any system that is connected to any of the private subnets in the same VPC can communicate with workers. For example, all subnets in one VPC can communicate through private layer 3 routing with a built-in VPC router. If you have multiple clusters that must communicate with each other, you can create the clusters in the same VPC. However, if your clusters don't need to communicate, you can achieve better network segmentation by creating the clusters in separate VPCs. You can also create access control lists (ACLs) for your VPC subnets to mediate traffic on the private network. ACLs consist of inbound and outbound rules that define which ingress and egress is permitted for each VPC subnet.
If your worker nodes must access a public endpoint outside of the cluster, you can enable a public gateway on the VPC subnet that the worker nodes are deployed to. A public gateway can be attached to or detached from a subnet at any time.
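For example, the following hedged sketch creates a subnet in each zone and attaches a public gateway to one of them by using the VPC CLI plug-in. The names, zones, and CIDR blocks are placeholders, and the exact flag for attaching the gateway to a subnet varies by CLI plug-in version, so verify with `ibmcloud is subnet-update --help`.

```sh
# Sketch only: create one subnet per zone for the cluster's worker nodes.
# Resource names, zones, and CIDR blocks are illustrative placeholders.
ibmcloud is subnet-create my-subnet-us-south-1 my-vpc --zone us-south-1 --ipv4-cidr-block 10.240.0.0/24
ibmcloud is subnet-create my-subnet-us-south-2 my-vpc --zone us-south-2 --ipv4-cidr-block 10.240.64.0/24

# Optionally give one zone's workers outbound public access by creating a
# public gateway and attaching it to that zone's subnet. The attach flag
# differs across CLI plug-in versions; check `ibmcloud is subnet-update --help`.
ibmcloud is public-gateway-create my-gateway-us-south-1 my-vpc us-south-1
ibmcloud is subnet-update my-subnet-us-south-1 --public-gateway-id <gateway_ID>
```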
The default IP address range for VPC subnets is 10.0.0.0 – 10.255.255.255. For a list of IP address ranges per VPC zone, see the VPC default address prefixes.
If you enable classic access when you create your VPC, classic access default address prefixes automatically determine the IP ranges of any subnets that you create. However, the default IP ranges for classic access VPC subnets conflict with the subnets for the IBM Cloud Kubernetes Service control plane. Instead, you must create the VPC without the automatic default address prefixes, and then create your own address prefixes and subnets within those ranges for your cluster.
Need to create your cluster by using custom-range subnets? Check out this guidance on custom address prefixes. If you use custom-range subnets for your worker nodes, you must ensure that your worker node subnets don't overlap with your cluster's pod subnet.
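If you need to create your own address prefixes and subnets, for example for a classic access VPC or for custom-range subnets, the following hedged sketch shows one possible flow with the VPC CLI plug-in. The `--address-prefix-management manual` option, the placeholder names, zones, and CIDRs are assumptions to illustrate the pattern; verify each command with its `--help` output.

```sh
# Sketch only: create a VPC without default address prefixes, then define
# your own prefixes and carve cluster subnets out of them.
# Add --classic-access here if the VPC must reach classic infrastructure.
ibmcloud is vpc-create my-vpc --address-prefix-management manual

# Define a custom address prefix per zone (placeholder CIDR).
ibmcloud is vpc-address-prefix-create my-prefix-us-south-1 my-vpc us-south-1 192.168.0.0/18

# Create the worker-node subnet inside the custom prefix range.
ibmcloud is subnet-create my-subnet-us-south-1 my-vpc --zone us-south-1 --ipv4-cidr-block 192.168.0.0/24
```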
Do not delete the subnets that you attach to your cluster during cluster creation or when you add worker nodes in a zone. If you delete a VPC subnet that your cluster used, any load balancers that use IP addresses from the subnet might experience issues, and you might be unable to create new load balancers.
When you create VPC subnets for your clusters, keep in mind the following features and limitations. For more information about VPC subnets, see Characteristics of subnets in the VPC.
- The default CIDR size of each VPC subnet is `/24`, which can support up to 253 worker nodes. If you plan to deploy more than 250 worker nodes per zone in one cluster, consider creating a subnet of a larger size.
- After you create a VPC subnet, you can't resize it or change its IP range.
- Multiple clusters in the same VPC can share subnets.
- VPC subnets are bound to a single zone and can't span multiple zones or regions.
- After you create a subnet, you can't move it to a different zone, region, or VPC.
- If you have worker nodes that are attached to an existing subnet in a zone, you can't change the subnet for that zone in the cluster.
- The `172.16.0.0/16`, `172.18.0.0/16`, `172.19.0.0/16`, and `172.20.0.0/16` ranges are prohibited.
Worker-to-master and user-to-master communication using virtual private endpoints or cloud service endpoints
IBM Cloud Kubernetes Service uses different types of service endpoints to establish a connection from authorized cluster users and worker nodes to the Kubernetes master. Authorized cluster users communicate with the Kubernetes master through cloud service endpoints. Depending on your cluster version, worker nodes communicate with the Kubernetes master through cloud service endpoints or VPC virtual private endpoints.
Before you create a cluster, you must enable your account to use service endpoints. To enable service endpoints, run `ibmcloud account update --service-endpoint-enable true`.
In VPC clusters in IBM Cloud Kubernetes Service, you can't disable the private cloud service endpoint or set up a cluster with the public cloud service endpoint only.
Worker-to-master communication in VPC clusters
Worker node communication to the Kubernetes master is established differently based on your cluster version.
- Worker node communication to the Kubernetes master is established over the VPC virtual private endpoint (VPE). If the public cloud service endpoint is also enabled, worker-to-master traffic is established half over the public endpoint and half over the VPE for protection from potential outages of the public or private network.
To secure communication over public and private cloud service endpoints or VPEs, IBM Cloud Kubernetes Service automatically sets up a Konnectivity connection between the Kubernetes master and worker nodes when the cluster is created. Worker nodes securely talk to the master through TLS certificates, and the master talks to workers through the Konnectivity connection.
User-to-master communication in VPC clusters
You can allow authorized cluster users to communicate with the Kubernetes master by enabling the public and private cloud service endpoints, or the private cloud service endpoint only.
- Public and private cloud service endpoints: By default, all calls to the master that are initiated by authorized cluster users are routed through the public cloud service endpoint. If authorized cluster users are in your VPC network or are connected through a VPC VPN connection, the master is privately accessible through the private cloud service endpoint.
- Private cloud service endpoint only: To access the master through the private cloud service endpoint, authorized cluster users must either be in your VPC network or connected through a VPC VPN connection.
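For example, the following is a hedged sketch of how an authorized user who is in the VPC, or connected over the VPC VPN, might target the private endpoint when downloading the cluster configuration. The `--endpoint` option and its accepted values depend on your CLI plug-in version, so confirm with `ibmcloud ks cluster config --help`; the cluster name is a placeholder.

```sh
# Sketch only: retrieve a kubeconfig that points kubectl at the cluster's
# private cloud service endpoint. The cluster name is a placeholder.
ibmcloud ks cluster config --cluster my-cluster --endpoint private
kubectl get nodes   # succeeds only from within the VPC or over the VPC VPN
```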
You can secure network access to your cluster's service endpoints by using context-based restrictions. Only authorized requests to your cluster master that originate from subnets in the allowlist are permitted through the cluster's service endpoints. Use of context-based restrictions helps prevent unauthorized scanning activities. For more information, see Using context-based restrictions.
Worker communication to other services or networks
Allow your worker nodes to securely communicate with other IBM Cloud services, on-premises networks, other VPCs, and IBM Cloud classic infrastructure resources.
Communication with other IBM Cloud services over the private or public network
Your worker nodes can automatically and securely communicate with other IBM Cloud services that support private cloud service endpoints, such as IBM Cloud® Container Registry, over the private network. If an IBM Cloud service does not support private cloud service endpoints, your worker nodes must be connected to a subnet that has a public gateway attached to it. The pods on those worker nodes can securely communicate with the services over the public network through the subnet's public gateway.
Note that if you use access control lists (ACLs) for your VPC subnets, you must create inbound or outbound rules to allow your worker nodes to communicate with these services, as in the sketch after this list.
- If you don't attach public gateways to your subnets, you can create inbound rules to allow ingress from services that support private cloud service endpoints.
- If you attach public gateways to your subnets, you can create inbound and outbound rules to allow ingress from and egress to services that support public cloud service endpoints only.
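As a rough sketch of what such ACL rules can look like with the VPC CLI plug-in: the ACL name and CIDRs are placeholders, and the positional argument order and flag names vary across plug-in versions, so treat this as illustrative and confirm with `ibmcloud is network-acl-rule-add --help`.

```sh
# Sketch only: allow workers on this subnet's ACL to reach an external
# HTTPS endpoint and accept the return traffic. CIDRs are placeholders.
ibmcloud is network-acl-rule-add my-acl allow outbound tcp 10.240.0.0/24 0.0.0.0/0 \
  --destination-port-min 443 --destination-port-max 443
ibmcloud is network-acl-rule-add my-acl allow inbound tcp 0.0.0.0/0 10.240.0.0/24 \
  --source-port-min 443 --source-port-max 443
```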
Communication with resources in on-premises data centers
To connect your cluster with your on-premises data center, you can use the IBM Cloud® Virtual Private Cloud VPN or IBM Cloud® Direct Link.
- To get started with the Virtual Private Cloud VPN, see Configure an on-prem VPN gateway and Create a VPN gateway in your VPC, and create the connection between the VPC VPN gateway and your local VPN gateway. If you have a multizone cluster, you must create a VPN gateway on a subnet in each zone where you have worker nodes.
- To get started with Direct Link, see Ordering IBM Cloud Direct Link Dedicated. In step 8, you can create a network connection to your VPC to be attached to the Direct Link gateway.
If you plan to connect your cluster to on-premises networks, check out the following helpful information:
- You might have subnet conflicts with the IBM-provided default 172.30.0.0/16 range for pods and 172.21.0.0/16 range for services. You can avoid subnet conflicts when you create a cluster from the CLI by specifying a custom subnet CIDR for pods in the `--pod-subnet` option and a custom subnet CIDR for services in the `--service-subnet` option, as in the sketch after this list.
- If your VPN solution preserves the source IP addresses of requests, you can create custom static routes to ensure that your worker nodes can route responses from your cluster back to your on-premises network.
- Note that the `172.16.0.0/16`, `172.18.0.0/16`, `172.19.0.0/16`, and `172.20.0.0/16` subnet ranges are prohibited because they are reserved for IBM Cloud Kubernetes Service control plane functionality.
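For illustration, here is a hedged sketch of a cluster create command that sets both CIDRs. The cluster name, VPC and subnet IDs, zone, flavor, worker count, and example CIDRs are placeholders; confirm the current flags and the allowed CIDR sizes with `ibmcloud ks cluster create vpc-gen2 --help`.

```sh
# Sketch only: create a VPC cluster with non-default pod and service CIDRs
# so they don't collide with on-premises ranges. All values are placeholders.
ibmcloud ks cluster create vpc-gen2 \
  --name my-cluster \
  --vpc-id <vpc_ID> \
  --subnet-id <subnet_ID> \
  --zone us-south-1 \
  --flavor bx2.4x16 \
  --workers 3 \
  --pod-subnet 10.10.0.0/16 \
  --service-subnet 10.11.0.0/24
```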
Communication with resources in other VPCs
To connect an entire VPC to another VPC in your account, you can use the IBM Cloud VPC VPN or IBM Cloud® Transit Gateway.
- To get started with the IBM Cloud VPC VPN, follow the steps in Connecting two VPCs using VPN to create a VPN gateway on a subnet in each VPC and create a VPN connection between the two gateways. Note that if you have customized the access control lists (ACLs) or security groups in your VPC, you must ensure that the ACLs and security groups allow your worker nodes to communicate with the worker nodes in the other VPC.
- To get started with IBM Cloud Transit Gateway, see the Transit Gateway documentation. Transit Gateway instances can be configured to route between VPCs that are in the same region (local routing) or VPCs that are in different regions (global routing).
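As a rough sketch of the Transit Gateway flow: the command names and flags come from the `tg` CLI plug-in and may differ by version, and the gateway name, region, and VPC CRNs are placeholders, so verify each command with its `--help` output.

```sh
# Sketch only: create a local transit gateway and attach two VPCs in the
# same region so their workloads can route to each other privately.
ibmcloud tg gateway-create --name my-transit-gateway --location us-south --routing local
ibmcloud tg connection-create <gateway_ID> --name vpc-a --network-type vpc --network-id <vpc_a_CRN>
ibmcloud tg connection-create <gateway_ID> --name vpc-b --network-type vpc --network-id <vpc_b_CRN>
```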
Communication with IBM Cloud classic resources
If you need to connect your cluster to resources in your IBM Cloud classic infrastructure, you can set up a VPC with classic access or use IBM Cloud Transit Gateway.
- To get started with a VPC with classic access, see Setting up access to classic infrastructure. Note that you must enable classic access when you create the VPC, and you can't convert an existing VPC to use classic access. Additionally, you can set up classic infrastructure access for only one VPC per region.
- To get started with IBM Cloud Transit Gateway, see the Transit Gateway documentation. You can connect multiple VPCs to classic infrastructure, such as by using IBM Cloud Transit Gateway to manage access from VPCs in multiple regions to resources in your IBM Cloud classic infrastructure.
External communication to apps that run on worker nodes
Allow private or public traffic requests from outside the cluster to your apps that run on worker nodes.
Private traffic to cluster apps
When you deploy an app in your cluster, you might want to make the app accessible only to users and services that are on the same private network as your cluster. Private load balancing is ideal for making your app available to requests from outside the cluster without exposing the app to the general public. You can also use private load balancing to test access, request routing, and other configurations for your app before you later expose your app to the public with public network services.
To allow private network traffic requests from outside the cluster to your apps, you can use private Kubernetes networking services, such as creating `LoadBalancer` services.
For example, when you create a Kubernetes `LoadBalancer` service in your cluster, a load balancer for VPC is automatically created in your VPC outside of your cluster. The VPC load balancer is multizonal and routes requests for your app through the private NodePorts that are automatically opened on your worker nodes. For VPC ALBs, a security group named in the form `kube-<vpc-id>` is automatically attached to your ALB instance.
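The following is a minimal sketch of such a private `LoadBalancer` service. The app name, ports, and the `service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type` annotation value are assumptions to illustrate the pattern; check the VPC load balancer documentation for the annotations that your cluster version supports.

```sh
# Sketch only: expose a hypothetical "myapp" deployment on the private
# VPC network through a VPC load balancer.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp-private-lb
  annotations:
    # Assumption: this annotation requests a private VPC load balancer.
    service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
EOF
```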
The `kube-<vpc-id>` security group that is attached to your VPC ALB is the same security group that is used for the private VPE gateway that communicates with the Kubernetes master. Do not modify this security group, as doing so might result in disruptions to the network connectivity between the cluster and the Kubernetes master. Instead, you can remove the security group from the VPC ALB and replace it with a security group that you create and manage.
For more information, see Planning private external load balancing.
Public traffic to cluster apps
To make your apps accessible from the public internet, you can use public networking services. Even though your worker nodes are connected to private VPC subnets only, the VPC load balancer that is created for public networking services can route public requests to your app on the private network by providing your app a public URL. When an app is publicly exposed, anyone that has the public URL can send a request to your app.
You can use public Kubernetes networking services, such as creating `LoadBalancer` services. For example, when you create a Kubernetes `LoadBalancer` service in your cluster, a load balancer for VPC is automatically created in your VPC outside of your cluster. The VPC load balancer is multizonal and routes requests for your app through the private NodePorts that are automatically opened on your worker nodes. For VPC ALBs, a security group named in the form `kube-<vpc-id>` is automatically attached to your ALB instance.
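The following is a minimal sketch of a public `LoadBalancer` service for a hypothetical `myapp` deployment. The app name and ports are placeholders; no annotation is shown because the VPC load balancer is public by default.

```sh
# Sketch only: expose a hypothetical "myapp" deployment to the internet
# through a public VPC load balancer.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp-public-lb
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
EOF
```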
The `kube-<vpc-id>` security group that is attached to your VPC ALB is the same security group that is used for the private VPE gateway that communicates with the Kubernetes master. Do not modify this security group, as doing so might result in disruptions to the network connectivity between the cluster and the Kubernetes master. Instead, you can remove the security group from the VPC ALB and replace it with a security group that you create and manage.
Note that a public gateway is not required on your subnets to allow inbound network traffic from the internet to `LoadBalancer` services or ALBs. Public gateways are required only to allow worker nodes to make outbound requests to public endpoints. For more information, see Planning public external load balancing.
Example scenarios for VPC cluster network setups
Now that you understand the basics of cluster networking, check out some example scenarios in which various VPC cluster network setups can meet your workload needs.
Scenario: Run internet-facing app workloads in a VPC cluster
In this scenario, you run workloads in a VPC cluster that are accessible to requests from the Internet. Public access is controlled by security groups so that end users can access your apps while unwanted public requests to your apps are denied. Additionally, your workers have automatic access to any IBM Cloud services that support private cloud service endpoints.
Worker-to-worker communication
To achieve this setup, you create VPC subnets in each zone where you want to deploy worker nodes. No public gateways are required for these subnets. Then, you create a VPC cluster that uses these VPC subnets.
Worker-to-master and user-to-master communication
You can choose to allow worker-to-master and user-to-master communication over the public and private networks, or over the private network only.
- Public and private cloud service endpoints: Communication between worker nodes and master is established over the private network through the private cloud service endpoint. By default, all calls to the master that are initiated by authorized cluster users are routed through the public cloud service endpoint.
- Private service endpoint only: Communication to master from both worker nodes and cluster users is established over the private network through the private cloud service endpoint. Cluster users must either be in your VPC network or connect through a VPC VPN connection.
Worker communication to other services or networks
If your app workload requires other IBM Cloud services, your worker nodes can automatically and securely communicate with IBM Cloud services that support private cloud service endpoints over the private VPC network.
External communication to apps that run on worker nodes
After you test your app, you can expose it to the internet by creating a public Kubernetes `LoadBalancer` service or using the default public Ingress application load balancers (ALBs). The VPC load balancer that is automatically created in your VPC outside of your cluster when you use one of these services routes traffic to your app. You can improve the security of your cluster and control public network traffic to your apps by replacing the `kube-<vpc-id>` security group, which is automatically applied to the VPC ALB, with a security group that you create and manage. When applied to ALBs, security groups control which inbound traffic is permitted to your cluster through the ALB.
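A hedged sketch of creating such a replacement security group with the VPC CLI plug-in follows. The group name, VPC, and allowed source CIDR are placeholders, and attaching the group to the ALB is a separate step (for example, from the VPC console), so verify the exact commands and flags with `--help` for your plug-in version.

```sh
# Sketch only: create a managed security group that permits only HTTPS
# from a trusted CIDR, to replace the default kube-<vpc-id> group on the ALB.
ibmcloud is security-group-create my-alb-sg my-vpc
ibmcloud is security-group-rule-add my-alb-sg inbound tcp \
  --port-min 443 --port-max 443 --remote 203.0.113.0/24
```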
Ready to get started with a cluster for this scenario? After you plan your high availability setup, see Creating VPC clusters.
Scenario: Run internet-facing app workloads in a VPC cluster with limited public egress
In this scenario, you run workloads in a VPC cluster that are accessible to requests from the Internet. Public access is controlled so that end users can access your apps while unwanted public requests to your apps are denied. However, you might need to also provide limited public egress from your worker nodes to a public endpoint, and want to ensure that this public egress is controlled and isolated in your cluster. For example, you might need your app pods to access an IBM Cloud service that does not support private cloud service endpoints, and must be accessed over the public network.
Worker-to-worker communication
For example, to achieve this setup in a multizone cluster that has worker nodes in two zones, you create a VPC subnet in one zone with no public gateway attached, and a VPC subnet in another zone that does have a public gateway attached. Then, you create a VPC cluster that uses these VPC subnets and zones.
Worker-to-master and user-to-master communication
When you create the cluster, you can choose to allow worker-to-master and user-to-master communication over the public and private networks, or over the private network only.
- Public and private cloud service endpoints: Communication between worker nodes and master is established over the private network through the private cloud service endpoint. By default, all calls to the master that are initiated by authorized cluster users are routed through the public cloud service endpoint.
- Private service endpoint only: Communication to master from both worker nodes and cluster users is established over the private network through the private cloud service endpoint. Cluster users must either be in your VPC network or connect through a VPC VPN connection.
Worker communication to other services or networks
After the cluster is created, you create a worker pool that is deployed only to the zone where the subnet has an attached public gateway. Any app pods that are deployed to the worker nodes in this pool can make requests to a public endpoint through the public gateway. For example, if you want your app pods to send logs to IBM Cloud® Activity Tracker (which does not support private endpoints), you can add an affinity rule for the worker pool ID label to the app deployment, as in the sketch that follows. This affinity rule ensures that the app pods that require public egress are confined to only one worker pool on one subnet. If you deploy other apps that don't require public egress, you can instead add anti-affinity rules to the app deployment so that the app pods deploy only to worker pools on subnets without a public gateway.
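A minimal sketch of such an affinity rule, assuming that worker nodes carry the `ibm-cloud.kubernetes.io/worker-pool-id` label and using a placeholder pool ID, app name, and image:

```sh
# Sketch only: pin a hypothetical "log-forwarder" deployment to the worker
# pool whose nodes sit on the subnet with the public gateway.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: log-forwarder
spec:
  replicas: 2
  selector:
    matchLabels:
      app: log-forwarder
  template:
    metadata:
      labels:
        app: log-forwarder
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: ibm-cloud.kubernetes.io/worker-pool-id
                operator: In
                values:
                - <public_egress_worker_pool_ID>   # placeholder pool ID
      containers:
      - name: log-forwarder
        image: icr.io/namespace/log-forwarder:latest   # placeholder image
EOF
```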
If your app workload requires other IBM Cloud services that support private cloud service endpoints, your worker nodes can automatically and securely communicate with these services over the private VPC network without using a public gateway.
External communication to apps that run on worker nodes
After you test your app, you can expose it to the internet by creating a public Kubernetes `LoadBalancer` service or using the default public Ingress application load balancers (ALBs). The VPC load balancer that is automatically created in your VPC outside of your cluster when you use one of these services routes traffic to your app. You can improve the security of your cluster and control public traffic to your apps by modifying the default VPC security group for your cluster. Security groups consist of rules that define which inbound traffic is permitted for your worker nodes. For example, you can use inbound rules to control incoming public traffic to your apps through the VPC load balancer, and outbound rules to control outgoing requests from your apps through the public gateway.
Ready to get started with a cluster for this scenario? After you plan your high availability setup, see Creating VPC clusters.
Scenario: Extend your on-premises data center to a VPC cluster
In this scenario, you run workloads in a VPC cluster. However, you want these workloads to be accessible only to services, databases, or other resources in your private networks in an on-premises data center. Your cluster workloads might need to access a few other IBM Cloud services that support communication over the private network.
Worker-to-worker communication
To achieve this setup, you create VPC subnets in each zone where you want to deploy worker nodes. No public gateways are required for these subnets. Then, you create a VPC cluster that uses these VPC subnets.
Note that you might have subnet conflicts between the default ranges for worker nodes, pods, and services, and the subnets in your on-premises networks. When you create your VPC subnets, you can choose custom address prefixes and then create your cluster by using these subnets. Additionally, you can specify a custom subnet CIDR for pods and services by using the `--pod-subnet` and `--service-subnet` options in the `ibmcloud ks cluster create` command when you create your cluster.
Worker-to-master and user-to-master communication
When you create the cluster, you enable the private cloud service endpoint only to allow worker-to-master and user-to-master communication over the private network. Cluster users must either be in your VPC network or connect through a VPC VPN connection.
Worker communication to other services or networks
To connect your cluster with your on-premises data center, you can set up the VPC VPN service. The IBM Cloud VPC VPN connects your entire VPC to an on-premises data center. If your app workload requires other IBM Cloud services that support private cloud service endpoints, your worker nodes can automatically and securely communicate with these services over the private VPC network.
External communication to apps that run on worker nodes
After you test your app, you can expose it to the private network by creating a private Kubernetes `LoadBalancer` service or using the default private Ingress application load balancers (ALBs). The VPC load balancer that is automatically created in your VPC outside of your cluster when you use one of these services routes traffic to your app. Note that the VPC load balancer exposes your app to the private network only, so any on-premises system with a connection to the VPC subnet can access the app. You can improve the security of your cluster and control traffic to your apps by modifying the default VPC security group for your cluster. Security groups consist of rules that define which inbound traffic is permitted for your worker nodes.
Ready to get started with a cluster for this scenario? After you plan your high availability setup, see Creating VPC clusters.
Next steps
Test your knowledge with a quiz.
To continue the planning process, learn about protecting sensitive information in your cluster by making decisions about the level of encryption you must configure. If you're ready to get started setting up networking, move on to Understanding Secure by Default Cluster VPC Networking.