Understanding VPC security groups

This information applies to Virtual Private Cloud clusters that run version 1.29 and earlier.

VPC clusters use various security groups to protect cluster components. These security groups are automatically created and attached to cluster workers, load balancers, and cluster-related VPE gateways. You can modify or replace these security groups to meet your specific requirements. However, any modifications must be made according to these guidelines to avoid network connectivity disruptions.

Default behavior
Virtual Private Cloud security groups filter traffic at the hypervisor level. Security group rules are not applied in a particular order; requests to your worker nodes are permitted only if they match one of the rules that you specify. Security groups are stateful: when you allow traffic in one direction by creating an inbound or outbound rule, responses are also permitted in the opposite direction.

Security groups are additive. If your worker nodes are attached to more than one security group, the rules from all of those security groups are applied to the worker nodes. Newer cluster versions might have more rules in the kube-<clusterID> security group than older cluster versions. Security group rules are added to improve the security of the service and do not break functionality.
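
For example, you can review the rules that are currently applied to any of these security groups with the IBM Cloud CLI. This is a minimal sketch; the security group name is a placeholder for a name or ID from your own account:

    ibmcloud is security-group-rules kube-<clusterID>
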
Limitations
Because the worker nodes of your VPC cluster exist in a service account and aren't listed in the VPC infrastructure dashboard, you can't create a new security group and apply it to your worker node instances. You can only modify the existing security groups.

If you modify the default VPC security groups, you must, at minimum, include the required inbound and outbound rules so that your cluster works properly.

Virtual private endpoint (VPE) gateways

When the first VPC cluster that runs IBM Cloud Kubernetes Service version 1.28 or later is created in a given VPC, or when a cluster in that VPC has its master updated to version 1.28 or later, several shared VPE gateways are created for various IBM Cloud services. Only one shared VPE gateway of each type is created per VPC, and all clusters in the VPC share the same VPE gateway for these services. These shared VPE gateways are assigned a single reserved IP from each zone that the cluster workers are in.

The following VPE gateways are created automatically when you create a VPC cluster.

Table 1. VPE gateways
The table shows the VPE gateways created for VPC clusters. The first column includes the name of the gateway. The second column includes a brief description.
VPE gateway | Description
IBM Cloud Container Registry (shared) | Pull container images from IBM Cloud Container Registry to apps running in your cluster.
IBM Cloud Object Storage s3 gateway (shared) | Access the COS APIs.
IBM Cloud Object Storage config gateway (shared) | Back up container images to IBM Cloud Object Storage.
IBM Cloud Kubernetes Service | Access the IBM Cloud Kubernetes Service APIs to interact with and manage your cluster. †

† All supported VPC clusters have a VPE gateway for the cluster master that is created in the customer's account at cluster create time. This VPE gateway is used by the cluster workers, and can be used by other resources in the VPC, to connect to the cluster's master API server. The VPE gateway is assigned a single reserved IP from each zone that the cluster workers are in, and this IP is created in one of the VPC subnets in that zone that has cluster workers. For example, if the cluster has workers in only a single zone (us-east-1) and a single VPC subnet, then a single IP is created in that subnet and assigned to the VPE gateway. If a cluster has workers in all three zones (us-east-1, us-east-2, and us-east-3), spread across four VPC subnets in each zone (12 VPC subnets altogether), then three IPs are created: one in each zone, in one of that zone's four VPC subnets. The subnet is chosen at random.

Managed security groups

Security groups applied to cluster workers

Do not modify the rules in the kube-<clusterID> security group, as doing so might cause disruptions in network connectivity between the workers of the cluster and the control cluster. However, if you don't want the kube-<clusterID> security group, you can instead add your own security groups during cluster creation.

Table 2. Security groups applied to cluster workers
The table shows the two security groups that are automatically created and attached to cluster workers. The first column includes the type of security group. The second column includes the naming format of the security group. The third column includes details on when and where the security group is created and what type of traffic it allows.
Security group type | Name | Details
VPC security group | Randomly generated |
  • Automatically created when the VPC is created. Automatically attached to each worker node in a cluster created in the VPC.
  • Allows all outbound traffic by default.
VPC cluster security group | kube-<clusterID> |
  • Automatically created when the VPC cluster is created. Automatically attached to each worker node in a cluster created in the VPC.
  • Allows traffic necessary for the cluster infrastructure to function.
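
To find these security groups in your account, you can list the security groups in the VPC with the IBM Cloud CLI and filter by name. This is a sketch; the --vpc filter and grep pattern are illustrative, based on the naming conventions described above:

    ibmcloud is security-groups --vpc <vpc_name_or_id>
    ibmcloud is security-groups | grep kube-
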

Security groups applied to VPE gateways and VPC ALBs

Do not modify the rules in the kube-<vpcID> security group as doing so might cause disruptions in network connectivity between the workers of the cluster and the control cluster. However, you can remove the default security group from the VPC ALB and replace it with a custom security group that you create and manage.

Table 3. Security group applied to VPE gateways and VPC ALBs
The table shows the security group that is automatically created for cluster-related VPE gateways and VPC ALBs. The first column includes the type of security group. The second column includes the naming format of the security group. The third column includes details on when and where the security group is created and what type of traffic it allows.
Security group type | Name | Details
Kubernetes Service security group | kube-<vpcID> |
  • Automatically created and attached to any cluster-related VPE gateways in the VPC.
  • Automatically created and attached to each VPC ALB that is created in the VPC.
  • Allows traffic necessary for the cluster infrastructure to function.

Minimum inbound and outbound requirements

The following inbound and outbound rules are covered by the default VPC security groups. Note that you can modify the randomly named VPC security group and the cluster-level kube-<clusterID> security group, but you must make sure that these rules are still met.

Modifying the kube-<vpcID> security group is not recommended as doing so might cause disruptions in network connectivity between the cluster and the Kubernetes master. Instead, you can remove the default security group from the VPC ALB or VPE gateway and replace it with a security group that you create and manage.

Required inbound and outbound rules for cluster workers

By default, traffic rules for cluster workers are covered by the randomly named VPC security group and the kube-<clusterID> cluster security group. If you modify or replace either of these security groups, make sure the following traffic rules are still allowed.

If you have a VPC cluster that runs at version 1.28 or later, you might need to include additional security group rules.

Inbound rules

Required inbound rules for cluster worker security groups
Rule purpose | Protocol | Port or value | Source
Allow all worker nodes in this cluster to communicate with each other. | ALL | - | VPC security group (randomly generated name)
Allow incoming traffic requests to apps that run on your worker nodes. | TCP | 30000 - 32767 | Any
If you require VPC VPN access or classic infrastructure access into this cluster, allow incoming traffic requests to apps that run on your worker nodes. | UDP | 30000 - 32767 | Any
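
If you replace the default security groups with your own, rules like these can be re-created with the IBM Cloud CLI. A minimal sketch, assuming <sg> is a placeholder for the name or ID of your worker security group:

    # Allow inbound traffic to the TCP NodePort range from any source.
    ibmcloud is security-group-rule-add <sg> inbound tcp --port-min 30000 --port-max 32767
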

Outbound rules

Required outbound rules for cluster worker security groups
Rule purpose | Protocol | Port or value | Destination
Allow worker nodes to be created in your cluster. | ALL | - | CIDR block 161.26.0.0/16
Allow worker nodes to communicate with other IBM Cloud services that support private cloud service endpoints. | ALL | - | CIDR block 166.8.0.0/14
Allow all worker nodes in this cluster to communicate with each other. | ALL | - | Security group kube-<clusterID>
Allow outbound traffic to be sent to the virtual private endpoint (VPE) gateway that is used to talk to the Kubernetes master. | ALL | - | VPE gateway IP addresses
Allow the worker nodes to connect to the Ingress LoadBalancer. To find the IPs needed to apply this rule, see Allow worker nodes to connect to the Ingress LoadBalancer. | TCP | 443 | Ingress load balancer IPs

The VPE gateway is assigned an IP address from a VPC subnet in each of the zones where your cluster has a worker node. For example, if the cluster spans three zones, up to three IP addresses are assigned to the VPE gateway. To find the VPE gateway IPs:

  1. Go to the Virtual private cloud dashboard.
  2. Click Virtual private endpoint gateways, then select the region where your cluster is located.
  3. Find your cluster, then click the IP addresses in the IP address column to copy them.
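
Alternatively, you can look up the VPE gateways and their reserved IPs from the CLI. This is a sketch that assumes the endpoint gateway subcommands in current versions of the VPC infrastructure CLI plug-in; the gateway name is a placeholder:

    # List all VPE gateways in the targeted region.
    ibmcloud is endpoint-gateways

    # List the reserved IPs bound to a specific gateway.
    ibmcloud is endpoint-gateway-ips <gateway_name_or_id>
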

Required rules for VPCs with a cluster that runs at version 1.28 or later

In version 1.27 and earlier, VPC clusters pull images from the IBM Cloud Container Registry through a private cloud service endpoint for the Container Registry. For version 1.28 and later, this network path is updated so that images are pulled through a VPE gateway instead of a private service endpoint. This change affects all clusters in a VPC; when you create or update a single cluster in a VPC to version 1.28, all clusters in that VPC, regardless of their version or type, have their network path updated. For more information, see Networking changes for VPC clusters.

If you update or create a cluster in your VPC that runs at version 1.28 or later, you must make sure that the following security group rules are implemented to allow traffic to the VPE gateway for registry. Each of these rules must be created for each zone in the VPC and must specify the entire VPC address prefix range for the zone as the destination CIDR. To find the VPC address prefix range for each zone in the VPC, run ibmcloud is vpc-address-prefixes <vpc_name_or_id>.

Outbound security group rules to add for version 1.28 and later
Rule type | Protocol | Destination IP or CIDR | Destination port
Outbound | TCP | Entire VPC address prefix range | 443
Outbound | TCP | Entire VPC address prefix range | 4443
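
For example, with the IBM Cloud CLI, the two rules for one zone might be created as follows. This is a sketch; <sg> and the CIDR are placeholders, and you repeat the pair of commands for each zone's address prefix range:

    # Allow outbound HTTPS traffic to the registry VPE gateway in this zone.
    ibmcloud is security-group-rule-add <sg> outbound tcp --port-min 443 --port-max 443 --remote <zone_address_prefix_CIDR>

    # Allow outbound traffic on port 4443, also required for the registry VPE gateway.
    ibmcloud is security-group-rule-add <sg> outbound tcp --port-min 4443 --port-max 4443 --remote <zone_address_prefix_CIDR>
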

Required inbound and outbound rules for VPC ALBs

By default, traffic rules for VPC ALBs are covered by the kube-<vpcID> security group. Do not modify this security group, as doing so might cause disruptions in network connectivity between the cluster and the Kubernetes master. However, you can remove the security group from your ALB and replace it with one that you create and manage. If you do so, you must make sure that the following traffic rules are still covered.

Inbound rules

Required inbound rules for VPC ALB security groups
Rule purpose | Protocol | Port or value | Source
If you use your own security group for the LBaaS for Ingress and you expose any applications in your cluster with Ingress, you must allow TCP traffic to port 443, port 80, or both. You can allow access for all clients, or allow only certain source IPs or subnets to access these applications. | TCP | 80, 443, or both | All sources, or specific source IP addresses or subnets
If you use your own security group for any LBaaS instances that you use to expose other applications, then within that security group you must allow any intended sources to access the appropriate ports and protocols. | TCP, UDP, or both | LBaaS ports | Any sources, or specific source IPs or subnets
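
As an illustration, allowing HTTPS from any client to a custom ALB security group might look like the following sketch; <alb_sg> is a placeholder for your security group name or ID:

    # Allow inbound HTTPS from all sources; narrow --remote to specific subnets if needed.
    ibmcloud is security-group-rule-add <alb_sg> inbound tcp --port-min 443 --port-max 443 --remote 0.0.0.0/0
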

Outbound rules

Required outbound rules for VPE and VPC ALB security groups
Rule purpose | Protocol | Port or value | Destination
Allow the ALB to send traffic to the cluster workers on the TCP NodePort range. | TCP | 30000 - 32767 | Any
Allow the ALB to send traffic to the cluster workers on the UDP NodePort range. | UDP | 30000 - 32767 | Any
If you use your own security group for the LBaaS for Ingress, you must allow TCP traffic from cluster worker nodes to port 443. If you do not already allow all TCP traffic to port 443 (for example, if you filter traffic by source IP), then allowing traffic from cluster worker nodes is the minimum requirement for the Ingress health checks to succeed. | TCP | 443 | Security group kube-<clusterID>

Allow worker nodes to connect to the Ingress LoadBalancer

Follow the steps to allow worker nodes to connect to the Ingress LoadBalancer.

  1. Get the EXTERNAL-IP of the LoadBalancer service.

    kubectl get svc -o wide -n openshift-ingress router-default
    
  2. Run dig on the EXTERNAL-IP to get the IP addresses associated with the LoadBalancer.

    dig <EXTERNAL-IP> +short
    

    Example output

    150.XXX.XXX.XXX
    169.XX.XXX.XXX
    
  3. Create an outbound security group rule to port 443 for each of the IP addresses that you retrieved in the previous step.

    ibmcloud is security-group-rule-add <sg> outbound tcp --port-min 443 --port-max 443 --remote 150.XXX.XXX.XXX
    
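If dig returned more than one IP address, repeat the command for each of them. For example, as a small shell sketch that reuses the placeholder addresses from the example output:

    # Create one outbound rule per LoadBalancer IP address.
    for ip in 150.XXX.XXX.XXX 169.XX.XXX.XXX; do
      ibmcloud is security-group-rule-add <sg> outbound tcp --port-min 443 --port-max 443 --remote $ip
    done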

If the Ingress or console operators fail their health checks, you can repeat these steps to see whether the LoadBalancer IP addresses changed. Though rare, these IP addresses might change to handle increased or decreased load if the amount of traffic to your LoadBalancers varies widely.