Understanding VPC security groups in version 4.14 and earlier
Virtual Private Cloud 4.14 and earlier
VPC clusters use various security groups to protect cluster components. These security groups are automatically created and attached to cluster workers, load balancers, and cluster-related VPE gateways. You can modify or replace these security groups to meet your specific requirements; however, any modifications must follow these guidelines to avoid disruptions in network connectivity.
- Default behavior
- Virtual Private Cloud security groups filter traffic at the hypervisor level. Security group rules are not applied in a particular order. However, requests to your worker nodes are permitted only if the request matches one of the rules that you specify. When you allow traffic in one direction by creating an inbound or outbound rule, responses are also permitted in the opposite direction. Security groups are additive, meaning that if your worker nodes are attached to more than one security group, all rules included in those security groups are applied to the worker nodes. Newer cluster versions might have more rules in the `kube-<clusterID>` security group than older cluster versions. Security group rules are added to improve the security of the service and do not break functionality.
- Limitations
- Because the worker nodes of your VPC cluster exist in a service account and aren't listed in the VPC infrastructure dashboard, you can't create a security group and apply it to your worker node instances. You can only modify the existing security groups. If you modify the default VPC security groups, you must, at minimum, include the required inbound and outbound rules so that your cluster works properly.
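Before you modify anything, you can review the security groups in your VPC and the rules they contain. The following commands are a minimal sketch that assumes the IBM Cloud VPC infrastructure CLI plug-in (`ibmcloud is`) is installed; the security group name is a placeholder.

```sh
# List all security groups in the targeted region, including the
# randomly named VPC security group and kube-<clusterID>.
ibmcloud is security-groups

# Review the rules in the cluster security group before changing them.
# Replace kube-<clusterID> with your security group's name or ID.
ibmcloud is security-group-rules kube-<clusterID>
```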
Virtual private endpoint (VPE) gateways
When the first Red Hat OpenShift on IBM Cloud 4.14+ VPC cluster is created in a given VPC, or when a cluster in that VPC has its master updated to 4.14+, several shared VPE gateways are created for various IBM Cloud services. Only one shared VPE gateway of each type is created per VPC; all clusters in the VPC share the same VPE gateway for these services. Each shared VPE gateway is assigned a single reserved IP from each zone that the cluster workers are in.
The following VPE gateways are created automatically when you create a VPC cluster.
| VPE gateway | Type | Description |
| --- | --- | --- |
| IBM Cloud Container Registry | Shared | Pull container images from IBM Cloud Container Registry to apps running in your cluster. |
| IBM Cloud Object Storage s3 gateway | Shared | Access the COS APIs. |
| IBM Cloud Object Storage config gateway | Shared | Back up container images to IBM Cloud Object Storage. |
| Red Hat OpenShift on IBM Cloud | | Access the Red Hat OpenShift on IBM Cloud APIs to interact with and manage your cluster. † |
† All supported VPC clusters have a VPE gateway for the cluster master that is created in your account when the cluster is created. This VPE gateway is used by the cluster workers, and can be used by other resources in the VPC, to connect to the cluster's master API server. The VPE gateway is assigned a single reserved IP from each zone that the cluster workers are in, and this IP is created in one of the VPC subnets in that zone that has cluster workers. For example, if the cluster has workers in only a single zone (us-east-1) and a single VPC subnet, then a single IP is created in that subnet and assigned to the VPE gateway. If a cluster has workers in all three zones (us-east-1, us-east-2, and us-east-3) and these workers are spread across four VPC subnets in each zone (12 VPC subnets altogether), then three IPs are created, one in each zone, in one of the four VPC subnets in that zone. Note that the subnet is chosen at random.
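To see which VPE gateways exist in your VPC and which reserved IPs they were assigned, you can use the VPC CLI. A minimal sketch, assuming the `ibmcloud is` plug-in is installed; the gateway name is a placeholder.

```sh
# List all endpoint (VPE) gateways in the targeted region.
ibmcloud is endpoint-gateways

# List the reserved IPs bound to one gateway, for example the
# shared Container Registry gateway.
ibmcloud is endpoint-gateway-ips <gateway_name_or_id>
```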
Managed security groups
Security groups applied to cluster workers
Do not modify the rules in the `kube-<clusterID>` security group, as doing so might cause disruptions in network connectivity between the workers of the cluster and the control cluster. However, if you don't want the `kube-<clusterID>` security group applied to your workers, you can instead add your own security groups during cluster creation, as shown in the sketch after this paragraph.
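The following is a sketch of attaching custom security groups at cluster creation. It assumes the `--security-group` option of `ibmcloud oc cluster create vpc-gen2`, which is available in newer versions of the CLI plug-in; all names and IDs are placeholders.

```sh
# Create a VPC cluster and attach your own security groups instead of
# the IBM-managed defaults. Repeat --security-group for each group.
ibmcloud oc cluster create vpc-gen2 \
  --name my-cluster \
  --zone us-east-1 \
  --vpc-id <vpc_id> \
  --subnet-id <subnet_id> \
  --flavor bx2.4x16 \
  --workers 3 \
  --security-group <my_security_group_id> \
  --security-group <another_security_group_id>
```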
| Security group type | Name | Details |
| --- | --- | --- |
| VPC security group | Randomly generated | |
| VPC cluster security group | `kube-<clusterID>` | |
Security groups applied to VPE gateways and VPC ALBs
Do not modify the rules in the `kube-<vpcID>` security group, as doing so might cause disruptions in network connectivity between the workers of the cluster and the control cluster. However, you can remove the default security group from the VPC ALB and replace it with a custom security group that you create and manage.
| Security group type | Name | Details |
| --- | --- | --- |
| Red Hat OpenShift on IBM Cloud security group | `kube-<vpcID>` | |
Minimum inbound and outbound requirements
The following inbound and outbound rules are covered by the default VPC security groups. Note that you can modify the randomly named VPC security group and the cluster-level `kube-<clusterID>` security group, but you must make sure that these rules are still met.

Modifying the `kube-<vpcID>` security group is not recommended, as doing so might cause disruptions in network connectivity between the cluster and the Kubernetes master. Instead, you can remove the default security group from the VPC ALB or VPE gateway and replace it with a security group that you create and manage.
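For example, to swap the default security group on a VPC ALB for one that you manage, you can change the security group's targets. A minimal sketch, assuming the `security-group-target-add` and `security-group-target-remove` commands of the VPC CLI plug-in; all names and IDs are placeholders.

```sh
# Attach your custom security group to the load balancer first so
# traffic is never left unfiltered, then detach the default group.
ibmcloud is security-group-target-add <my_custom_sg> <load_balancer_id>
ibmcloud is security-group-target-remove kube-<vpcID> <load_balancer_id>
```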
Required inbound and outbound rules for cluster workers
By default, traffic rules for cluster workers are covered by the randomly named VPC security group and the `kube-<clusterID>` cluster security group. If you modify or replace either of these security groups, make sure the following traffic rules are still allowed.
If you have a VPC cluster that runs at version 4.14 or later, you might need to include additional security group rules.
Inbound rules
| Rule purpose | Protocol | Port or Value | Source |
| --- | --- | --- | --- |
| Allow all worker nodes in this cluster to communicate with each other. | ALL | | VPC security group `randomly-generated-sg-name` |
| Allow incoming traffic requests to apps that run on your worker nodes. | TCP | 30000 - 32767 | Any |
| If you require VPC VPN access or classic infrastructure access into this cluster, allow incoming traffic requests to apps that run on your worker nodes. | UDP | 30000 - 32767 | Any |
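If you re-create these rules in your own security group, the NodePort ranges can be opened with the VPC CLI. A minimal sketch using the same `sg-rulec` alias as the examples later in this topic; the security group is a placeholder.

```sh
# Allow inbound TCP traffic to the NodePort range on the workers.
ibmcloud is sg-rulec <sg> inbound tcp --port-min 30000 --port-max 32767

# Allow inbound UDP NodePort traffic if you need VPC VPN or classic
# infrastructure access into the cluster.
ibmcloud is sg-rulec <sg> inbound udp --port-min 30000 --port-max 32767
```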
Outbound rules
| Rule purpose | Protocol | Port or Value | Destination |
| --- | --- | --- | --- |
| Allow worker nodes to be created in your cluster. | ALL | | CIDR block 161.26.0.0/16 |
| Allow worker nodes to communicate with other IBM Cloud services that support private cloud service endpoints. | ALL | | CIDR block 166.8.0.0/14 |
| Allow all worker nodes in this cluster to communicate with each other. | ALL | | Security group `kube-<cluster_ID>` |
| Allow outbound traffic to be sent to the virtual private endpoint gateway, which is used to talk to the Kubernetes master. | ALL | | Virtual private endpoint gateway IP addresses. The VPE gateway is assigned an IP address from a VPC subnet in each of the zones where your cluster has a worker node. For example, if the cluster spans 3 zones, there are up to 3 IP addresses assigned to each VPE gateway. To find the VPE gateway IPs, see the CLI sketch in Virtual private endpoint (VPE) gateways earlier in this topic. |
| Allow the worker nodes to connect to the public service endpoint IPs for the OAuth service. To find the IPs needed to apply this rule, see Allow the worker nodes to connect to the public service endpoint IPs for the OAuth service. | TCP | OAuth port | OAuth IP addresses |
| Allow the worker nodes to connect to the Ingress LoadBalancer. To find the IPs needed to apply this rule, see Allow worker nodes to connect to the Ingress LoadBalancer. | TCP | 443 | Ingress load balancer IPs |
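As an illustration, the CIDR-based outbound rules in this table can be created with the same `sg-rulec` command that the procedures later in this topic use; a sketch with a placeholder security group.

```sh
# Allow workers to reach the IBM Cloud private network used for
# worker provisioning (161.26.0.0/16) over all protocols.
ibmcloud is sg-rulec <sg> outbound all --remote 161.26.0.0/16

# Allow workers to reach IBM Cloud services over private cloud
# service endpoints (166.8.0.0/14).
ibmcloud is sg-rulec <sg> outbound all --remote 166.8.0.0/14
```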
Required rules for VPCs with a cluster that runs at version 4.14 or later
In version 4.13 and earlier, VPC clusters pull images from IBM Cloud Container Registry through a private cloud service endpoint for the Container Registry. For version 4.14 and later, this network path is updated so that images are pulled through a VPE gateway instead of a private service endpoint. This change affects all clusters in a VPC: when you create or update a single cluster in a VPC to version 4.14, all clusters in that VPC, regardless of their version or type, have their network path updated. For more information, see Networking changes for VPC clusters.
If you update or create a cluster in your VPC that runs at version 4.14 or later, you must make sure that the following security group rules are implemented to allow traffic to the VPE gateway for the registry. Each of these rules must be created for each zone in the VPC and must specify the entire VPC address prefix range for the zone as the destination CIDR. To find the VPC address prefix range for each zone in the VPC, run `ibmcloud is vpc-address-prefixes <vpc_name_or_id>`.
| Rule type | Protocol | Destination IP or CIDR | Destination Port |
| --- | --- | --- | --- |
| Outbound | TCP | Entire VPC address prefix range | 443 |
| Outbound | TCP | Entire VPC address prefix range | 4443 |
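For example, if one zone's address prefix is 10.240.0.0/18 (a made-up value for illustration), the two rules for that zone could be created as follows; repeat for each zone's prefix, with a placeholder security group.

```sh
# Find the address prefix range for each zone in the VPC.
ibmcloud is vpc-address-prefixes <vpc_name_or_id>

# Allow outbound traffic on 443 and 4443 to the zone's entire
# address prefix range so images can be pulled through the
# registry VPE gateway. Repeat per zone prefix.
ibmcloud is sg-rulec <sg> outbound tcp --port-min 443 --port-max 443 --remote 10.240.0.0/18
ibmcloud is sg-rulec <sg> outbound tcp --port-min 4443 --port-max 4443 --remote 10.240.0.0/18
```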
Required inbound and outbound rules for VPC ALBs
By default, traffic rules for VPC ALBs are covered by the `kube-<vpcID>` security group. Note that you should not modify this security group, as doing so might cause disruptions in network connectivity between the cluster and the Kubernetes master. However, you can remove the security group from your ALB and replace it with one that you create and manage. If you do so, you must make sure that the following traffic rules are still covered.
Inbound rules
| Rule purpose | Protocol | Port or Value | Source |
| --- | --- | --- | --- |
| If you use your own security group for the LBaaS for Ingress and you expose any applications in your cluster with Ingress, you must allow TCP protocol to ports 443 or 80, or both. You can allow access for all clients, or you can allow only certain source IPs or subnets to access these applications. Note that this requirement also applies if you want to use the OpenShift console, which is exposed by the cluster's default Ingress instance and router. To allow access for the OpenShift console, you must allow TCP protocol to port 443 from all sources, or from specific source IP addresses or subnets. | TCP | 80 or 443 | All sources, or specific source IP addresses or subnets |
| If you use your own security group for any LBaaS instances that you use to expose other applications, then within that security group you must allow any intended sources to access the appropriate ports and protocols. | TCP, UDP, or both | LBaaS ports | Any sources, or specific source IPs or subnets |
Outbound rules
| Rule purpose | Protocol | Port or Value | Destination |
| --- | --- | --- | --- |
| Allow the ALB to send traffic to the cluster workers on the TCP NodePort range. | TCP | 30000 - 32767 | Any |
| Allow the ALB to send traffic to the cluster workers on the UDP NodePort range. | UDP | 30000 - 32767 | Any |
| If you use your own security group for the LBaaS for Ingress, you must allow TCP traffic from cluster worker nodes to port 443. If you do not already allow all TCP traffic to port 443, such as if you filter traffic by source IP, then allowing traffic from cluster worker nodes is the minimum requirement for the Red Hat OpenShift console operator and Ingress health checks to succeed. | TCP | 443 | Security group `kube-<cluster_ID>` |
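A minimal sketch of these rules for a custom ALB security group, again using the `sg-rulec` alias with placeholder names:

```sh
# Let clients reach Ingress-exposed apps and the OpenShift console.
ibmcloud is sg-rulec <alb_sg> inbound tcp --port-min 443 --port-max 443

# Let the ALB forward traffic to the workers' TCP NodePort range.
ibmcloud is sg-rulec <alb_sg> outbound tcp --port-min 30000 --port-max 32767

# Let worker nodes reach the ALB on 443 so the console operator and
# Ingress health checks succeed (remote is the cluster security group).
ibmcloud is sg-rulec <alb_sg> inbound tcp --port-min 443 --port-max 443 --remote kube-<cluster_ID>
```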
Allow the worker nodes to connect to the public service endpoint IPs for the OAuth service
Required for VPC clusters with a public service endpoint.
1. Get the URL used to connect to the OAuth service, in the form `https://cXXX-e.<region>.containers.cloud.ibm.com:<PORT>`. Make a note of the region and port.

   ```sh
   oc get --raw /.well-known/oauth-authorization-server | grep issuer
   ```

   Example output:

   ```
   "issuer": "https://c104-e.us-east.containers.cloud.ibm.com:31062",
   ```

2. Run `dig +short` or `nslookup` to get the IPs associated with the hostname in the URL. For single-campus multizone regions, there is only one public IP associated with the hostname. For multizone regions, there are 3. Look up the individual IPs by replacing `e` with 1, 2, and 3.

   ```sh
   dig +short cXXX-1.<region>.containers.cloud.ibm.com
   dig +short cXXX-2.<region>.containers.cloud.ibm.com
   dig +short cXXX-3.<region>.containers.cloud.ibm.com
   ```

   Example output:

   ```
   169.63.111.82
   ```

3. Add a security group rule for each of the IPs that allows an outbound TCP connection to the destination IP and port.

   ```sh
   ibmcloud is sg-rulec <sg> outbound tcp --port-min 31062 --port-max 31062 --remote 169.63.111.82
   ```

   Example output:

   ```
   Direction               outbound
   IP version              ipv4
   Protocol                tcp
   Min destination port    31062
   Max destination port    31062
   Remote                  169.63.111.82
   ```
Allow worker nodes to connect to the Ingress LoadBalancer
Follow these steps to allow worker nodes to connect to the Ingress LoadBalancer.

1. Get the `EXTERNAL-IP` of the LoadBalancer service.

   ```sh
   oc get svc -o wide -n openshift-ingress router-default
   ```

2. Run `dig` on the `EXTERNAL-IP` to get the IP addresses associated with the LoadBalancer.

   ```sh
   dig <EXTERNAL-IP> +short
   ```

   Example output:

   ```
   150.XXX.XXX.XXX
   169.XX.XXX.XXX
   ```

3. Create outbound security group rules to port 443 for each of the IP addresses that you retrieved.

   ```sh
   ibmcloud is sg-rulec <sg> outbound tcp --port-min 443 --port-max 443 --remote 150.XXX.XXX.XXX
   ```
If the Ingress or console operators fail their health checks, repeat these steps to see whether the LoadBalancer IP addresses changed. Although it is rare, these IP addresses might change to handle increased or decreased load if the amount of traffic to your LoadBalancers varies widely.