Controlling traffic with ACLs

Virtual Private Cloud

Control inbound and outbound traffic to your cluster by creating and applying access control lists (ACLs) to each subnet that your cluster is attached to. Note that these steps outline the minimum ACL rules that are required for basic cluster functionality. You might need to create additional ACL rules based on your use case.

Looking for a simpler security setup? Leave the default ACL for your VPC as-is, and modify the default security group instead.

Level of application: Virtual Private Cloud subnet

Default behavior: When you create a VPC, a default ACL is created for the VPC in the format allow-all-network-acl-<VPC_ID>. The ACL includes an inbound rule and an outbound rule that allow all traffic to and from your subnets. Any subnet that you create in the VPC is attached to this ACL by default. ACL rules are applied in a particular order. When you allow traffic in one direction by creating an inbound or outbound rule, you must also create a rule for responses in the opposite direction, because responses are not automatically permitted.

Use case: If you want to specify which traffic is permitted to reach the worker nodes on your VPC subnets, you can create a custom ACL for each subnet in the VPC. For example, you can create the following set of ACL rules to block most inbound and outbound network traffic of a cluster, while allowing the communication that is necessary for the cluster to function.

Limitations: If you create multiple clusters that use the same subnets in one VPC, you can't use ACLs to control traffic between the clusters because they share the same subnets. Instead, you can use Calico network policies to isolate your clusters on the private network.

When you use the following steps to create custom ACLs, only network traffic that is specified in the ACL rules is permitted to and from your VPC subnets. All other traffic that is not specified in the ACLs is blocked for the subnets, such as cluster integrations with third party services. If you must allow other traffic to or from your worker nodes, be sure to specify those rules where noted in the following steps.

For more information, see the VPC documentation.
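Before you change anything, you can confirm which ACL each subnet currently uses. The following is a minimal sketch, assuming that you already installed the infrastructure-service plug-in that is described in the CLI steps later in this topic.

    # List the ACLs in the targeted region; the VPC's default ACL is named allow-all-network-acl-<VPC_ID>.
    ibmcloud is network-acls

    # List your subnets; the ACL column shows the ACL that each subnet is attached to.
    ibmcloud is subnets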

If you have a VPC cluster that runs at version 4.14 or later, you must include additional ACL rules. For more information, see Required rules for VPCs with a cluster that runs at version 4.14 or later.

Creating ACLs from the console

For each subnet that your cluster is attached to, use the IBM Cloud VPC console to create a custom ACL with rules that limit inbound and outbound network traffic to only communication that is necessary for the cluster to function.

Looking for a simpler security setup? Leave the default ACL for your VPC as-is, and modify the default security group instead.

  1. Multizone clusters only: In the Subnets for VPC dashboard, note the IP Range of each subnet that your cluster is attached to.

  2. In the Access control lists for VPC dashboard, click New access control list.

  3. Give your ACL a name and choose the VPC and resource group that your subnets are in.

  4. In the Rules section, delete the default inbound rule and outbound rule that allow all inbound and outbound traffic.

  5. In the Inbound rules section, create the following rules by clicking Create.

    ACL rules are applied to traffic in a specific order. If you must create custom rules to allow other traffic to or from your worker nodes on this subnet, be sure to set the custom rules' Priority before the final rule that denies all traffic. If you add a rule after the deny rule, your rule is ignored, because the packet matches the deny rule and is blocked and removed before it can reach your rule. Note that these steps outline the minimum ACL rules that are required for basic cluster functionality. You might need to create additional ACL rules based on your use case.

    Required inbound rules
    Rule purpose | Allow/Deny | Protocol | Source IP or CIDR | Source Port | Destination IP or CIDR | Destination Port | Priority
    Allow worker nodes to be created in your cluster. | Allow | ALL | 161.26.0.0/16 | Any | Any | Any | Set to top
    Allow worker nodes to communicate with other IBM Cloud services that support private cloud service endpoints, and in clusters that run Red Hat OpenShift version 4.5 or earlier, with the cluster master through the private cloud service endpoint. | Allow | ALL | 166.8.0.0/14 | Any | Any | Any | After 1
    Multizone clusters: Allow worker nodes in one subnet to communicate with the worker nodes in other subnets within the cluster. Create one rule for each subnet that you want to connect to. | Allow | ALL | Other subnet's CIDR | Any | Any | Any | After 2
    Allow incoming traffic requests to apps that run on your worker nodes. | Allow | TCP | Any | Any | Any | 30000 - 32767 | After 3
    To expose apps by using load balancers or Ingress, allow traffic through VPC load balancers (for example, Ingress listening on TCP/443). | Allow | TCP | Any | Any | Any | 443 | After 4
    ** For clusters with public and private service endpoints, allow Red Hat OpenShift Console pods to contact the master. | Allow | ALL | Master IPs | Any | Any | Any | After 5
    * Allow access from the Red Hat OpenShift control plane IP addresses that are used to health check and report the overall status of your Ingress components. Create one rule for each control plane CIDR for the region where your cluster is located. | Allow | TCP | Each control plane CIDR for the region where your cluster is located | Any | Any | 80 | After 6
    Deny all other traffic that does not match the previous rules. | Deny | ALL | Any | Any | Any | Any | Set to bottom

    ** Master IP addresses can be found by looking at the server: entry in the cluster's kubeconfig file (kubectl config view) and then running the host command to determine which IP addresses it resolves to; for example, host c100-e.eu-gb.containers.cloud.ibm.com. The master can resolve to between 1 and 3 different IP addresses, depending on the region and the control plane cluster. One way to look up these addresses is shown in the command sketch after these steps.

    * Alternatively, to allow the inbound traffic for router health checks, you can create a single inbound rule and outbound rule to allow all incoming and outgoing traffic on port 80.

  6. In the Outbound rules section, create the following rules by clicking Create.

    ACL rules are applied to traffic in a specific order. If you must create custom rules to allow other traffic to or from your worker nodes on this subnet, be sure to set the custom rules' Priority before the final rule that denies all traffic. If you add a rule after the deny rule, your rule is ignored, because the packet matches the deny rule and is blocked and removed before it can reach your rule. Note that these steps outline the minimum ACL rules that are required for basic cluster functionality. You might need to create additional ACL rules based on your use case.

    Required outbound rules
    Rule purpose | Allow/Deny | Protocol | Source IP or CIDR | Source Port | Destination IP or CIDR | Destination Port | Priority
    Allow worker nodes to be created in your cluster. | Allow | ALL | Any | Any | 161.26.0.0/16 | Any | Set to top
    Allow worker nodes to communicate with other IBM Cloud services that support private cloud service endpoints, and in clusters that run Red Hat OpenShift version 4.5 or earlier, with the cluster master through the private cloud service endpoint. | Allow | ALL | Any | Any | 166.8.0.0/14 | Any | After 1
    Multizone clusters: Allow worker nodes in one subnet to communicate with the worker nodes in all other subnets within the cluster. Create one rule for each subnet that you want to connect to. | Allow | ALL | Any | Any | Other subnet's CIDR | Any | After 2
    Allow responses to incoming traffic requests to apps that run on your worker nodes. | Allow | TCP | Any | 30000 - 32767 | Any | Any | After 3
    To expose apps by using load balancers or Ingress, allow traffic through VPC load balancers. | Allow | TCP | Any | 443 | Any | Any | After 4
    ** For clusters with public and private service endpoints, allow Red Hat OpenShift Console pods to contact the master. | Allow | ALL | Any | Any | Master IPs | Any | After 5
    * Allow access from the Red Hat OpenShift control plane IP addresses that are used to health check and report the overall status of your Ingress components. Create one rule for each control plane CIDR for the region where your cluster is located. | Allow | TCP | Any | 80 | Each control plane CIDR for the region where your cluster is located | Any | After 6
    Deny all other traffic that does not match the previous rules. | Deny | ALL | Any | Any | Any | Any | Set to bottom

    ** Master IP addresses can be found by looking at the server: entry in the cluster's kubeconfig file (kubectl config view) and then running the host command to determine which IP addresses it resolves to; for example, host c100-e.eu-gb.containers.cloud.ibm.com. The master can resolve to between 1 and 3 different IP addresses, depending on the region and the control plane cluster. One way to look up these addresses is shown in the command sketch after these steps.

    * Alternatively, to allow the inbound traffic for router health checks, you can create a single inbound rule and outbound rule to allow all incoming and outgoing traffic on port 80.

  7. In the Attach subnets section, choose the name of the subnet for which you created this ACL.

  8. Click Create access control list.

  9. Multizone clusters: Repeat steps 2 - 8 to create an ACL for each subnet that your cluster is attached to.
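To look up the Master IPs that are referenced in the rule tables, one possible approach is the following sketch. It assumes that your command line is configured with the cluster's kubeconfig and that the host utility is installed; the hostname is only the example from the footnotes.

    # Print the cluster master URL from the server: entry of your current kubeconfig context.
    kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

    # Resolve the master hostname to its IP addresses (1 to 3 addresses, depending on the region).
    host c100-e.eu-gb.containers.cloud.ibm.com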

Creating ACLs with the CLI

For each subnet that your cluster is attached to, use the IBM Cloud CLI to create a custom ACL with rules that limit inbound and outbound network traffic to only communication that is necessary for the cluster to function. Note that these steps outline the minimum ACL rules that are required to allow a cluster to deploy and to have basic function; they aren't intended to cover all use cases.

Looking for a simpler security setup? Leave the default ACL for your VPC as-is, and modify the default security group instead.

Before you begin

  1. Install the infrastructure-service plug-in. The prefix for running commands is ibmcloud is.
    ibmcloud plugin install infrastructure-service
    
  2. Target the region that your VPC is in.
    ibmcloud target -r <region>
    

To create an ACL for each subnet that your cluster is attached to,

  1. List your VPC subnets. For each subnet that your cluster is attached to, get the ID and Subnet CIDR.

    If you can't remember which subnets your cluster is attached to, you can run ibmcloud oc worker get -c <cluster_name_or_ID> -w <worker_node_ID> for one worker node in each zone of your cluster, and get the ID and CIDR of the subnet that the worker is attached to.

    ibmcloud is subnets
    

    Example output

    ID                                          Name          Status      Subnet CIDR        Addresses   ACL                                                  Public Gateway                            VPC       Zone         Resource group
    0717-2224d664-d435-425e-b5ec-f324af2df445   mysubnet1     available   10.240.0.0/28      11/16       armored-never-chitchat-gangly-skylight-prototype     -                                  myvpc     us-south-1   default
    0717-1eff410a-a47e-4bc2-b4a3-5f742f320008   mysubnet2     available   10.240.1.0/24      251/256     armored-never-chitchat-gangly-skylight-prototype     pgw-ed8f6970-9b71-11ea-b94a-956de1af1ccd        myvpc     us-south-2   default
    
  2. Create an ACL. After you create the ACL, you can add rules to your ACL and apply the ACL to one subnet. Because the rules that you add to your ACL are specific to one subnet, consider naming the ACL in the format <cluster>-<subnet>-acl, such as mycluster-mysubnet1-acl, for easy identification. When you create the ACL, two rules are automatically created that allow all inbound and all outbound network traffic. In the output, note the ACL ID and the IDs of the two default rules.

    ibmcloud is network-acl-create <cluster>-<subnet>-acl
    

    Example output

    Creating network ACL mycluster-mysubnet1-acl under account Account as user user@email.com...
    
    ID        740b07cb-4e69-4ef2-b667-42ed27d8b29e
    Name      mycluster-mysubnet1-acl
    Created   2019-09-18T15:09:45-05:00
    
    Rules
    
    inbound
    ID                                     Name                                                          Action   IPv*   Protocol   Source      Destination   Created
    e3caad9c-68b7-4a4a-8188-d239fbc724df   allow-all-inbound-rule-740b07cb-4e69-4ef2-b667-42ed27d8b29e   allow    ipv4   all        0.0.0.0/0   0.0.0.0/0     2019-09-18T15:09:45-05:00
    
    outbound
    ID                                     Name                                                           Action   IPv*   Protocol   Source      Destination   Created
    7043ac95-cc4e-42e9-a490-22177237f083   allow-all-outbound-rule-740b07cb-4e69-4ef2-b667-42ed27d8b29e   allow    ipv4   all        0.0.0.0/0   0.0.0.0/0     2019-09-18T15:09:45-05:00
    
  3. Export the ACL ID as an environment variable.

    export acl_id=<acl_id>
    
  4. Delete the default rules that allow all inbound and outbound traffic. Afterward, your ACL still exists, but does not contain any networking rules.

    ibmcloud is network-acl-rule-delete $acl_id <default_inbound_rule_ID> -f
    
    ibmcloud is network-acl-rule-delete $acl_id <default_outbound_rule_ID> -f
    
  5. Multizone clusters: Add rules to your ACL to allow worker nodes in one subnet to communicate with the worker nodes in all other subnets within the cluster. In these rules, the subnet that this ACL applies to is represented as 0.0.0.0/0, and the other subnet's CIDR is the destination (for the outbound rule) or the source (for the inbound rule). Make sure to create one inbound and one outbound rule for each of the subnets that you want to connect to.

    ibmcloud is network-acl-rule-add $acl_id allow outbound all 0.0.0.0/0 <other_zone_subnet_CIDR> --name allow-workers-outbound
    
    ibmcloud is network-acl-rule-add $acl_id allow inbound all <other_zone_subnet_CIDR> 0.0.0.0/0 --name allow-workers-inbound
    
  6. Create rules to allow inbound traffic from and outbound traffic to the 161.26.0.0/16 and 166.8.0.0/14 IBM Cloud private subnets. The 161.26.0.0/16 rules allow you to create worker nodes in your cluster. The 166.8.0.0/14 rules allow worker nodes to communicate with other IBM Cloud services that support private cloud service endpoints, and in clusters that run Red Hat OpenShift version 4.5 or earlier, with the cluster master through the private cloud service endpoint.

    Need to connect your worker nodes to IBM Cloud services that support only public cloud service endpoints? Attach a public gateway to the subnet so that worker nodes can connect to a public endpoint outside of your cluster. Then, create inbound and outbound rules to allow ingress from and egress to the services' public cloud service endpoints.

    ibmcloud is network-acl-rule-add $acl_id allow outbound all 0.0.0.0/0 161.26.0.0/16 --name allow-ibm-private-network-outbound1
    ibmcloud is network-acl-rule-add $acl_id allow outbound all 0.0.0.0/0 166.8.0.0/14 --name allow-ibm-private-network-outbound2
    ibmcloud is network-acl-rule-add $acl_id allow inbound all 161.26.0.0/16 0.0.0.0/0 --name allow-ibm-private-network-inbound1
    ibmcloud is network-acl-rule-add $acl_id allow inbound all 166.8.0.0/14 0.0.0.0/0 --name allow-ibm-private-network-inbound2
    
  7. To allow incoming traffic requests to apps that run on your worker nodes through router services or load balancers, allow traffic to the VPC load balancer on port 443 and to worker nodes on ports 30000 - 32767.

    ibmcloud is network-acl-rule-add $acl_id allow outbound tcp 0.0.0.0/0 0.0.0.0/0 --name allow-lb-outbound --source-port-min 443 --source-port-max 443
    ibmcloud is network-acl-rule-add $acl_id allow inbound tcp 0.0.0.0/0 0.0.0.0/0 --name allow-lb-inbound --destination-port-min 443 --destination-port-max 443
    ibmcloud is network-acl-rule-add $acl_id allow outbound tcp 0.0.0.0/0 0.0.0.0/0 --name allow-ingress-outbound --source-port-min 30000 --source-port-max 32767
    ibmcloud is network-acl-rule-add $acl_id allow inbound tcp 0.0.0.0/0 0.0.0.0/0 --name allow-ingress-inbound --destination-port-min 30000 --destination-port-max 32767  
    
  8. Optional: Allow access to and from the Red Hat OpenShift control plane IP addresses that are used to health check and report the overall status of your Ingress components. Create one inbound and one outbound rule for each control plane CIDR for the region where your cluster is located. Alternatively, you can create a single inbound rule and outbound rule to allow all incoming and outgoing traffic on port 80.

    ibmcloud is network-acl-rule-add $acl_id allow outbound tcp 0.0.0.0/0 <IP_address> --name allow-hc-outbound --source-port-min 80 --source-port-max 80
    ibmcloud is network-acl-rule-add $acl_id allow inbound tcp <IP_address> 0.0.0.0/0 --name allow-hc-inbound --destination-port-min 80 --destination-port-max 80
    
  9. Optional: If you must allow other traffic to or from your worker nodes on this subnet, add rules for that traffic.

    When you refer to the VPC subnet that your worker nodes are on, you must use 0.0.0.0/0. For more tips on how to create your rule, see the VPC CLI reference documentation.

    ibmcloud is network-acl-rule-add $acl_id <allow|deny> <inbound|outbound> <protocol> <source_CIDR> <destination_CIDR> --name <new_rule_name>
    

    For example, say that you want your worker nodes to communicate with a subnet in your organization's network, 207.42.8.0/24. Your worker nodes must be able to both send and receive information from devices or services on this subnet. You can create an outbound rule for traffic to and an inbound rule for traffic from your organization's subnet:

    ibmcloud is network-acl-rule-add $acl_id allow outbound all 0.0.0.0/0 207.42.8.0/24 --name corporate-network-outbound
    ibmcloud is network-acl-rule-add $acl_id allow inbound all 207.42.8.0/24 0.0.0.0/0 --name corporate-network-inbound
    
  10. Create rules to deny all other egress from and ingress to worker nodes that is not permitted by the previous rules that you created. Because these rules are created last in the chain of rules, they deny an incoming or outgoing connection only if the connection does not match any other rule that is earlier in the rule chain.

    ibmcloud is network-acl-rule-add $acl_id deny outbound all 0.0.0.0/0 0.0.0.0/0 --name deny-all-outbound
    ibmcloud is network-acl-rule-add $acl_id deny inbound all 0.0.0.0/0 0.0.0.0/0 --name deny-all-inbound
    
  11. Apply this ACL to the subnet. When you apply this ACL, the rules that you defined are immediately applied to the worker nodes on the subnet.

    ibmcloud is subnet-update <subnet_ID> --network-acl-id $acl_id
    

    Example output

    ID                  a1b2c3d4-f560-471b-b6ce-20067ac93439
    Name                mysubnet1
    IPv*                ipv4
    IPv4 CIDR           10.240.0.0/24
    IPv6 CIDR           -
    Address Available   244
    Address Total       256
    ACL                 mycluster-mysubnet1-acl(b664769d-514c-407f-a9f3-d44d72706121)
    Gateway             pgw-18a3ebb0-b539-11e9-9838-f3f4efa02374(f8b95e43-a408-4dc8-a489-ed649fc4cfec)
    Created             2019-08-02T10:20:17-05:00
    Status              available
    Zone                us-south-1
    VPC                 myvpc(ff537d43-a5a4-4b65-9627-17eddfa5237b)
    
  12. Repeat steps 2 - 11 for each subnet that you found in step 1.

ACL rules are applied to traffic in a specific order. If you want to add a rule after you complete these steps, ensure that you add the rule before the deny-all-inbound or deny-all-outbound rule. If you add a rule after these rules, your rule is ignored, because the packet matches the deny-all-inbound and deny-all-outbound rules and is blocked and removed before it can reach your rule. Create your rule in the proper order by including the --before-rule-name deny-all-(inbound|outbound) option.
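For example, an extra inbound rule for a hypothetical source network (203.0.113.0/24 is only a placeholder) could be placed ahead of the deny rule as in the following sketch.

    # Add the rule before deny-all-inbound so that it is evaluated before traffic is denied.
    ibmcloud is network-acl-rule-add $acl_id allow inbound tcp 203.0.113.0/24 0.0.0.0/0 --name allow-extra-inbound --before-rule-name deny-all-inbound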

Required rules for VPCs with a cluster that runs at version 4.14 or later

In versions earlier than 4.14, VPC clusters pull images from the IBM Cloud Container Registry through a private cloud service endpoint for the Container Registry. In version 4.14 and later, this network path is updated so that images are pulled through a VPE gateway instead of a private cloud service endpoint. This change affects all clusters in a VPC; when you create or update a single cluster in a VPC to version 4.14, all clusters in that VPC, regardless of their version or type, have their network path updated. For more information, see Networking changes for VPC clusters.

If you update or create a cluster in your VPC, you must make sure that the following ACL rules, or equivalent rules, are implemented to allow connections to and from the VPE Gateway for Registry. If the rules are not already implemented, add them. Each of these rules must be created for each zone in the VPC and must specify the entire VPC address prefix range for the zone as the source (for outbound rules) or destination (for inbound rules) CIDR. To find the VPC address prefix range for each zone in the VPC, run ibmcloud is vpc-address-prefixes <vpc_name_or_id>. The priority of each rule must be higher than that of any rule that otherwise denies this traffic, such as a rule that denies all traffic.

Add the following rules to your custom ACLs.

Outbound and inbound ACL rules to add for version 4.14
Rule type | Protocol | Source IP or CIDR | Source Port | Destination IP or CIDR | Destination Port
Outbound/Allow | TCP | Entire VPC address prefix range | Any | Entire VPC address prefix range | 443
Outbound/Allow | TCP | Entire VPC address prefix range | Any | Entire VPC address prefix range | 4443
Inbound/Allow | TCP | Entire VPC address prefix range | 443 | Entire VPC address prefix range | Any
Inbound/Allow | TCP | Entire VPC address prefix range | 4443 | Entire VPC address prefix range | Any
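If you manage your ACLs from the CLI, the following sketch shows one possible way to add these rules with the same network-acl-rule-add command that is used earlier in this topic. It assumes the $acl_id variable from the earlier CLI steps, uses <zone_vpc_prefix_CIDR> as a placeholder for the zone's entire VPC address prefix range, and places each rule before the deny-all rules so that it takes precedence. Repeat the rules for each zone in the VPC.

    # Allow outbound image pulls through the VPE gateway for Registry on ports 443 and 4443.
    ibmcloud is network-acl-rule-add $acl_id allow outbound tcp <zone_vpc_prefix_CIDR> <zone_vpc_prefix_CIDR> --name allow-vpe-registry-outbound-443 --destination-port-min 443 --destination-port-max 443 --before-rule-name deny-all-outbound
    ibmcloud is network-acl-rule-add $acl_id allow outbound tcp <zone_vpc_prefix_CIDR> <zone_vpc_prefix_CIDR> --name allow-vpe-registry-outbound-4443 --destination-port-min 4443 --destination-port-max 4443 --before-rule-name deny-all-outbound

    # Allow the corresponding inbound responses from ports 443 and 4443.
    ibmcloud is network-acl-rule-add $acl_id allow inbound tcp <zone_vpc_prefix_CIDR> <zone_vpc_prefix_CIDR> --name allow-vpe-registry-inbound-443 --source-port-min 443 --source-port-max 443 --before-rule-name deny-all-inbound
    ibmcloud is network-acl-rule-add $acl_id allow inbound tcp <zone_vpc_prefix_CIDR> <zone_vpc_prefix_CIDR> --name allow-vpe-registry-inbound-4443 --source-port-min 4443 --source-port-max 4443 --before-rule-name deny-all-inbound

    # Look up the entire VPC address prefix range for each zone.
    ibmcloud is vpc-address-prefixes <vpc_name_or_id>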