Customizing your network setup in Satellite locations and clusters
There are several features that you can use to customize your Satellite network setup to better isolate and segment the services and workloads running in your location. Review the following sections for more information.
These customizations are available only for Red Hat CoreOS-enabled locations.
Depending on the networking customizations you want to apply, you might need to specify certain options in the CLI when creating your location, when creating your cluster, or after you have set up your location and cluster. The following tags indicate when to apply the customizations.
- During location creation: These customizations must be applied from the CLI during location creation.
- During cluster creation: These customizations can be applied from the CLI during cluster creation.
- After location and cluster creation: These customizations can be applied after creating your location and clusters.
Defining custom subnets when creating your location
During location creation
When you create your location in the CLI, you can define the following parameters to customize networking in your location. For more information, see the ibmcloud sat location create command reference.

You can use the --pod-subnet option to specify a custom subnet CIDR that provides private IP addresses for pods. This option can be used only if you also enable Red Hat CoreOS with the --coreos-enabled flag. The subnet must be /23 or larger. The default value is 172.16.0.0/16.

You can use the --service-subnet option to specify a custom subnet CIDR that provides private IP addresses for services. This option can be used only if you also enable Red Hat CoreOS with the --coreos-enabled flag. The subnet must be /24 or larger. The default value is 172.20.0.0/16.
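For example, a minimal sketch of a location creation command that sets both subnets might look like the following. The location name and the --managed-from region are placeholders, and the command reference lists all required options.

    ibmcloud sat location create --name my-location --managed-from wdc \
      --coreos-enabled \
      --pod-subnet 172.30.0.0/16 \
      --service-subnet 172.21.0.0/16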
Defining the pod network interface when creating your location
During location creation
When you create your location in the CLI, you can specify the --pod-network-interface option to set the pod network interface. The available methods are can-reach and interface.

- To provide a direct URL or IP address, specify can-reach=<url> or can-reach=<ip_address>. If the network interface can reach the provided URL or IP address, that interface is used. For example, use can-reach=www.exampleurl.com to specify a URL and can-reach=172.19.0.0 to specify an IP address.
- To choose an interface with a regex string, specify interface=<regex_string>; for example, interface=eth.*.
For more information, see the ibmcloud sat location create command reference.
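For example, a hedged sketch that selects the pod network interface at location creation by using the can-reach method might look like the following; the location name, region, and target IP address are placeholders.

    ibmcloud sat location create --name my-location --managed-from wdc \
      --coreos-enabled \
      --pod-network-interface can-reach=172.19.0.0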
Defining the pod network interface when creating your cluster
During cluster creation
When you create your cluster in the CLI, you can specify the --pod-network-interface option to set the pod network interface. The available methods are can-reach and interface.

- To provide a direct URL or IP address, specify can-reach=<url> or can-reach=<ip_address>. If the network interface can reach the provided URL or IP address, that interface is used. For example, use can-reach=www.exampleurl.com to specify a URL and can-reach=172.19.0.0 to specify an IP address.
- To choose an interface with a regex string, specify interface=<regex_string>; for example, interface=eth.*.
For more information, see the ibmcloud oc cluster create satellite command reference.
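For example, a hedged sketch that selects the pod network interface at cluster creation by regex might look like the following; the cluster name, location, and version are placeholders, and any other required options are listed in the command reference.

    ibmcloud oc cluster create satellite --name my-cluster --location my-location \
      --version 4.14_openshift \
      --pod-network-interface "interface=eth.*"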
Limiting access to your Satellite cluster
After location and cluster creation
After you create your location and cluster, you can use the ibmcloud ks cluster master satellite-service-endpoint allowlist add command to add a subnet to a Satellite cluster's service endpoint allowlist. Authorized requests to the cluster master that originate from the subnet are permitted through the Satellite service endpoint. The allowlist must be enabled for the restrictions to apply.
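For example, a hedged sketch of adding a subnet and then enabling the allowlist might look like the following; the cluster name and subnet are placeholders, and the exact subcommand and flag names are documented in the command reference.

    ibmcloud ks cluster master satellite-service-endpoint allowlist add --cluster my-cluster --subnet 192.0.2.0/24
    ibmcloud ks cluster master satellite-service-endpoint allowlist enable --cluster my-cluster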
Creating network policies by using Calico host endpoints
After location and cluster creation
If you create a Satellite cluster at version 4.12 or later, Calico HostEndpoint instances are deployed to the cluster for every worker node's network interface. You can use these HostEndpoint instances to define global network policies with the help of the ibm-cloud.kubernetes.io/interface-name: <network_interface_name> label that is added to each HostEndpoint instance. In addition to this label, all of the worker node's labels are added for additional customization options.

These HostEndpoints have the projectcalico-default-allow profile, which means that they might change the previously expected behavior when you update to version 4.12. Before you update to 4.12, verify that your existing networking rules, policies, and HostEndpoints work the same after the update.
For more information, see the Calico documentation.
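For example, a hedged sketch of a Calico GlobalNetworkPolicy that uses this label might look like the following. It selects the HostEndpoints for a hypothetical eth1 interface and allows inbound TCP traffic on that interface only from one example subnet; the policy name, interface name, and subnet are assumptions, and you must verify that any traffic your cluster requires on that interface stays allowed before applying a policy like this. Policies of this kind are typically applied with calicoctl.

    apiVersion: projectcalico.org/v3
    kind: GlobalNetworkPolicy
    metadata:
      name: restrict-eth1-ingress
    spec:
      # Select the HostEndpoints that represent the hypothetical eth1 interface on each worker node.
      selector: ibm-cloud.kubernetes.io/interface-name == 'eth1'
      order: 100
      ingress:
        # Allow inbound TCP traffic on this interface from the example subnet only.
        - action: Allow
          protocol: TCP
          source:
            nets:
              - 192.0.2.0/24
        # Deny all other inbound TCP traffic to this interface.
        - action: Deny
          protocol: TCP
      egress:
        # Leave outbound traffic from this interface unrestricted.
        - action: Allow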
Restricting NodePort service access
After location and cluster creation
By default, NodePort services are accessible on all network interfaces that are available to the cluster (the 0.0.0.0 listening address). However, in Satellite locations where multiple networks are available to the hosts, you can limit the available network interfaces for your services.

To limit the range, you restrict the listening addresses for NodePort services at the cluster level. This restriction allows the cluster administrator to limit access to a specific network interface by using the IP subnet as the allowed listening address range. Complete the following steps to reconfigure the kube-proxy component to limit the listening address range for your NodePort services.

Incorrectly configuring the node-port-addresses option might isolate your services from valid sources. Make sure that you plan for all the subnets that your services need. IBM Cloud doesn't require access to any subnet to manage your clusters.
- Prepare your planned source subnet CIDR list that you want to allow to access the NodePort services.

- Run the following command to get the network.operator.openshift.io configuration and save a copy in case you need to revert the changes.

      kubectl get network.operator.openshift.io cluster -o yaml

- Edit the network.operator.openshift.io configuration and set the subnet list under the spec section to include the required subnets for your NodePort service.

      spec:
        kubeProxyConfig:
          proxyArguments:
            node-port-addresses:
              - 192.0.2.0/24
              - 198.51.100.0/24

- Save your changes and apply them to the cluster.

      oc apply -f updated-network-config.yaml

- For cluster version 4.10.x and earlier, set the management state of the cluster network operator to Unmanaged.

      oc patch network.operator.openshift.io cluster --type=merge --patch '{"spec": {"managementState": "Unmanaged"}}'

- Restart the kube-proxy DaemonSet to apply the changes. This operation is not disruptive.

      oc rollout restart ds -n openshift-kube-proxy openshift-kube-proxy

- Wait until all your kube-proxy pods are restarted. Check the status by running the following command.

      oc get po -n openshift-kube-proxy --selector app=kube-proxy

- For cluster version 4.10.x and earlier, reset the management state of the cluster network operator to Managed. Note that this action might restart the proxy pods.

      oc patch network.operator.openshift.io cluster --type=merge --patch '{"spec": {"managementState": "Managed"}}'
After all pods are restarted, your cluster is configured with the restricted subnets. You can repeat these steps to update or remove the subnet list as needed.
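One hedged way to confirm that the change took effect is to check that your node-port-addresses list appears in the operator configuration, for example:

    oc get network.operator.openshift.io cluster -o yaml | grep -A 3 node-port-addresses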
You can further restrict the traffic by using NetworkPolicies on a per-service basis.
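For example, a hedged sketch of a Kubernetes NetworkPolicy that restricts the pods behind one hypothetical NodePort service might look like the following; the namespace, pod label, and subnet are placeholders. Depending on how traffic is routed to the pods, the source IP might be translated before the policy is evaluated, so verify the behavior in your environment.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-approved-subnet
      namespace: my-namespace
    spec:
      # Select the pods that back the hypothetical NodePort service.
      podSelector:
        matchLabels:
          app: my-nodeport-app
      policyTypes:
        - Ingress
      ingress:
        # Permit ingress only from the example subnet.
        - from:
            - ipBlock:
                cidr: 192.0.2.0/24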