Choosing an app exposure service

Securely expose your apps to external traffic by using the Red Hat OpenShift Ingress controller, or by using an IBM Cloud® Kubernetes Service NodePort or network load balancer service.

Understanding options for exposing apps

To securely expose your apps to external traffic, you can choose from the following services.

Red Hat OpenShift Ingress controller

Expose multiple apps in a cluster by setting up routing with the Red Hat OpenShift Ingress controller. The Ingress controller uses the Ingress subdomain as a secured and unique public or private entry point to route incoming requests. You can use one subdomain to expose multiple apps in your cluster as services. The Ingress controller solution uses three components.

  • The Ingress operator manages the Ingress controllers in your cluster.
  • The Ingress controller is an HAProxy-based Kubernetes service, managed by the Ingress operator, that handles all incoming traffic for the apps in your cluster. The Ingress controller listens for incoming HTTP or HTTPS service requests and forwards them to the pods for the corresponding app according to the routing rules that are defined in the Route resource.
  • The Route resource defines the rules for how to route and load balance incoming requests for an app.

A Route exposes a service as a hostname in the format <service_name>-<project>.<cluster_name>-<random_hash>-0000.<region>.containers.appdomain.cloud. An Ingress controller is deployed by default to your cluster, which enables Routes to be used by external clients. The Ingress controller uses the service selector to find the service and the endpoints that back the service. You can configure the service selector to direct traffic through one Route to multiple services. You can also create either unsecured or secured Routes by using the TLS certificate that is assigned by the Ingress controller for your hostname. Note that the Ingress controller supports only the HTTP and HTTPS protocols.
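
For example, assuming that a ClusterIP service named myapp already exists in the project default, a minimal secured Route might look like the following sketch. The names, port, and edge TLS termination are placeholders for illustration only.

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: myapp-route              # hypothetical Route name
      namespace: default             # project that contains the app
    spec:
      to:
        kind: Service
        name: myapp                  # existing ClusterIP service for the app
      port:
        targetPort: 8080             # service port that the app listens on
      tls:
        termination: edge            # terminate TLS at the Ingress controller

Alternatively, you can run oc expose service myapp to create an unsecured Route and let the Ingress controller generate a hostname in the format shown previously.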

NodePort

When you expose apps with a NodePort service, a NodePort in the range of 30000 - 32767 and an internal cluster IP address are assigned to the service. To access the service from outside the cluster, you use the public or private IP address of any worker node and the NodePort in the format <IP_address>:<nodeport>. However, the public and private IP addresses of the worker node are not permanent. When a worker node is removed or re-created, a new public and a new private IP address are assigned to the worker node. NodePorts are ideal for testing public or private access or providing access for only a short amount of time.
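
For illustration, a NodePort service for a hypothetical app that is labeled app: myapp might look like the following sketch; the service name and port numbers are placeholders.

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-nodeport           # hypothetical service name
    spec:
      type: NodePort
      selector:
        app: myapp                   # label selector for the app's pods
      ports:
        - port: 8080                 # cluster-internal port for the service
          targetPort: 8080           # port that the app container listens on
          nodePort: 30080            # optional; omit to let Kubernetes pick a port in 30000 - 32767

You can then reach the app at <worker_node_IP>:30080 until the worker node is removed or re-created.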

LoadBalancer

The LoadBalancer service type is implemented differently depending on your cluster's infrastructure provider.

  • Classic clusters: Network load balancer (NLB). Every standard cluster is provisioned with four portable public and four portable private IP addresses that you can use to create a layer 4 TCP/UDP network load balancer (NLB) for your app. You can customize your NLB by exposing any port that your app requires. The portable public and private IP addresses that are assigned to the NLB are permanent and don't change when a worker node is re-created in the cluster. If you create public NLBs, you can create a subdomain for your app that registers the public NLB IP addresses with a DNS entry. You can also enable health check monitors on the NLB IPs for each subdomain.
  • VPC clusters: Load Balancer for VPC. When you create a Kubernetes LoadBalancer service for an app in your cluster, a layer 7 VPC load balancer is automatically created in your VPC outside of your cluster. The VPC load balancer is multizonal and routes requests for your app through the private NodePorts that are automatically opened on your worker nodes. By default, the load balancer is also created with a hostname that you can use to access your app, but you can also create a subdomain for your app that creates a DNS entry. A minimal example of a LoadBalancer service definition is sketched after this list.
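
The following sketch is a minimal public LoadBalancer service that applies to either infrastructure provider; the service name, labels, and ports are placeholders, and the external IP address (classic) or hostname (VPC) is assigned by the provider after the service is created.

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-lb                 # hypothetical service name
    spec:
      type: LoadBalancer
      selector:
        app: myapp                   # label selector for the app's pods
      ports:
        - port: 443                  # port that the load balancer exposes
          targetPort: 8080           # port that the app container listens on

After you create the service, oc get service myapp-lb shows the assigned IP address or hostname in the EXTERNAL-IP column.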

Ingress

You can use an Ingress resource to expose your app to external traffic through the Red Hat OpenShift Ingress controller. The Red Hat OpenShift Controller Manager converts your Ingress resources to Route resources, and the Red Hat OpenShift Ingress controller processes those Routes.
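
For reference, a minimal Kubernetes Ingress resource that the Controller Manager can convert to a Route might look like the following sketch; the hostname, service name, and port are placeholders only.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp-ingress            # hypothetical Ingress name
    spec:
      rules:
        - host: myapp.example.com    # replace with a host in your Ingress subdomain
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp      # existing service for the app
                    port:
                      number: 8080   # service port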

Choosing among load balancing solutions

Now that you understand what options you have to expose apps in your Red Hat OpenShift cluster, choose the best solution for your workload.

The following table compares the features of each app exposure method.

Comparison of external networking for apps in Red Hat OpenShift clusters
Characteristics | NodePort | LoadBalancer (Classic - NLB) | LoadBalancer (VPC load balancer) | Ingress controller
Stable external IP | - | Yes | - | Yes
External hostname | - | Yes | Yes | Yes
SSL termination | - | Yes* | Yes* | Yes
HTTP(S) load balancing | - | - | - | Yes
Custom routing rules | - | - | - | Yes
Multiple apps per route or service | - | - | - | Yes
Consistent hybrid multicloud deployment | - | - | - | Yes

* SSL termination is provided by ibmcloud oc nlb-dns commands. In classic clusters, these commands are supported for public NLBs only.
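
For example, registering a subdomain for a public NLB IP address in a classic cluster looks roughly like the following command. The cluster name and IP address are placeholders, and the available options can vary by CLI plug-in version, so check ibmcloud oc nlb-dns create classic --help before you run it.

    ibmcloud oc nlb-dns create classic --cluster mycluster --ip 169.xx.xx.xx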

Planning public external load balancing

Publicly expose an app in your cluster to the internet.

In classic clusters, your worker nodes are connected to a public VLAN. The public VLAN determines the public IP address that is assigned to each worker node, which provides each worker node with a public network interface. Public networking services connect to this public network interface by providing your app with a public IP address and, optionally, a public URL.

In VPC clusters, your worker nodes are connected to private VPC subnets only. However, when you create public networking services, a VPC load balancer is automatically created that can route public requests to your app by providing your app with a public URL.

When an app is publicly exposed, anyone that has the public service IP address or the URL that you set up for your app can send a request to your app. For this reason, expose as few apps as possible. Expose an app to the public only when your app is ready to accept traffic from external web clients or users.

The public network interface for worker nodes is protected by predefined Calico network policy settings that are configured on every worker node during cluster creation. By default, all outbound network traffic is allowed for all worker nodes. Inbound network traffic is blocked except for a few ports. These ports are opened so that IBM can monitor network traffic and automatically install security updates for the Kubernetes master, and so that connections can be established to public networking services. For more information about these policies, including how to modify them, see Network policies.

Public app networking for classic clusters

To make an app publicly available to the internet in a classic cluster, choose an app exposure method that uses routes, NodePorts, NLBs, or Ingress. The following table describes each possible method, why you might use it, and how to set it up. For basic information about the networking services that are listed, see Understanding Kubernetes service types.

You can't use multiple app exposure methods for one app.

Characteristics of public app exposure methods

Route
  Load-balancing method: HTTP(S) load balancing that exposes the app with a subdomain and uses custom routing rules.
  Use case: Implement custom routing rules and SSL termination for multiple apps. Choose this method to remain Red Hat OpenShift-native; for example, you can use the Red Hat OpenShift web console to create and manage routes.
  Implementation:
    1. Create a ClusterIP service to assign an internal IP address to your app.
    2. Set up a Red Hat OpenShift route.
    3. Customize routing rules with optional configurations.

NodePort
  Load-balancing method: Port on a worker node that exposes the app on the worker's public IP address.
  Use case: Test public access to one app or provide access for only a short amount of time.
  Implementation: Create a public NodePort service.

NLB v1.0 (+ subdomain)
  Load-balancing method: Basic load balancing that exposes the app with an IP address or a subdomain.
  Use case: Quickly expose one app to the public with an IP address or a subdomain that supports SSL termination.
  Implementation: Create a public network load balancer (NLB) 1.0 in a single or multizone cluster. Optionally register a subdomain and health checks.

NLB v2.0 (+ subdomain)
  Load-balancing method: DSR load balancing that exposes the app with an IP address or a subdomain.
  Use case: Expose an app that might receive high levels of traffic to the public with an IP address or a subdomain that supports SSL termination.
  Implementation:
    1. Complete the prerequisites.
    2. Create a public NLB 2.0 in a single or multizone cluster.
    3. Optionally register a subdomain and health checks.

Ingress controller
  Load-balancing method: HTTP(S) load balancing that exposes the app with a subdomain and uses custom routing rules.
  Use case: Implement custom routing rules and SSL termination for multiple apps.
  Implementation: Create an Ingress resource for the default public Ingress controller.

Public app networking for VPC clusters

To make an app publicly available to the internet in a VPC cluster, choose an app exposure method that uses routes, VPC load balancers, or Ingress. The following table describes each possible method, why you might use it, and how to set it up. For basic information about the networking services that are listed, see Understanding Kubernetes service types.

You can't use multiple app exposure methods for one app.

Characteristics of public app exposure methods

Route
  Load-balancing method: HTTP(S) load balancing that exposes the app with a subdomain and uses custom routing rules.
  Use case: Implement custom routing rules and SSL termination for multiple apps. Choose this method to remain Red Hat OpenShift-native; for example, you can use the Red Hat OpenShift web console to create and manage routes.
  Implementation: Create a route by using the default public Ingress controller in clusters with a public cloud service endpoint, or create a route by using a custom public Ingress controller in clusters with a private cloud service endpoint only.

VPC load balancer
  Load-balancing method: Basic load balancing that exposes the app with a hostname.
  Use case: Quickly expose one app to the public with a VPC load balancer-assigned hostname.
  Implementation: Create a public LoadBalancer service in your cluster. A multizonal VPC load balancer is automatically created in your VPC and assigns a hostname to the LoadBalancer service for your app.

Ingress
  Load-balancing method: HTTP(S) load balancing that exposes the app with a subdomain and uses custom routing rules.
  Use case: Implement custom routing rules and SSL termination for multiple apps.
  Implementation: Create an Ingress resource for the default public Ingress controller in clusters with a public cloud service endpoint, or create an Ingress resource for a custom public Ingress controller in clusters with a private cloud service endpoint only.

Planning private external load balancing

Privately expose an app in your cluster to the private network only.

When you deploy an app in a Kubernetes cluster in IBM Cloud Kubernetes Service, you might want to make the app accessible only to users and services that are on the same private network as your cluster. Private load balancing is ideal for making your app available to requests from outside the cluster without exposing the app to the general public. You can also use private load balancing to test access, request routing, and other configurations for your app before you expose your app to the public with public network services.

As an example, say that you create a private load balancer for your app. This private load balancer can be accessed by:

  • Any pod in that same cluster.
  • Any pod in any cluster in the same IBM Cloud account.
  • If you're not in the IBM Cloud account but still behind the company firewall, any system through a VPN connection to the subnet that the load balancer IP is on.
  • If you're in a different IBM Cloud account, any system through a VPN connection to the subnet that the load balancer IP is on.
  • In classic clusters, if you have VRF or VLAN spanning enabled, any system that is connected to any of the private VLANs in the same IBM Cloud account.
  • In VPC clusters:
    • If traffic is permitted between VPC subnets, any system in the same VPC.
    • If traffic is permitted between VPCs, any system that has access to the VPC that the cluster is in.
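
A private load balancer like the one in this example can be requested with a LoadBalancer service that carries the IBM Cloud provider annotation for a private IP address or hostname, as in the following sketch. The service name, labels, and ports are placeholders, and you should verify the annotation against the current provider documentation for your cluster type.

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-private-lb         # hypothetical service name
      annotations:
        service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private"   # request a private NLB IP (classic) or a private VPC load balancer
    spec:
      type: LoadBalancer
      selector:
        app: myapp                   # label selector for the app's pods
      ports:
        - port: 8080                 # port that the load balancer exposes
          targetPort: 8080           # port that the app container listens on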

Private app networking for classic clusters

When your worker nodes are connected to both a public and a private VLAN, you can make your app accessible from a private network only by creating private routes, NodePorts, or NLBs, or by setting up Ingress. Then, you can create Calico policies to block public traffic to the services.

The public network interface for worker nodes is protected by predefined Calico network policy settings that are configured on every worker node during cluster creation. By default, all outbound network traffic is allowed for all worker nodes. Inbound network traffic is blocked except for a few ports. These ports are opened so that IBM can monitor network traffic and automatically install security updates for the Kubernetes master, and so that connections can be established to NodePort, LoadBalancer, and Ingress services.

Because the default Calico network policies allow inbound public traffic to these services, you can create Calico policies to instead block all public traffic to the services. For example, a NodePort service opens a port on a worker node over both the private and public IP address of the worker node. An NLB service with a portable private IP address opens a public NodePort on every worker node. You must create a Calico preDNAT network policy to block public NodePorts.
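
As an illustration of such a policy, the following sketch denies inbound public TCP and UDP traffic to the NodePort range. The policy name, order value, and the ibm.role selector are assumptions based on the default worker node host endpoint labels, so compare them with the predefined policies in your cluster before you apply anything.

    apiVersion: projectcalico.org/v3
    kind: GlobalNetworkPolicy
    metadata:
      name: deny-public-nodeports               # hypothetical policy name
    spec:
      applyOnForward: true
      preDNAT: true                              # evaluate before DNAT so that NodePort traffic is matched
      order: 1100                                # assumed order; lower values are evaluated first
      selector: ibm.role == 'worker_public'      # assumed selector for the public host endpoints
      types:
        - Ingress
      ingress:
        - action: Deny
          protocol: TCP
          destination:
            ports:
              - 30000:32767                      # public NodePort range
        - action: Deny
          protocol: UDP
          destination:
            ports:
              - 30000:32767

Apply the policy with calicoctl apply -f <filename> after you configure the calicoctl CLI to target your cluster.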

Check out the following methods for private app networking:

Characteristics of network deployment patterns for a public and a private VLAN setup

Route
  Load-balancing method: HTTP(S) load balancing that exposes the app with a subdomain and uses custom routing rules.
  Use case: Implement custom routing rules and SSL termination for multiple apps. Choose this method to remain Red Hat OpenShift-native; for example, you can use the Red Hat OpenShift web console to create and manage routes.
  Implementation:
    1. Create a ClusterIP service to assign an internal IP address to your app.
    2. Create an Ingress controller that is exposed by a private load balancer.
    3. Set up a Red Hat OpenShift route.
    4. Customize routing rules with optional configurations.

NodePort
  Load-balancing method: Port on a worker node that exposes the app on the worker's private IP address.
  Use case: Test private access to one app or provide access for only a short amount of time.
  Implementation:
    1. Create a NodePort service. Note that a NodePort service opens a port on a worker node over both the private and public IP addresses of the worker node.
    2. Create a Calico preDNAT network policy to block traffic to the public NodePorts.

NLB 1.0
  Load-balancing method: Basic load balancing that exposes the app with a private IP address.
  Use case: Quickly expose one app to a private network with a private IP address.
  Implementation:
    1. Create a private NLB service. Note that an NLB with a portable private IP address still has a public NodePort open on every worker node.
    2. Create a Calico preDNAT network policy to block traffic to the public NodePorts.

NLB v2.0
  Load-balancing method: DSR load balancing that exposes the app with a private IP address.
  Use case: Expose an app that might receive high levels of traffic to a private network with an IP address.
  Implementation:
    1. Complete the prerequisites.
    2. Create a private NLB 2.0 in a single or multizone cluster. Note that an NLB with a portable private IP address still has a public NodePort open on every worker node.
    3. Create a Calico preDNAT network policy to block traffic to the public NodePorts.

Ingress
  Load-balancing method: HTTP(S) load balancing that exposes the app with a subdomain and uses custom routing rules.
  Use case: Implement custom routing rules and SSL termination for multiple apps.
  Implementation: See Publicly exposing apps with Ingress.

Private app networking for VPC clusters

To make an app available over a private network only in a VPC cluster, choose a load balancing deployment pattern based on your cluster's service endpoint setup: public and private cloud service endpoint, or private cloud service endpoint only. For each service endpoint setup, the following table describes each possible app exposure method, why you might use it, and how to set it up.

Private network deployment patterns for a VPC cluster

Route
  Load-balancing method: HTTP(S) load balancing that exposes the app with a subdomain and uses custom routing rules.
  Use case: Implement custom routing rules and SSL termination for multiple apps. Choose this method to remain Red Hat OpenShift-native; for example, you can use the Red Hat OpenShift web console to create and manage routes.
  Implementation: Create a route by using the default private Ingress controller in clusters with a private cloud service endpoint only, or create a route by using a custom private Ingress controller in clusters with a public cloud service endpoint.

NodePort
  Load-balancing method: Port on a worker node that exposes the app on the worker's private IP address.
  Use case: Test private access to one app or provide access for only a short amount of time.
  Implementation: Create a private NodePort service.

VPC load balancer
  Load-balancing method: Basic load balancing that exposes the app with a private hostname.
  Use case: Quickly expose one app to a private network with a VPC load balancer-assigned private hostname.
  Implementation: Create a private LoadBalancer service in your cluster. A multizonal VPC load balancer is automatically created in your VPC and assigns a hostname to the LoadBalancer service for your app.

Ingress
  Load-balancing method: HTTP(S) load balancing that exposes the app with a subdomain and uses custom routing rules.
  Use case: Implement custom routing rules and SSL termination for multiple apps.
  Implementation: Create an Ingress resource for the default private Ingress controller in clusters with a private cloud service endpoint only, or create an Ingress resource for a custom private Ingress controller in clusters with a public cloud service endpoint.
Ingress HTTP(S) load balancing that exposes the app with a subdomain and uses custom routing rules Implement custom routing rules and SSL termination for multiple apps. Create an Ingress resource for the default private Ingress controller in clusters with a private cloud service endpoint only, or create an Ingress resource for a custom private Ingress controller in clusters with a public cloud service endpoint.