Connecting Locations with IBM Cloud using Direct Link
- Supported location types: Red Hat CoreOS (RHCOS)-enabled Locations and Connectors
- Supported host operating systems: Red Hat CoreOS (RHCOS) and RHEL
Use a secure IBM Cloud® Direct Link connection for Satellite Link communications between your services running in an IBM Cloud Satellite® Location and IBM Cloud®.
In this tutorial, you set up your Satellite Link to use a Direct Link connection. The Link tunnel client at your Location sends traffic over the Direct Link connection to a Relay that you create in your IBM Cloud account. This Relay proxies the traffic to the Link tunnel server's IP address in the IBM Cloud private network.
FAQ
- Is the cost of the relay compute resources included in the Satellite service costs?
- The IBM Cloud resources used for the relay and Satellite are billed separately.
- Are there additional charges to access IBM Cloud services over Direct Link?
- No, there are no additional charges for accessing services over Direct Link.
- Why do I need Direct Link?
- Normally, outbound traffic from your Location to IBM Cloud services might flow over the public internet. When you use Direct Link, outbound traffic from your Location flows through Direct Link rather than over the public internet.
- My organization disables Internet access by design. Can I create and maintain Locations and hosts attached to the Location with Direct Link?
- If you have Direct Link, you can use it for Satellite services. With Direct Link, you can create Locations and attach hosts without access to the public internet.
- Can I use RHEL hosts to set up my Direct Link?
- No. You must have an RHCOS-enabled location and use RHCOS hosts in that location to use Direct Link.
- Can I redirect all traffic to IBM Cloud over Direct Link instead of Internet?
- Currently, not all services support Direct Link. So, depending on the services you use, it might or might not be possible for all traffic to use Direct Link.
- What IBM Cloud services can I access over Direct Link to avoid accessing them over Internet?
- After you follow these instructions, Satellite and Red Hat OpenShift on Satellite work across Direct Link. Additional services deployed into a Satellite location might have features that require public internet access. Consult the documentation for each service running in a location to verify its connectivity requirements.
- If I have two Locations that use Direct Link, can I use them for Direct Link to fail over from one Location to the other?
- This functionality is not yet available.
- How do I size Direct Link capacity for my Location?
- There are no additional sizing requirements for using Direct Link. Size your Location as you would any other Location, based on the services you plan to use.
- Can I have one-click deployment of everything needed to enable Direct Link to avoid manual errors?
- Currently, a one-click deployment for Direct Link is not available. It might be available at a future time.
Target use case
Customers who currently use Direct Link between IBM Cloud and on-premises environments or other public clouds can continue to use it for Satellite Link. This allows customers to:
- Access services on IBM Cloud from a Satellite location over Direct Link; examples include backups in IBM Cloud® Object Storage, sending metrics to IBM Cloud Monitoring, tracking events in IBM Cloud Activity Tracker, or sending logs to IBM Cloud Log Analysis.
- Access services running in a Satellite location from IBM Cloud.
- Access public cloud services outside of IBM Cloud.
These services can be accessed by using Satellite endpoint addresses created to route traffic over Direct Link instead of the internet.
This prevents sensitive data, such as logs, backups, or data exchanged between integrated services across the hybrid cloud landscape, from traversing the public internet. It also helps optimize ingress/egress charges.
Overview
By default, two Satellite Link components, the tunnel server and the connector, proxy network traffic between IBM Cloud and resources in your Satellite location over a secure TLS connection. This document covers the use case of using a TLS connection over Direct Link.
This setup uses the tunnel server's private cloud service endpoint to route traffic over the IBM Cloud private network (166.9.0.0/16; see Service network).
However, communication to the tunnel server's private cloud service endpoint must go through the 166.9.0.0/16 IP address range on the IBM Cloud private network, which is not routable from IBM Cloud Direct Link.
To enable access to the 166.9.0.0/16 range, create a Relay in your IBM Cloud account that reverse proxies incoming traffic to the tunnel server's private cloud service endpoint. By default, the Relay Ingress has an IP address in the internal 10.0.0.0/8 range, which is accessible over a Direct Link connection.
The following diagram demonstrates the flow of traffic.
- Network traffic originating at your Location, such as a request from an IBM Cloud Satellite cluster to an IBM Cloud service, is routed via the Link Service over Direct Link to the Relay Private Ingress, which has a Direct Link-routable address.
- The Relay initiates a new session to forward the request to the private cloud service endpoint of the tunnel server, which terminates at an IP address in the 166.9.0.0/16 range (the Link private address).
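To make the addressing concrete, the following shell sketch classifies which range an address falls in. The addresses and the in_range helper are hypothetical and purely illustrative; they are not part of any IBM CLI.

```shell
# Hypothetical addresses for illustration only.
tunnel_ip="166.9.12.34"   # tunnel server private cloud service endpoint
relay_ip="10.45.6.78"     # Relay private Ingress, reachable over Direct Link

# Crude prefix test on dotted-quad strings (illustrative helper).
in_range() {
  case "$1" in
    166.9.*) echo "166.9.0.0/16 (not routable over Direct Link)" ;;
    10.*)    echo "10.0.0.0/8 (routable over Direct Link)" ;;
    *)       echo "other" ;;
  esac
}

in_range "$tunnel_ip"   # -> 166.9.0.0/16 (not routable over Direct Link)
in_range "$relay_ip"    # -> 10.0.0.0/8 (routable over Direct Link)
```

This is why the Relay is needed: the Direct Link side can only reach the 10.0.0.0/8 address, so the Relay bridges traffic onward to the 166.9.0.0/16 endpoint.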
Objectives
You can create Red Hat CoreOS-enabled Locations without accessing the public internet. All traffic is handled by Direct Link and stays internal.
The high-level steps include:
- Create a Red Hat CoreOS enabled Satellite Location with your IBM Cloud account that terminates your Direct Link.
- Create a relay, which is a reverse proxy that supports http/https and secure websocket.
- Provision Red Hat CoreOS hosts. Customize the hosts by using the ignition scripts that are downloaded as the attach script for the Location created in step 1.
Prerequisites
- You must have a Red Hat CoreOS enabled Satellite Location. If you don't have one already, follow the instructions in Creating a Red Hat CoreOS enabled Satellite Location to create it.
- Direct Link is available between the target Satellite location and your specific IBM Cloud VPC or classic clusters.
- Ensure that your IBM Cloud Direct Link connection can access the 10.0.0.0/8 IP address range. Review your network design to avoid IP conflicts between the two ends of the Direct Link.
- Install the IBM Cloud CLI and plug-ins, and install the Kubernetes CLI (kubectl).
- Ensure that your IBM Cloud account is Virtual Router Function (VRF) enabled to use service endpoints.
- Ensure you have the following access policies. For more information, see Checking user permissions.
- Administrator IBM Cloud IAM platform access role for IBM Cloud Kubernetes Service
- Administrator IBM Cloud IAM platform access role for IBM Cloud Container Registry
- Writer or Manager IBM Cloud IAM service access role for IBM Cloud Kubernetes Service
- Administrator IBM Cloud IAM platform access role for IBM Cloud Satellite
- Manager IBM Cloud IAM service access role for IBM Cloud Satellite
- Administrator IBM Cloud IAM platform access role for Object Storage
- Writer or Manager IBM Cloud IAM service access role for IBM Cloud Object Storage
- Administrator IBM Cloud IAM platform access role for IBM Cloud Certificate Manager
- Writer or Manager IBM Cloud IAM service access role for IBM Cloud Certificate Manager
- Viewer IBM Cloud IAM platform access role for the resource group that you plan to use with Satellite
- Manager IBM Cloud IAM service access role for IBM Cloud Schematics
- Provision a Kubernetes cluster and deploy an NGINX reverse proxy in it to forward traffic to the Direct Link endpoints.
Creating a Red Hat CoreOS enabled Satellite Location
You can skip this step if you already have a Red Hat CoreOS enabled Satellite Location.
Log in to your IBM Cloud account that has Direct Link and create a Red Hat CoreOS enabled Satellite Location. For more information, see Creating a Satellite location.
Creating a relay
The relay is an http/https reverse proxy that supports secure WebSocket connections. It can run on a VSI, or on Red Hat OpenShift or IBM Cloud Kubernetes Service on classic or VPC infrastructure. The following steps demonstrate an example of deploying an NGINX reverse proxy on a private-only VPC Red Hat OpenShift cluster (on VPC private nodes).
One essential requirement is a valid host name that can be assigned to the cluster private Ingress (Relay Ingress) and a valid certificate on IBM Cloud. On IBM Cloud, VPC Red Hat OpenShift clusters on private nodes come with a default private host name and certificate. You can use them or bring your own custom host name and certificate. This example uses the default private host name and certificate.
VPC clusters considerations for this scenario:
- Zone: Any multizone-capable VPC zone
- Worker node flavor: Any VPC infrastructure flavor
- Version: 4.x.x
- Worker pool: At least 2 worker nodes
- Subnets: Include Ingress load balancer IP subnets if the default ranges conflict with the --pod-subnet and --service-subnet values of the Red Hat OpenShift cluster on Satellite, or with the network CIDR where the Satellite or Red Hat OpenShift hosts are deployed on-premises.
- Cloud service endpoints: Do not specify the --disable-public-service-endpoint option if you want both public and private endpoints.
- Spread the default worker pool across zones to increase the availability of your classic or VPC cluster.
- Ensure that at least 2 worker nodes exist in each zone, so that the private ALBs that you configure in subsequent steps are highly available and can properly receive version updates.
In the following example, a private-only VPC cluster and private Ingress controller are created by default. However, you can also use a Red Hat OpenShift cluster with a public cloud service endpoint enabled, but in this case your cluster is created with only a public Ingress controller by default. If you want to set up your relay by using a cluster with a public service endpoint, you must first enable the private Ingress controller and register it with a subdomain and certificate by following the steps in Setting up Ingress.
-
Create a private-only Red Hat OpenShift cluster on VPC. For more information, see Creating VPC clusters.
There are many ways to expose apps in a Red Hat OpenShift cluster in a VPC. In this example, the app is privately exposed with private endpoints only, which is the most common use case for Direct Link customers. Red Hat OpenShift clusters that are privately exposed with private endpoints only come with a default private host name and certificate, which are used in this example to expose the NGINX reverse proxy pods. You can use the defaults or bring your own custom host name and certificate. For more details, see Privately exposing apps in VPC clusters with a private cloud service endpoint only.
-
Create a Secrets Manager instance and register it to the Red Hat OpenShift cluster that was created in the previous step. For more information, see Creating a Secrets Manager service instance.
-
Get the Ingress details of the cluster.
ibmcloud oc cluster get --cluster CLUSTER_NAME_OR_ID | grep Ingress
Example output:
Ingress Subdomain:   mycluster-i000.us-south.containers.appdomain.cloud
Ingress Secret:      mycluster-i000
In this scenario, if you run the nslookup command against the Ingress Subdomain, it resolves to an IBM service private IP address in the 10.0.0.0/8 range. Adding routes to make the Ingress IP address reachable from your on-premises network is not covered in this document. You are responsible for facilitating routing between on-premises and the Ingress relay on IBM Cloud.
-
Get the secret CRN.
ibmcloud oc ingress secret get -c CLUSTER --name SECRET_NAME --namespace openshift-ingress
-
Create a namespace for the NGINX reverse proxy.
kubectl create ns dl-reverse-proxy
-
Copy the default TLS secret from openshift-ingress to the project where NGINX is going to be deployed.
ibmcloud oc ingress secret create --cluster CLUSTER_NAME_OR_ID --cert-crn CRN --name SECRET_NAME --namespace dl-reverse-proxy
-
Copy the following Ingress resource file content into your local directory. Replace VALUE_FROM_INGRESS_SUBDOMAIN and VALUE_FROM_INGRESS_SECRET with your own values.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dl-ingress-resource
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  tls:
  - hosts:
    - satellite-dl.VALUE_FROM_INGRESS_SUBDOMAIN
    secretName: VALUE_FROM_INGRESS_SECRET
  rules:
  - host: satellite-dl.VALUE_FROM_INGRESS_SUBDOMAIN
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginxsvc
            port:
              number: 80
-
Create the Ingress.
oc apply -f myingressresource.yaml -n dl-reverse-proxy
-
Get the tunnel server Direct Link internal Ingress host name by running the following command.
ibmcloud sat endpoint ls --location LOCATION_ID
-
From the output, take note of the Location endpoint. Replace c-01, c-02, or c-03 with d-01-ws, d-02-ws, or d-03-ws, and remove the port. For example, c-01.private.us-south.link.satellite.cloud.ibm.com:40934 becomes d-01-ws.private.us-south.link.satellite.cloud.ibm.com. This value can be used as the proxy_pass https value in the ConfigMap file.
-
Copy the NGINX ConfigMap file content into your local directory. This configuration applies either a WebSocket reverse proxy or an HTTPS reverse proxy to the tunnel server Direct Link endpoint. Replace VALUE_FROM_INGRESS_SUBDOMAIN and VALUE_FOR_PROXY_PASS with your own values.
apiVersion: v1
kind: ConfigMap
metadata:
  name: confnginx
data:
  nginx.conf: |
    user nginx;
    worker_processes 1;
    error_log /var/log/nginx/error.log warn;
    events {
      worker_connections 4096;
    }
    http {
      include /etc/nginx/mime.types;
      default_type application/octet-stream;
      log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
      access_log /var/log/nginx/access.log main;
      sendfile on;
      keepalive_timeout 65;
      server_names_hash_bucket_size 128;
      server {
        listen 80;
        server_name VALUE_FROM_INGRESS_SUBDOMAIN;
        proxy_connect_timeout 180;
        proxy_send_timeout 180;
        proxy_read_timeout 180;
        location /ws {
          proxy_pass https://VALUE_FOR_PROXY_PASS;
          proxy_ssl_server_name on;
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "upgrade";
        }
        location / {
          proxy_pass https://VALUE_FOR_PROXY_PASS;
        }
      }
    }
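The c-01 to d-01-ws rename and port removal from the earlier step can be scripted to avoid copy-paste errors. A minimal sketch, assuming the example endpoint value shown earlier in this tutorial:

```shell
# Hypothetical endpoint value taken from `ibmcloud sat endpoint ls` output.
endpoint="c-01.private.us-south.link.satellite.cloud.ibm.com:40934"

# Swap the c-0N prefix for d-0N-ws and strip the trailing :PORT.
ingress_host=$(printf '%s\n' "$endpoint" | sed -E 's/^c-0([1-3])/d-0\1-ws/; s/:[0-9]+$//')

echo "$ingress_host"
# -> d-01-ws.private.us-south.link.satellite.cloud.ibm.com
```

The resulting value is what you substitute for VALUE_FOR_PROXY_PASS in the ConfigMap.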
-
Copy the NGINX deployment file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: nginx-config
        configMap:
          name: confnginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  - port: 8080
    protocol: TCP
    name: tcp
  selector:
    app: nginx
-
Create the ConfigMap in the NGINX project (dl-reverse-proxy).
oc apply -f confnginx.yaml -n dl-reverse-proxy
-
Set the correct SCC profile and deploy the NGINX app (dl-reverse-proxy).
oc adm policy add-scc-to-user anyuid system:serviceaccount:dl-reverse-proxy:default
oc apply -f nginx-app.yaml -n dl-reverse-proxy
-
Verify that NGINX is running by listing the pods.
oc get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-757fbc9f85-gv2p6   1/1     Running   0          53s
nginx-757fbc9f85-xvmrj   1/1     Running   0          53s
-
Check logs.
oc logs -f nginx-757fbc9f85-gv2p6
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
-
Check Ingress.
oc get ingress
NAME                  CLASS    HOSTS                                                                                                       ADDRESS                                                                                                     PORTS     AGE
dl-ingress-resource   <none>   mysatellite-dl.myname-cluster10-22bfd3cd491bdeb5a0f661fb1e2b0c44-0000.us-south.containers.appdomain.cloud   router-default.myname-cluster10-22bfd3cd491bdeb5a0f661fb1e2b0c44-0000.us-south.containers.appdomain.cloud   80, 443   19m
-
Connect to the reverse proxy URL.
curl -k https://mysatellite-dl.myname-cluster10-22bfd3cd491bdeb5a0f661fb1e2b0c44-0000.us-south.containers.appdomain.cloud
{"status":"UP"}
Redirecting traffic to use the Direct Link path
Now that the relay is ready to proxy incoming traffic to the tunnel server internal Ingress, you can set up your Location host or Connector to redirect its traffic through the relay. This ensures that all the traffic will stay on the Direct Link path in your private network and no traffic uses the public internet.
Redirect traffic for your Connector agent or Location host by following the applicable instructions below.
Using a Connector agent (Docker or Windows)
Follow the instructions in Configuring a Tunnel server Ingress host for your Satellite Connector agent, but set the SATELLITE_CONNECTOR_DIRECT_LINK_INGRESS parameter to the relay Ingress host created in step 2 (mysatellite-dl.myname-cluster10-22bfd3cd491bdeb5a0f661fb1e2b0c44-0000.us-south.containers.appdomain.cloud) instead of to the internal Ingress host itself. For example:
-
On a container platform, in your env.txt file:
SATELLITE_CONNECTOR_DIRECT_LINK_INGRESS=mysatellite-dl.myname-cluster10-22bfd3cd491bdeb5a0f661fb1e2b0c44-0000.us-south.containers.appdomain.cloud
-
On Windows, in your config.json file:
"SATELLITE_CONNECTOR_DIRECT_LINK_INGRESS": "mysatellite-dl.myname-cluster10-22bfd3cd491bdeb5a0f661fb1e2b0c44-0000.us-south.containers.appdomain.cloud"
Using a Location Host (CoreOS or RHEL)
-
Run the following CLI command to download the host attachment script for your Location.
ibmcloud sat host attach --location LOCATION --operating-system SYSTEM --host-link-agent-endpoint ENDPOINT
--location LOCATION
- The name or ID of the Satellite location.
--operating-system SYSTEM
- The operating system of the hosts you want to attach to your location (RHEL or RHCOS).
--host-link-agent-endpoint ENDPOINT
- The endpoint that the link agent uses to connect to the Link tunnel server. In this case, the relay Ingress host created in step 2 (mysatellite-dl.myname-cluster10-22bfd3cd491bdeb5a0f661fb1e2b0c44-0000.us-south.containers.appdomain.cloud).
-
Attach the host agent by following the applicable instructions for your host operating system in Attaching on-prem hosts to your location.