Setting up your first cluster in your Virtual Private Cloud (VPC)
Create a Red Hat® OpenShift® on IBM Cloud® cluster in your Virtual Private Cloud (VPC).
- Red Hat OpenShift on IBM Cloud gives you all the advantages of a managed offering for your cluster infrastructure environment, while using the Red Hat OpenShift tooling and catalog that runs on Red Hat Enterprise Linux for your app deployments.
- VPC gives you the security of a private cloud environment with the dynamic scalability of a public cloud. VPC uses the next version of Red Hat OpenShift on IBM Cloud infrastructure providers, with a select group of v2 API, CLI, and console functionality.
Audience
This tutorial is for administrators who are creating a cluster in Red Hat OpenShift on IBM Cloud in VPC compute for the first time.
Objectives
In the tutorial lessons, you create a Red Hat OpenShift on IBM Cloud cluster in a Virtual Private Cloud (VPC). Then, you access built-in Red Hat OpenShift components, deploy an app in a Red Hat OpenShift project, and expose the app with a VPC load balancer so that external users can access the service.
What you'll get
In this tutorial, you will create the following resources. There are optional steps to delete these resources if you do not want to keep them after completing the tutorial.
- A VPC cluster
- A simple Hello World app deployed to your cluster
- A VPC load balancer to expose your app
Prerequisites
Complete the following prerequisite steps to set up permissions and the command-line environment.
Permissions: If you are the account owner, you already have the required permissions to create a cluster and can continue to the next step. Otherwise, ask the account owner to set up the API key and assign you the minimum user permissions in IBM Cloud IAM.
Command-line tools: For quick access to your resources from the command line, try the IBM Cloud Shell. Otherwise, set up your local command-line environment by completing the following steps.
- Install the IBM Cloud CLI (ibmcloud), Kubernetes Service plug-in (ibmcloud oc), and IBM Cloud Container Registry plug-in (ibmcloud cr).
- Install the Red Hat OpenShift (oc) and Kubernetes (kubectl) CLIs.
- To work with VPC, install the infrastructure-service plug-in. The prefix for running commands is ibmcloud is.
ibmcloud plugin install infrastructure-service
- Update your Kubernetes Service plug-in to the latest version.
ibmcloud plugin update kubernetes-service
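To confirm that your command-line environment is ready, you can list the installed plug-ins and check the client versions. A quick check, assuming the installations completed without errors:
ibmcloud plugin list
oc version --client
kubectl version --client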
Create a cluster in a VPC
Create an IBM Cloud Virtual Private Cloud (VPC) environment. Then, create a Red Hat OpenShift on IBM Cloud cluster on the VPC infrastructure. For more information about VPC, see Getting Started with Virtual Private Cloud.
-
Log in to the account, resource group, and IBM Cloud region where you want to create your VPC environment. The VPC must be set up in the same multizone metro region where you want to create your cluster. In this tutorial, you create a VPC in us-south. For other supported regions, see Multizone metros for VPC clusters. If you have a federated ID, include the --sso option.
ibmcloud login -r us-south [-g <resource_group>] [--sso]
-
Create a VPC for your cluster. For more information, see the docs for creating a VPC in the console or CLI.
- Create a VPC called myvpc and note the ID in the output. VPCs provide an isolated environment for your workloads to run within the public cloud. You can use the same VPC for multiple clusters, such as if you plan to have different clusters host separate microservices that need to communicate with each other. If you want to separate your clusters, such as for different departments, you can create a VPC for each cluster.
ibmcloud is vpc-create myvpc
- Create a public gateway and note the ID in the output. In the next step, you attach the public gateway to a VPC subnet, so that your worker nodes can communicate on the public network. Default Red Hat OpenShift components, such as the web console and OperatorHub, require public network access. If you skip this step, you must instead be connected to your VPC private network, such as through a VPN connection, to access the Red Hat OpenShift web console or access your cluster with kubectl commands.
ibmcloud is public-gateway-create gateway-us-south-1 <vpc_ID> us-south-1
- Create a subnet for your VPC, and note its ID. Consider the following information when you create the VPC subnet:
  - Zones: You must have one VPC subnet for each zone in your cluster. The available zones depend on the region that you created the VPC in. To list available zones in the region, run ibmcloud is zones.
  - IP addresses: VPC subnets provide private IP addresses for your worker nodes and load balancer services in your cluster, so make sure to create a subnet with enough IP addresses, such as 256. You can't change the number of IP addresses that a VPC subnet has later.
  - Public gateways: Include the public gateway that you previously created. You must have one public gateway for each zone in your cluster.
ibmcloud is subnet-create mysubnet1 <vpc_ID> --zone us-south-1 --ipv4-address-count 256 --public-gateway-id <gateway_ID>
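If you want to confirm the VPC resources before you continue, you can list them with the VPC CLI. A quick check, assuming the resources were created in your current resource group:
ibmcloud is vpcs
ibmcloud is public-gateways
ibmcloud is subnets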
-
Create a standard IBM Cloud Object Storage instance to back up the internal registry in your cluster. In the output, note the instance ID.
ibmcloud resource service-instance-create myvpc-cos cloud-object-storage standard global
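If you need to look up the instance ID again later, you can retrieve the instance details. A quick check, assuming you kept the name myvpc-cos from this tutorial; the ID field is the CRN value that you pass to the --cos-instance option when you create the cluster:
ibmcloud resource service-instance myvpc-cos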
-
Create a cluster in your VPC in the same zone as the subnet. The following command creates a version 4.16 cluster in Dallas with the minimum configuration of 2 worker nodes that have at least 4 cores and 16 GB memory so that default Red Hat OpenShift components can deploy. For more information about the command options, see the cluster create vpc-gen2 CLI reference docs.
ibmcloud oc cluster create vpc-gen2 --name myvpc-cluster --zone us-south-1 --version 4.16_openshift --flavor bx2.4x16 --workers 2 [--operating-system REDHAT_8_64] --vpc-id <vpc_ID> --subnet-id <vpc_subnet_ID> --cos-instance <cos_crn> --disable-outbound-traffic-protection
-
List your cluster details. Review the cluster State, check the Ingress Subdomain, and note the Master URL. Your cluster creation might take some time to complete. After the cluster state shows Normal, the cluster network and Ingress components take about 10 more minutes to deploy and update the cluster domain that you use for the Red Hat OpenShift web console and other routes. Before you continue, wait until the cluster is ready by checking that the Ingress Subdomain follows a pattern of <cluster_name>.<globally_unique_account_HASH>-0001.<region>.containers.appdomain.cloud.
ibmcloud oc cluster get --cluster myvpc-cluster
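While you wait, you can also watch the individual worker nodes as they provision. A quick check, assuming the cluster name from this tutorial; wait until each worker reports a healthy state (for example, normal and Ready):
ibmcloud oc worker ls --cluster myvpc-cluster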
-
Add yourself as a user to the Red Hat OpenShift cluster by setting the cluster context.
ibmcloud oc cluster config --cluster myvpc-cluster --admin
-
In your browser, navigate to the address of your Master URL and append /console. For example, https://c0.containers.cloud.ibm.com:23652/console. If time permits, you can explore the different areas of the Red Hat OpenShift web console.
-
From the Red Hat OpenShift web console menu bar, click your profile IAM#user.name@email.com > Copy Login Command. Display and copy the oc login token command into your command line to authenticate via the CLI.
Save your cluster master URL to access the Red Hat OpenShift console later. In future sessions, you can skip the cluster config step and copy the login command from the console instead.
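The copied command follows the standard oc login token pattern. The following is a hypothetical example only; use the token and server values from your own copied command:
oc login --token=sha256~<token> --server=https://<master_URL>:<port>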
-
Verify that the oc commands run properly with your cluster by checking the version.
oc version
Example output
Client Version: v4.16.0
Kubernetes Version: v1.31.2.2
If you can't perform operations that require Administrator permissions, such as listing all the worker nodes or pods in a cluster, download the TLS certificates and permission files for the cluster administrator by running the ibmcloud oc cluster config --cluster myvpc-cluster --admin command.
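For example, you can confirm that you have cluster administrator access by listing the worker nodes, which requires elevated permissions:
oc get nodes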
Deploy an app to your cluster
Quickly deploy a new sample app that is available to requests from inside the cluster only.
-
Create a Red Hat OpenShift project for your Hello World app.
oc new-project hello-world
-
Build the sample app from the source code. With the Red Hat OpenShift new-app command, you can refer to a directory in a remote repository that contains the Dockerfile and app code to build your image. The command builds the image, stores the image in the local Docker registry, and creates the app deployment configurations (dc) and services (svc). For more information about creating new apps, see the Red Hat OpenShift docs.
oc new-app --name hello-world https://github.com/IBM/container-service-getting-started-wt --context-dir="Lab 1"
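While the build runs, you can follow the build logs from the CLI. A quick check, assuming the hello-world build configuration that the new-app command creates:
oc logs -f buildconfig/hello-world -n hello-world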
-
Verify that the sample Hello World app components are created.
- List the hello-world services and note the service name. So far, your app listens for traffic on these internal cluster IP addresses only. In the next lesson, you create a load balancer for the service so that the load balancer can forward external traffic requests to the app.
oc get svc -n hello-world
Example output
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
hello-world   ClusterIP   172.21.xxx.xxx   <none>        8080/TCP   31m
- List the pods. Pods with build in the name are jobs that Completed as part of the new app build process. Make sure that the hello-world pod status is Running.
oc get pods -n hello-world
Example output
NAME                   READY   STATUS      RESTARTS   AGE
hello-world-1-9cv7d    1/1     Running     0          30m
hello-world-1-build    0/1     Completed   0          31m
hello-world-1-deploy   0/1     Completed   0          31m
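If you want to confirm that the app responds from inside the cluster before you expose it, you can send a request from a temporary pod. A sketch that assumes the public curlimages/curl image can be pulled by your cluster; the service DNS name follows the standard <service>.<project>.svc.cluster.local pattern:
oc run curl-test -n hello-world --rm -it --restart=Never --image=curlimages/curl -- curl -s http://hello-world.hello-world.svc.cluster.local:8080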
Set up a VPC load balancer to expose your app publicly
Set up a VPC load balancer to expose your app to external requests on the public network.
When you create a Kubernetes LoadBalancer service in your cluster, a VPC load balancer is automatically created in your VPC outside of your cluster. The VPC load balancer is multizonal and routes requests for your app through the private NodePorts that are automatically opened on your worker nodes. This way, a user can access your app's service through the VPC load balancer, even though your worker node is connected to only a private subnet.
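If you prefer a declarative manifest over the oc expose command that is used in the following steps, you can create the same kind of Kubernetes LoadBalancer service with a YAML definition. A minimal sketch only; the selector is an assumption, so check your pod labels (for example, with oc get pods -n hello-world --show-labels) and adjust it to match before you apply the manifest:
oc apply -n hello-world -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  # Same service name that the tutorial uses with oc expose
  name: hw-lb-svc
spec:
  type: LoadBalancer
  # Assumption: adjust this selector to match the labels on your hello-world pods
  selector:
    app: hello-world
  ports:
  - port: 8080        # port that the service listens on
    targetPort: 8080  # port that the app container listens on
EOF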
-
Create a Kubernetes LoadBalancer service in your cluster to publicly expose the hello world app.
oc expose deployment/hello-world --type=LoadBalancer --name=hw-lb-svc --port=8080 --target-port=8080 -n hello-world
Example output
service "hw-lb-svc" exposed
More about the expose parameters:
- expose: Expose a Kubernetes resource, such as a deployment, as a service so that users can access the resource by using the VPC load balancer hostname.
- deployment/<hello-world-deployment>: The resource type and the name of the resource to expose with this service.
- --name=<hello-world-service>: The name of the service.
- --type=LoadBalancer: The service type to create. In this lesson, you create a LoadBalancer service.
- --port=<8080>: The port on which the service listens for external network traffic.
- --target-port=<8080>: The port that your app listens on and to which the service directs incoming network traffic. In this example, the target-port is the same as the port, but other apps that you create might use a different port.
- -n <hello-world>: The namespace that your deployment is in.
The namespace that your deployment is in. -
Verify that the Kubernetes LoadBalancer service is created successfully in your cluster. When you create the Kubernetes LoadBalancer service, a VPC load balancer is automatically created for you. The VPC load balancer assigns a hostname to your Kubernetes LoadBalancer service that you can see in the LoadBalancer Ingress field of your CLI output. In VPC, services in your cluster are assigned a hostname because the external IP address for the service is not stable. The VPC load balancer takes a few minutes to provision in your VPC. Until the VPC load balancer is ready, you can't access the Kubernetes LoadBalancer service through its hostname.
oc describe service hw-lb-svc -n hello-world
Example CLI output:
Name:                     hw-lb-svc
Namespace:                hello-world
Labels:                   app=hello-world-deployment
Annotations:              <none>
Selector:                 app=hello-world-deployment
Type:                     LoadBalancer
IP:                       172.21.xxx.xxx
LoadBalancer Ingress:     1234abcd-us-south.lb.appdomain.cloud
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32040/TCP
Endpoints:
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  1m    service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   1m    service-controller  Ensured load balancer
-
Verify that the VPC load balancer is created successfully in your VPC. In the output, verify that the VPC load balancer has a Provision Status of active and an Operating Status of online.
The VPC load balancer is named in the format kube-<cluster_ID>-<kubernetes_lb_service_UID>. To see your cluster ID, run ibmcloud oc cluster get --cluster <cluster_name>. To see the Kubernetes LoadBalancer service UID, run kubectl get svc hw-lb-svc -o yaml and look for the metadata.uid field in the output.
ibmcloud is load-balancers
In the following example CLI output, the VPC load balancer that is named kube-bsaucubd07dhl66e4tgg-1f4f408ce6d2485499bcbdec0fa2d306 is created for the Kubernetes LoadBalancer service:
ID                                          Name                                                         Family        Subnets               Is public   Provision status   Operating status   Resource group
r006-d044af9b-92bf-4047-8f77-a7b86efcb923   kube-bsaucubd07dhl66e4tgg-1f4f408ce6d2485499bcbdec0fa2d306   Application   mysubnet-us-south-3   true        active             online             default
-
Send a request to your app by curling the hostname and port of the Kubernetes LoadBalancer service, which the VPC load balancer assigned and which you found in the LoadBalancer Ingress field in step 2. Example:
curl 1234abcd-us-south.lb.appdomain.cloud:8080
Example output
Hello world from hello-world-deployment-5fd7787c79-sl9hn! Your app is up and running in a cluster!
-
Optional: To clean up the resources that you created in this lesson, you can use the labels that are assigned to each app.
- List all the resources for each app in the hello-world project.
oc get all -l app=hello-world -o name -n hello-world
Example output
pod/hello-world-1-dh2ff
replicationcontroller/hello-world-1
service/hello-world
deploymentconfig.apps.openshift.io/hello-world
buildconfig.build.openshift.io/hello-world
build.build.openshift.io/hello-world-1
imagestream.image.openshift.io/hello-world
imagestream.image.openshift.io/node
- Delete all the resources that you created.
oc delete all -l app=hello-world -n hello-world
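To make sure that the VPC load balancer in your VPC is also removed, you can delete the hw-lb-svc service and then the project. This is a hedged cleanup sketch; deleting the Kubernetes LoadBalancer service triggers the removal of the VPC load balancer that was created for it:
oc delete svc hw-lb-svc -n hello-world
oc delete project hello-world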
What's next?
Now that you have a VPC cluster, learn more about what you can do.
- Setting up block storage for your apps
- Backing up your internal image registry to IBM Cloud Object Storage
- VPC cluster limitations
- About the v2 API
Need help, have questions, or want to give feedback on VPC clusters? Try posting in the Slack channel.