Using Terraform on IBM Cloud to manage your own Red Hat OpenShift Container Platform on IBM Cloud classic infrastructure
Use this tutorial to create your own highly available Red Hat® OpenShift Container Platform 3.11 environment on IBM® Cloud classic infrastructure by using Terraform on IBM Cloud.
Instead of manually installing Red Hat® OpenShift Container Platform on IBM Cloud classic infrastructure, check out Red Hat OpenShift on IBM Cloud. This offering lets you create an IBM Cloud Kubernetes Service cluster with worker nodes that come installed with the OpenShift Container Platform software. You get all the advantages of managed IBM Cloud Kubernetes Service for your cluster infrastructure environment, while using the OpenShift tooling and catalog that run on Red Hat Enterprise Linux for your app deployments.
Red Hat® OpenShift Container Platform is built around a core of containers, with orchestration and management provided by Kubernetes, on a foundation of Atomic Host and Red Hat® Enterprise Linux. OpenShift Origin is the community distribution of Kubernetes that is optimized for
continuous app development and multi-tenant deployment. The community project provides developer and operations-centric tools that are based on Kubernetes to enable rapid app development, deployment, scaling, and long-term app lifecycle maintenance.
This tutorial shows how you can set up OpenShift Container Platform 3.11 on IBM Cloud classic infrastructure with Terraform on IBM Cloud to try out the high availability capabilities of native Kubernetes and IBM Cloud. Review the following image to find an architectural overview of the classic infrastructure components that are needed for the Red Hat OpenShift Container Platform to work properly.
![Infrastructure components for the Red Hat® OpenShift Container Platform on IBM Cloud](../images/infra-diagram.png)
When you complete this tutorial, the following classic infrastructure components are provisioned for you:
- 1 OpenShift Container Platform master node
- 1 OpenShift Container Platform infrastructure node
- 1 OpenShift Container Platform application node
- 1 OpenShift Container Platform Bastion node
- 3 or more GlusterFS storage nodes if you decide to set up your cluster with GlusterFS
- Native IBM Cloud classic infrastructure services, such as VLANs and security groups
Objectives
In this tutorial, you set up Red Hat OpenShift Container Platform version 3.11 on IBM Cloud classic infrastructure and deploy your first nginx
app in the OpenShift cluster. In particular, you will:
- Set up your environment and all the software that you need for your Red Hat OpenShift Container Platform installation, such as Terraform on IBM Cloud, IBM Cloud Provider plug-in, and the Terraform on IBM Cloud OpenShift project.
- Retrieve IBM Cloud credentials, configure the IBM Cloud Provider plug-in, and define your Red Hat OpenShift Container Platform classic infrastructure components.
- Provision IBM Cloud classic infrastructure for your Red Hat OpenShift Container Platform components by using Terraform on IBM Cloud.
- Install Red Hat OpenShift Container Platform on IBM Cloud classic infrastructure.
- Deploy the nginx app in your OpenShift cluster and expose this app to the public.
Audience
This tutorial is intended for network administrators who want to deploy Red Hat OpenShift Container Platform on IBM Cloud classic infrastructure.
Prerequisites
- If you do not have an IBM Cloud account, create a Pay-As-You-Go or Subscription account.
- Make sure that you have an existing Red Hat account that has an active OpenShift subscription.
- Install Docker and the IBM Cloud CLI.
Lesson 1: Configure your environment
In this tutorial, you provision IBM Cloud classic infrastructure for the Red Hat OpenShift Container Platform by using Terraform on IBM Cloud. Before you can start the classic infrastructure provisioning process, you must ensure that you set up Terraform on IBM Cloud, the IBM Cloud Provider plug-in, and the Terraform on IBM Cloud OpenShift project.
-
Create a Docker container that installs Terraform on IBM Cloud and the IBM Cloud Provider plug-in. To execute Terraform on IBM Cloud commands, you must be logged in to the container. You can also install Terraform on IBM Cloud and the IBM Cloud Provider plug-in on your local machine to run Terraform on IBM Cloud commands without a Docker container.
- Download the latest version of the Docker image for Terraform on IBM Cloud to your local machine.
docker pull ibmterraform/terraform-provider-ibm-docker

Example output:
Using default tag: latest
latest: Pulling from ibmterraform/terraform-provider-ibm-docker
911c6d0c7995: Pull complete
fed331e93a76: Pull complete
82a1ea1a0cd7: Pull complete
a4b4f00ab356: Pull complete
78858415d97d: Pull complete
515c9be5f236: Pull complete
94021f117e26: Pull complete
a50b454f6bba: Pull complete
dd63d43987e3: Pull complete
a098dba94337: Pull complete
Digest: sha256:df316f5ed26cbec1bc1ad7a6f6d2c978f767408080a4a4db954c94c91e8271e5
Status: Downloaded newer image for ibmterraform/terraform-provider-ibm-docker:latest
- Create a container from your image and log in to your container. When the container is created, Terraform on IBM Cloud and the IBM Cloud Provider plug-in are automatically installed and you are automatically logged in to the container.
The working directory is set to `/go/bin`.

docker run -it ibmterraform/terraform-provider-ibm-docker:latest
-
From within your container, set up the Terraform on IBM Cloud OpenShift project.
- Install OpenSSH inside the container that you created in the previous step.
apk add --no-cache openssh

Example output:
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/6) Installing openssh-keygen (7.5_p1-r9)
(2/6) Installing openssh-client (7.5_p1-r9)
(3/6) Installing openssh-sftp-server (7.5_p1-r9)
(4/6) Installing openssh-server-common (7.5_p1-r9)
(5/6) Installing openssh-server (7.5_p1-r9)
(6/6) Installing openssh (7.5_p1-r9)
Executing busybox-1.27.2-r11.trigger
OK: 326 MiB in 50 packages
- Download the Terraform on IBM Cloud configuration files to deploy the Red Hat OpenShift Container Platform.
git clone https://github.com/IBM-Cloud/terraform-ibm-openshift.git

Example output:
Cloning into 'terraform-ibm-openshift'...
remote: Enumerating objects: 375, done.
remote: Total 375 (delta 0), reused 0 (delta 0), pack-reused 375
Receiving objects: 100% (375/375), 681.75 KiB | 1.22 MiB/s, done.
Resolving deltas: 100% (190/190), done.
- Navigate into the installation directory.
cd terraform-ibm-openshift
-
Generate an SSH key. The SSH key is used to access IBM Cloud classic infrastructure resources during provisioning.
- Create an SSH key inside the container that you created earlier. Enter the email address that you want to associate with your SSH key. Make sure to accept the default file name, file location, and missing passphrase by pressing Enter.
ssh-keygen -t rsa -b 4096 -C "<email_address>"

Example output:
Generating public/private RSA key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:67LDt8zjbPoX+uKFGrVs2CrsyNk1izzBOkDf8tBb3Xc myemail@example.com
The key's randomart image is:
+---[RSA 4096]----+
|                 |
|                 |
|                 |
|      .          |
|.  ..o  S .      |
|. +oo * =.. . E  |
| . o+oB B.... .  |
|   .o=+=+%*..    |
|    +o=o*@X*.    |
+----[SHA256]-----+
- Verify that the SSH key is created successfully. The creation is successful if you can see one id_rsa and one id_rsa.pub file.
cd /root/.ssh && ls

Example output:
id_rsa  id_rsa.pub
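If you prefer to script this step, `ssh-keygen` can run without prompts. The following is a sketch: the `-f` flag sets the output path and `-N ""` sets an empty passphrase, so no interactive input is needed. The email address and the scratch directory are placeholders; in the tutorial the key lives at /root/.ssh/id_rsa.

```shell
# Non-interactive sketch of the ssh-keygen step above.
# -q: quiet, -f: output path, -N "": empty passphrase (no prompts).
key_dir=$(mktemp -d)
ssh-keygen -q -t rsa -b 4096 -C "you@example.com" -f "$key_dir/id_rsa" -N ""
# Verify that both the private and the public key were created.
ls "$key_dir"
```

When run against the real path, point `-f` at /root/.ssh/id_rsa instead of the scratch directory.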
-
Navigate back to your OpenShift installation directory.
cd /go/bin/terraform-ibm-openshift
-
Open the Terraform on IBM Cloud `variables.tf` file and review the default values that are set in the file. The `variables.tf` file specifies all information that you want to pass on to Terraform on IBM Cloud during the provisioning of your infrastructure resources. You can change the default values, but do not add sensitive data, such as your infrastructure user name and API key, to this file. The `variables.tf` file is usually stored under version control and shared across users.

vi variables.tf
| Variable | Description | Default value |
| --- | --- | --- |
| `app_count` | Enter the number of app nodes that you want to create. App nodes are used to run your app pods. | 1 |
| `bastion_flavor` | Enter the flavor that you want to use for your Bastion virtual machine. The Bastion host is the only ingress point for SSH in the OpenShift cluster from external entities. When you connect to the OpenShift Container Platform infrastructure, the Bastion host forwards the request to the infrastructure or app server. For more information, see Bastion instance. | B1_4X16X100 |
| `datacenter` | Enter the zone where you want to provision your IBM Cloud classic infrastructure. To find existing zones, run `ibmcloud ks zones`. | dal12 |
| `ibm_sl_api_key` | The IBM Cloud classic infrastructure API key to access classic infrastructure resources. Do not enter this information in this file. Instead, you are prompted to enter this information when you create the classic infrastructure resources. To retrieve your API key, see Managing classic infrastructure API keys. | n/a |
| `ibm_sl_username` | The IBM Cloud classic infrastructure user name to access classic infrastructure resources. Do not enter this information in this file. Instead, you are prompted to enter this information when you create the classic infrastructure resources. To retrieve your user name, see Managing classic infrastructure API keys. | n/a |
| `infra_count` | Enter the number of infrastructure nodes that you want to create. Infrastructure nodes are used to run infrastructure-related pods, such as router or registry pods. | 1 |
| `master_count` | Enter the number of master nodes that you want to create. The master runs the API server, controller manager server, and etcd database instance. | n/a |
| `pool_id` | The Red Hat pool ID that is linked to the subscription that you set up with Red Hat. Do not enter this information here. Instead, follow the steps in Lesson 3 to retrieve your pool ID and provide the pool ID during the OpenShift installation. | n/a |
| `rhn_password` | The Red Hat Network password to access the OpenShift project. You can enter this information here or provide it as part of your OpenShift deployment in Lesson 2. | n/a |
| `rhn_username` | The Red Hat Network user name with OpenShift subscription to access the OpenShift project. You can enter this information here or provide it as part of your OpenShift deployment in Lesson 2. | n/a |
| `private_vlanid` | Enter the VLAN ID of your existing private VLAN that you want to use. To find existing VLAN IDs, run `ibmcloud sl vlan list` and review the ID column. | n/a |
| `public_vlanid` | Enter the VLAN ID of your existing public VLAN that you want to use. To find existing VLAN IDs, run `ibmcloud sl vlan list` and review the ID column. | n/a |
| `ssh_label` | Enter a label to assign to your SSH key. | ssh_key_openshift |
| `ssh_private_key` | Enter the path to the SSH private key that you created earlier. | ~/.ssh/id_rsa |
| `ssh_public_key` | Enter the path to the SSH public key that you created earlier. | ~/.ssh/id_rsa.pub |
| `storage_count` | Decide whether you want to configure your OpenShift cluster with GlusterFS. Enter 0 to configure your OpenShift cluster without GlusterFS, and 3 or more to set up your OpenShift cluster with GlusterFS. | 0 |
| `storage_flavor` | If you configure your OpenShift cluster with GlusterFS, enter the flavor that you want to use for your storage virtual machine. Each storage node mounts three block storage devices that host the Red Hat Gluster Storage. You can use the combination of compute capacity and local Gluster storage to run a hyper-converged deployment where your apps are placed on the same node as the app's persistent storage. | B1_4X16X100 |
| `subnet_size` | Enter the number of subnets that you want to be able to create with your public and private VLAN. This value is required only if you decide to create a new private and public VLAN pair. | 64 |
| `vlan_count` | Enter 1 to automatically create a new private and public VLAN, or 0 if you want to use existing VLANs. To find existing VLANs, run `ibmcloud sl vlan list`. The zone where your existing VLAN routers are provisioned is included in the primary_router column of your command line output. | 1 |
| `vm_domain` | Enter the domain name that you want to use for your virtual machine nodes. | ibm.com |
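If you prefer to keep your overrides out of `variables.tf`, standard Terraform behavior is to read a `terraform.tfvars` file in the working directory. The following is a minimal sketch; the variable names come from the table above, but the specific values are illustrative assumptions that you must adapt to your account and zone.

```hcl
# terraform.tfvars - illustrative values only; adjust for your environment.
datacenter      = "dal12"
app_count       = 2
infra_count     = 1
storage_count   = 3                    # 3 or more enables GlusterFS
vlan_count      = 1                    # create a new public/private VLAN pair
ssh_private_key = "~/.ssh/id_rsa"
ssh_public_key  = "~/.ssh/id_rsa.pub"
```

As noted in the table, leave credentials such as `ibm_sl_username` and `ibm_sl_api_key` out of files that are stored under version control.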
Lesson 2: Provision the IBM Cloud classic infrastructure for your Red Hat OpenShift cluster
Now that you prepared your environment, you can go ahead and provision IBM Cloud classic infrastructure resources by using Terraform on IBM Cloud.
Before you begin, make sure that you are logged in to the container that you created in the previous lesson.
-
Retrieve your IBM Cloud classic infrastructure user name and API key.
-
From the OpenShift installation directory `/go/bin/terraform-ibm-openshift` inside your container, create the IBM Cloud classic infrastructure components for your Red Hat OpenShift cluster. When you run the command, Terraform on IBM Cloud evaluates what components must be provisioned and presents an execution plan. You must confirm that you want to provision the classic infrastructure resources by entering yes. During the provisioning, Terraform on IBM Cloud creates another execution plan that you must approve to continue. When prompted, enter the classic infrastructure user name and API key that you retrieved earlier. The provisioning of your resources takes about 40 minutes.

make rhn_username=<rhn_username> rhn_password=<rhn_password> infrastructure
Example output:
... Apply complete! Resources: 63 added, 0 changed, 0 destroyed.
The following resources are created for you.
Nodes:
| Resource | Flavor | Description |
| --- | --- | --- |
| Master node | B1_4X16X100 | Three disks that are arranged as SAN with a total capacity of 100 GB: Disk 1: 50 GB, Disk 2: 25 GB, Disk 3: 25 GB |
| Infrastructure node | B1_2X4X100 | Three disks that are arranged as SAN with a total capacity of 100 GB: Disk 1: 50 GB, Disk 2: 25 GB, Disk 3: 25 GB |
| App nodes | B1_2X4X100 | Three disks that are arranged as SAN with a total capacity of 100 GB: Disk 1: 50 GB, Disk 2: 25 GB, Disk 3: 25 GB |
| Bastion node | B1_2X2X100 | Two disks with the following capacity: Disk 1: 100 GB, Disk 2: 50 GB |
| Storage nodes | B1_4X16X100 | Two disks with the following capacity: Disk 1: 100 GB, Disk 2: 50 GB |
Security groups:
The default security groups that are created assume the following setup:
- All outbound traffic from all nodes to the internet is allowed.
- The Bastion server is the only node that allows inbound SSH access.
- The Bastion server is connected to both the public and the private VLAN.
- All OpenShift nodes (master, infrastructure, and app nodes) are connected to a private VLAN only.
| Group | VLAN | Inbound/Outbound | Port | From | To |
| --- | --- | --- | --- | --- | --- |
| ose_bastion_sg | Public | Inbound | 22/TCP | Internet gateway | - |
| ose_bastion_sg | Private | Outbound | All | - | All |
| ose_master_sg | Private | Inbound | 443/TCP | Internet gateway | - |
| ose_master_sg | Private | Inbound | 80/TCP | Internet gateway | - |
| ose_master_sg | Private | Inbound | 22/TCP | ose_bastion_sg | - |
| ose_master_sg | Private | Inbound | 443/TCP | ose_master_sg & ose_node_sg | - |
| ose_master_sg | Private | Inbound | 8053/TCP | ose_node_sg | - |
| ose_master_sg | Private | Inbound | 8053/UDP | ose_node_sg | - |
| ose_master_sg | Private | Outbound | All | - | All |
| ose_master_sg (for etcd) | Private | Inbound | 2379/TCP | ose_master_sg | - |
| ose_master_sg (for etcd) | Private | Inbound | 2380/TCP | ose_master_sg | - |
| ose_node_sg | Private | Inbound | 443/TCP | ose_bastion_sg | - |
| ose_node_sg | Private | Inbound | 22/TCP | ose_bastion_sg | - |
| ose_node_sg | Private | Inbound | 10250/TCP | ose_master_sg & ose_node_sg | - |
| ose_node_sg | Private | Inbound | 4789/TCP | ose_node_sg | - |
| ose_node_sg | Private | Outbound | All | - | All |
-
Validate your deployment.

terraform show
Lesson 3: Deploy Red Hat OpenShift Container Platform on your classic infrastructure
Deploy Red Hat OpenShift Container Platform on the IBM Cloud classic infrastructure resources that you created earlier.
During the deployment, the following cluster components are set up and configured:
- 1 OpenShift Container Platform master node
- 3 OpenShift Container Platform infrastructure nodes
- 2 OpenShift Container Platform application nodes
- 1 OpenShift Container Platform Bastion node
For more information about Red Hat OpenShift Container Platform components, see the Architecture Overview.
-
Retrieve the pool ID for your Red Hat account.
-
From the OpenShift installation directory `/go/bin/terraform-ibm-openshift` inside your container, log in to your Bastion node by using a secure shell.

ssh root@$(terraform output bastion_public_ip)
-
Enter yes to all security questions to proceed. You are now logged in to your Bastion node.
Example output:
root@bastion-ose-1a2b3c1234 #
-
Remove any previous registration of the Bastion node.
subscription-manager unregister
-
Import the `gpg` public key for Red Hat by using the Red Hat Package Manager.

rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
-
Register your Bastion node with the Red Hat Network. Enter the user name and password for your Red Hat account.
subscription-manager register --serverurl subscription.rhsm.redhat.com:443/subscription --baseurl cdn.redhat.com --username <redhat_username> --password <redhat_password>
-
Find your OpenShift pool ID. For example, the pool ID in the following example is `1a2345bcd6789098765abcde43219bc3`.

subscription-manager list --available --matches '*OpenShift Container Platform*'
Example output:
+-------------------------------------------+
    Available Subscriptions
+-------------------------------------------+
Subscription Name:   30 Day Self-Supported Red Hat OpenShift Container Platform, 2-Core Evaluation
Provides:            Red Hat Ansible Engine
                     Red Hat Software Collections (for RHEL Server for IBM Power LE)
                     Red Hat OpenShift Enterprise Infrastructure
                     Red Hat JBoss Core Services
                     Red Hat Enterprise Linux Fast Data path
                     Red Hat OpenShift Container Platform for Power
                     JBoss Enterprise Application Platform
                     Red Hat OpenShift Container Platform Client Tools for Power
                     Red Hat Enterprise Linux Fast Datapath (for RHEL Server for IBM Power LE)
                     Red Hat OpenShift Enterprise JBoss EAP add-on
                     Red Hat OpenShift Container Platform
                     Red Hat Gluster Storage Management Console (for RHEL Server)
                     Red Hat OpenShift Enterprise JBoss A-MQ add-on
                     Red Hat Enterprise Linux for Power, little endian Beta
                     Red Hat OpenShift Enterprise Client Tools
                     Red Hat OpenShift Enterprise Application Node
                     Red Hat OpenShift Service Mesh
                     Red Hat OpenShift Enterprise JBoss FUSE add-on
SKU:                 SER0419
Contract:            123456789
Pool ID:             1a2345bcd6789098765abcde43219bc3
Provides Management: Yes
Available:           10
Suggested:           1
Service Level:       Self-Support
Service Type:        L1-L3
Subscription Type:   Stackable
Starts:              12/03/2018
Ends:                01/02/2019
System Type:         Physical
-
Exit the secure shell to return to your OpenShift installation directory inside your container.
exit
Example output:
logout Connection to 169.47.XXX.XX closed. /go/bin/terraform-ibm-openshift #
-
Finish setting up and registering the nodes with the Red Hat Network.
make rhn_username=<rhn_username> rhn_password=<rhn_password> pool_id=<pool_ID> rhn_register
When you create the nodes, the following software and Red Hat subscriptions are automatically downloaded and installed on the nodes for you:
Software packages:
| Software | Version |
| --- | --- |
| Red Hat® Enterprise Linux | 7.4 x86_64 kernel-3.11.0.x |
| Atomic-OpenShift (`master/clients/node/sdn-ovs/utils`) | 3.11.x.x |
| Docker | 1.12.x |
| Ansible | 2.3.2.x |

Red Hat subscription channels and `rpm` packages:

| Channel | Repository Name |
| --- | --- |
| Red Hat® Enterprise Linux 7 Server (RPMs) | rhel-7-server-rpms |
| Red Hat® OpenShift Enterprise 3.11 (RPMs) | rhel-7-server-ose-3.11-rpms |
| Red Hat® Enterprise Linux 7 Server - Extras (RPMs) | rhel-7-server-extras-rpms |
| Red Hat® Enterprise Linux 7 Server - Fast Datapath (RPMs) | rhel-7-fast-datapath-rpms |
-
Prepare the master, infrastructure, and application nodes for the OpenShift installation.
make openshift
If the installation fails with the error `module.post_install.null_resource.post_install: error executing "/tmp/terraform_1700732344.sh": wait: remote command exited without exit status or exit signal`, go to the IBM Cloud classic infrastructure console, and click Devices > Device List. Then, find the affected virtual server and from the actions menu, perform a soft reboot.

Example output:
Outputs:

app_hostname = [ IBM-OCP-93735f7b3d-app-0 ]
app_private_ip = [ 10.72.12.13 ]
app_public_ip = [ 158.123.12.126 ]
bastion_hostname = IBM-OCP-93735f7b3d-bastion
bastion_private_ip = 10.72.12.14
bastion_public_ip = 158.123.12.125
infra_hostname = [ IBM-OCP-12345a1a2b-infra-0 ]
infra_private_ip = [ 10.72.12.13 ]
infra_public_ip = [ 158.123.12.123 ]
master_hostname = [ IBM-OCP-12345a1a2b-master-0 ]
master_private_ip = [ 10.72.12.12 ]
master_public_ip = [ 158.123.12.124 ]
-
Note the master_hostname and master_public_ip values that were assigned to your master node. To show all your resources with the assigned host names and IP addresses, run `terraform show`.

-
Exit your container.
exit
-
On your local machine, add the master node as a host to your local `/etc/hosts` file.
- Open the `/etc/hosts` file.

  sudo vi /etc/hosts

- Insert the following line at the end of your file.

  <master_public_ip> <master_hostname>
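Instead of editing the file interactively, the entry can also be appended from a script. The following sketch appends the line only if the host name is not already present, so it is safe to run more than once; the IP address and host name are sample values from the output above, and the target here is a scratch file (in the tutorial the real target is /etc/hosts, which requires sudo).

```shell
# Idempotent sketch: add the master node entry to a hosts file.
# HOSTS_FILE is a scratch file here; use /etc/hosts (with sudo) for real.
HOSTS_FILE=$(mktemp)
MASTER_PUBLIC_IP="158.123.12.124"
MASTER_HOSTNAME="IBM-OCP-12345a1a2b-master-0"
# Append only when the host name is not already in the file.
grep -q "$MASTER_HOSTNAME" "$HOSTS_FILE" || \
  printf '%s %s\n' "$MASTER_PUBLIC_IP" "$MASTER_HOSTNAME" >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
```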
-
Open the OpenShift console.
open https://$(terraform output master_public_ip):8443/console
-
Set up users and authentication for your OpenShift cluster. The OpenShift Container Platform master includes a built-in OAuth server. By default, this OAuth server is set up to deny all authentication. To let developers and administrators authenticate with the cluster, follow the steps in Configuring access and authentication to set up access for your cluster.
-
Configure your Docker registry. During the creation of your cluster, an internal, integrated Docker registry is automatically set up for you. You can use the registry to build container images from your source code, deploy them, and manage their lifecycle. For more information, see Registry Overview.
-
Configure your cluster router to enable incoming non-SSH network traffic for your cluster. For more information, see Router Overview.
Lesson 4: Deploy an app in your Red Hat OpenShift cluster
With your OpenShift cluster up and running, you can now deploy your first app in the cluster.
- Log in to the master node.
ssh -t -A root@$(terraform output master_public_ip)
- Log in to the OpenShift client. Enter admin as your user name and test123 as your password, or use any other user name and password that you set up earlier.
oc login https://$(terraform output master_public_ip):8443
- Create a project directory where you can store all your app files and configurations.
oc new-project <project_name>
- Deploy the app. In this example, `nginx` is deployed to your cluster.

  oc new-app --name=nginx --docker-image=bitnami/nginx
- Create a service for your `nginx` app to expose your app inside the cluster.

  oc expose svc/nginx
- Edit your service and change the service type to NodePort.
oc edit svc/nginx
- Access your `nginx` app from the internet.
  - Get the public route that was assigned to your `nginx` app. You can find the route in the HOST/PORT column of your command line output.

    oc get routes

    Example output:
    NAME            HOST/PORT                                      PATH   SERVICES        PORT   TERMINATION   WILDCARD
    nginx-example   nginx-example-new.apps.158.123.12.123.xip.io          nginx-example   all    None
  - In your preferred web browser, open your app by using the public route.

    http://nginx-example-new.apps.158.176.91.200.xip.io
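If you want to open the route from a script rather than copy it by hand, the host can be pulled out of the `oc get routes` output with `awk`. This is a sketch: the sample output shown above is used here as stand-in input, and the field index assumes the default column layout with the host in the second column of the first route row.

```shell
# Extract the HOST/PORT column (field 2) of the first route row.
# In a live cluster you would pipe `oc get routes` in directly;
# here the sample output from the step above is the stand-in input.
routes_output='NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
nginx-example nginx-example-new.apps.158.123.12.123.xip.io nginx-example all None'
route_host=$(printf '%s\n' "$routes_output" | awk 'NR==2 {print $2}')
echo "http://$route_host"
```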
What's next?
Great! You successfully installed Red Hat OpenShift Container Platform on IBM Cloud classic infrastructure and deployed your first app to your OpenShift cluster. Now you can try out one of the following features:
- Explore other features in Red Hat OpenShift Container Platform.
- Remove your OpenShift cluster by running the `make destroy` command.