Accessing Red Hat OpenShift clusters

After your Red Hat® OpenShift® on IBM Cloud® cluster is created, you can begin working with your cluster by accessing it.

Prerequisites

  1. Install the required CLI tools. For quick access to test features in your cluster, you can also use the IBM Cloud Shell.
  2. Create your Red Hat OpenShift cluster.
  3. If your network is protected by a company firewall, allow access to the IBM Cloud and Red Hat OpenShift on IBM Cloud API endpoints and ports. For VPC clusters with only the private cloud service endpoint enabled, you can't test the connection to your cluster until you configure a VPC VPN with the cloud service endpoint subnet.
  4. Check that your cluster is in a healthy state by running ibmcloud oc cluster get -c <cluster_name_or_ID>, as in the example after this list. If your cluster is not in a healthy state, review the Debugging clusters guide for help. For example, if your cluster is provisioned in an account that is protected by a firewall gateway appliance, you must configure your firewall settings to allow outgoing traffic to the appropriate ports and IP addresses.
  5. Find your cluster's service endpoint.
  6. If any users in your account use multifactor authentication (MFA), such as TOTP, make sure that you enable it for all users at the account level. If MFA is not enabled at the account level, authentication errors can occur.
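
For example, a healthy cluster returns a State of normal in the cluster details. The following abbreviated output is illustrative, and the cluster name and ID are placeholders.

  ibmcloud oc cluster get -c mycluster

  Example output

  NAME:     mycluster
  ID:       1234567
  State:    normal
  Status:   healthy cluster
  ...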

Accessing clusters through the public cloud service endpoint

For Red Hat OpenShift clusters with a public cloud service endpoint, you can log in to your cluster from the console or CLI.

Connecting to the cluster from the console

You can quickly access your Red Hat OpenShift on IBM Cloud cluster from the console.

  1. In the Red Hat OpenShift clusters console, click the cluster that you want to access.
  2. Click Red Hat OpenShift web console.
  3. To continue working in the command line, click your profile name, such as IAM#name@email.com, and then click Copy Login Command. Then, click Display Token, copy the oc login command, and paste the command into your command line.

For security reasons, first log out of the IBM Cloud console and then log out of the Red Hat OpenShift web console before you close your browser. You must complete both steps in the specified order to successfully log out of the Red Hat OpenShift web console.

What's next?
Try Deploying apps through the console.

Connecting to the cluster from the CLI

Usually, you can use the Red Hat OpenShift web console to get the oc login token to access your cluster. If you can't or don't want to open the Red Hat OpenShift console, choose one of the following options to log in to your Red Hat OpenShift on IBM Cloud cluster by using the CLI.

  • Log in as admin:
    1. Make sure that you have the Administrator platform access role for the cluster.
    2. Set your command line context for the cluster and download the TLS certificates and permission files for the administrator.
      ibmcloud oc cluster config -c <cluster_name_or_ID> --admin
      
  • Log in with an API key: See Using an API key to log in to Red Hat OpenShift.
  • Log in with IBM Cloud passcode:
    1. Get the Master URL of your cluster in the output of the following command.
      ibmcloud oc cluster get -c <cluster_name_or_ID>
      
    2. In your browser, open the following IBM Cloud IAM passcode website.
      https://iam.cloud.ibm.com/identity/passcode
      
    3. Log in with your IBMid and copy the passcode.
    4. Log in to your cluster with the passcode.
      oc login -u passcode -p <iam_passcode> --server=<master_URL>
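
Whichever option you use, you can then verify that the oc commands run properly with your cluster by checking the version.

  oc version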
      

Accessing clusters through the private cloud service endpoint

Allow authorized cluster users to access your VPC or classic cluster through the private cloud service endpoint.

Want to set up a VPN to connect to your cluster from your local machine? Check out Accessing private clusters by using the WireGuard VPN.

Accessing VPC clusters through the private cloud service endpoint

The Red Hat OpenShift master is accessible through the private cloud service endpoint if authorized cluster users are in your IBM Cloud private network or are connected to the private network, such as through a VPC VPN connection. However, communication with the Kubernetes master over the private cloud service endpoint must go through the 166.X.X.X IP address range, which you must configure in your VPN gateway and connection setup.
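
For example, if you use strongSwan as your local VPN gateway (see step 1), include the 166.8.0.0/14 range in the tunnel's traffic selectors alongside your VPC subnets. The following ipsec.conf snippet is an illustrative sketch only: all addresses are placeholders, and the cipher proposals are assumptions that must match your VPC VPN connection settings.

  # /etc/ipsec.conf (strongSwan); illustrative values only
  conn ibm-vpc-vpn
      keyexchange=ikev2
      left=%defaultroute
      # Local subnet behind your on-premises gateway (placeholder)
      leftsubnet=192.168.1.0/24
      # Public IP address of your VPC VPN gateway (placeholder)
      right=<vpc_vpn_gateway_ip>
      # VPC subnet plus the 166.8.0.0/14 cloud service endpoint range
      rightsubnet=10.240.0.0/24,166.8.0.0/14
      ike=aes256-sha256-modp2048
      esp=aes256-sha256
      auto=start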

  1. Set up your IBM Cloud VPC VPN and connect to your private network through the VPN.

    1. Configure a VPN gateway on your local machine. For example, you might choose to set up StrongSwan on your machine.
    2. Create a VPN gateway in your VPC, and create the connection between the VPC VPN gateway and your local VPN gateway. In the New VPN connection for VPC section, add the 166.8.0.0/14 subnet to the Local subnets field. If you have a multizone cluster, repeat this step to configure a VPC gateway on a subnet in each zone where you have worker nodes.
    3. Verify that you are connected to the private network through your IBM Cloud VPC VPN connection.
  2. To log in to your cluster, choose from the following options.

    • Log in as admin:
      1. Make sure that you have the Administrator platform access role for the cluster.
      2. Set your command line context for the cluster and download the TLS certificates and permission files for the administrator.
        ibmcloud oc cluster config -c <cluster_name_or_ID> --admin --endpoint private
        
    • Log in with an API key: See Using an API key to log in to Red Hat OpenShift.
    • Log in with IBM Cloud passcode:
      1. Get the Private Service Endpoint URL of your cluster in the output of the following command.
        ibmcloud oc cluster get -c <cluster_name_or_ID>
        
      2. In your browser, open the following IBM Cloud IAM passcode website.
        https://iam.cloud.ibm.com/identity/passcode
        
      3. Log in with your IBMid and copy the passcode.
      4. Log in to your cluster with the passcode.
        oc login -u passcode -p <iam_passcode> --server=<private_service_endpoint_URL>
        
  3. Verify that the oc commands run properly with your cluster through the private cloud service endpoint by checking the version.

    oc version
    

    Example output

    Client Version: 4.5.0-0.okd-2020-09-04-180756
    Server Version: 4.5.35
    Kubernetes Version: v1.18.3+cdb0358
    

Accessing classic clusters through the private cloud service endpoint

The Red Hat OpenShift master is accessible through the private cloud service endpoint if authorized cluster users are in your IBM Cloud private network or are connected to the private network, such as through a classic VPN connection or IBM Cloud Direct Link. However, communication with the Kubernetes master over the private cloud service endpoint must go through the 166.X.X.X IP address range, which is not routable from a classic VPN connection or through IBM Cloud Direct Link.

You can expose the private cloud service endpoint of the master for your cluster users by using a private network load balancer (NLB). The private NLB exposes the private cloud service endpoint of the master as an internal 10.X.X.X IP address range that users can access with the VPN or IBM Cloud Direct Link connection. If you enable only the private cloud service endpoint, you can use the Red Hat OpenShift web console to create the private NLB.

  1. Log in to your Red Hat OpenShift cluster by using the public cloud service endpoint.

  2. Get the private cloud service endpoint URL and port for your cluster.

    ibmcloud oc cluster get -c <cluster_name_or_ID>
    

    In this example output, the Private Service Endpoint URL is https://c1.private.us-east.containers.cloud.ibm.com:31144.

    NAME:                           setest
    ID:                             b8dcc56743394fd19c9f3db7b990e5e3
    State:                          normal
    Status:                         healthy cluster
    Created:                        2019-04-25T16:03:34+0000
    Location:                       wdc04
    Pod Subnet:                     172.30.0.0/16
    Service Subnet:                 172.21.0.0/16
    Master URL:                     https://c1-e.us-east.containers.cloud.ibm.com:31144
    Public Service Endpoint URL:    https://c1-e.us-east.containers.cloud.ibm.com:31144
    Private Service Endpoint URL:   https://c1.private.us-east.containers.cloud.ibm.com:31144
    Master Location:                Washington D.C.
    ...
    
  3. Create a YAML file that is named oc-api-via-nlb.yaml. This YAML creates a private LoadBalancer service and exposes the private cloud service endpoint through that NLB. Replace <private_service_endpoint_port> with the port you found in the previous step.

    apiVersion: v1
    kind: Service
    metadata:
      name: oc-api-via-nlb
      annotations:
        # Request a private (not public) network load balancer
        service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: private
      namespace: default
    spec:
      type: LoadBalancer
      # No pod selector: traffic goes to the manually defined Endpoints below
      ports:
      - protocol: TCP
        port: <private_service_endpoint_port>
        targetPort: <private_service_endpoint_port>
    ---
    kind: Endpoints
    apiVersion: v1
    metadata:
      # Must match the Service name so that these endpoints back the NLB
      name: oc-api-via-nlb
      namespace: default
    subsets:
      - addresses:
          # In-cluster IP address of the cluster master
          - ip: 172.20.0.1
        ports:
          # In-cluster port of the cluster master
          - port: 2040
    
  4. To create the private NLB and endpoint:

    1. Apply the configuration file that you previously created.
      oc apply -f oc-api-via-nlb.yaml
      
    2. Verify that the oc-api-via-nlb NLB is created. In the output, note the 10.x.x.x EXTERNAL-IP address. This IP address exposes the private cloud service endpoint for the cluster master on the port that you specified in your YAML file.
      oc get svc -o wide
      
      In this example output, the IP address for the private cloud service endpoint of the master is 10.186.92.42.
      NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE   SELECTOR
      oc-api-via-nlb           LoadBalancer   172.21.150.118   10.186.92.42     443:32235/TCP    10m   <none>
      ...
      
  5. On the client machines where you or your users run oc commands, add the NLB IP address and the private cloud service endpoint URL to the /etc/hosts file. Do not include a port with the IP address or the URL, and do not include https:// in the URL.

    • For macOS and Linux users:

      sudo nano /etc/hosts
      
    • For Windows users:

      notepad C:\Windows\System32\drivers\etc\hosts
      

      Depending on your local machine permissions, you might need to run Notepad as an administrator to edit the hosts file.

      Example text to add:

      10.186.92.42      c1.private.us-east.containers.cloud.ibm.com
      
  6. Verify that you are connected to the private network through a VPN or IBM Cloud Direct Link connection.

  7. Log in to your cluster by choosing from one of the following options.

    • Log in as admin:
      1. Make sure that you have the Administrator platform access role for the cluster.
      2. Set your command line context for the cluster and download the TLS certificates and permission files for the administrator.
        ibmcloud oc cluster config -c <cluster_name_or_ID> --admin --endpoint private
        
    • Log in with an API key: See Using an API key to log in to Red Hat OpenShift.
    • Log in with IBM Cloud passcode:
      1. Get the Private Service Endpoint URL of your cluster in the output of the following command.
        ibmcloud oc cluster get -c <cluster_name_or_ID>
        
      2. In your browser, open the following IBM Cloud IAM passcode website.
        https://iam.cloud.ibm.com/identity/passcode
        
      3. Log in with your IBMid and copy the passcode.
      4. Log in to your cluster with the passcode.
        oc login -u passcode -p <iam_passcode> --server=<private_service_endpoint_URL>
        
  8. Verify that the oc commands run properly with your cluster through the private cloud service endpoint by checking the version.

    oc version
    

    Example output

    Client Version: 4.5.0-0.okd-2020-09-04-180756
    Server Version: 4.5.35
    Kubernetes Version: v1.18.3+cdb0358
    

Creating an allowlist for the private cloud service endpoint

Private service endpoint allowlists are deprecated and support ends on 10 February 2025. Migrate from allowlists to context based restrictions as soon as possible. For more information, see Migrating from a private service endpoint allowlist to context based restrictions (CBR).

Control access to your private cloud service endpoint by creating a subnet allowlist.

After you grant users access to your cluster through IBM Cloud IAM, you can add a secondary layer of security by creating an allowlist for the private cloud service endpoint. Only authorized requests to your cluster master that originate from subnets in the allowlist are permitted through the cluster's private cloud service endpoint.

If you want to allow requests from a different VPC than the one your cluster is in, you must include the cloud service endpoint for that VPC in the allowlist.

For example, to access your cluster's private cloud service endpoint, you must connect to your IBM Cloud classic network or your VPC network through a VPN or IBM Cloud Direct Link. You can add the subnet for the VPN or Direct Link tunnel so that only authorized users in your organization can access the private cloud service endpoint from that subnet.

A private cloud service endpoint allowlist can also help prevent users from accessing your cluster after their authorization is revoked. When a user leaves your organization, you remove their IBM Cloud IAM permissions that grant them access to the cluster. However, the user might have copied the API key that contains a functional ID's credentials, which contain the necessary IAM permissions for your cluster. That user can still use those credentials and the private cloud service endpoint address to access your cluster from a different subnet, such as from a different IBM Cloud account. If you create an allowlist that includes only the subnets for your VPN tunnel in your organization's IBM Cloud account, the user's attempted access from another IBM Cloud account is denied.

Worker node subnets are automatically added to and removed from your allowlist so that worker nodes can always access the master through the private cloud service endpoint.

Private cloud service endpoint allowlists are limited to 20 subnets, and support for them ends on 10 February 2025. Context based restriction rules replace allowlists and can contain up to 200 subnets. If you need more than 20 subnets, see Migrating from a private service endpoint allowlist to context based restrictions (CBR).

If the public cloud service endpoint is enabled for your cluster, authorized requests are still permitted through the public cloud service endpoint. Therefore, the private cloud service endpoint allowlist is most effective for controlling access to clusters that have only the private cloud service endpoint enabled.

To create a private cloud service endpoint allowlist:

  1. Get the subnets that you want to add to the allowlist. For example, you might get the subnet for the connection through your VPN or Direct Link tunnel to your IBM Cloud private network.

  2. Enable the subnet allowlist feature for a cluster's private cloud service endpoint. After you enable the allowlist, access to the cluster through the private cloud service endpoint is blocked for any requests that originate from a subnet that is not in the allowlist. Your worker nodes continue to run and have access to the master.

    ibmcloud oc cluster master private-service-endpoint allowlist enable --cluster <cluster_name_or_ID>
    
  3. Add subnets from which authorized users can access your private cloud service endpoint to the allowlist.

    ibmcloud oc cluster master private-service-endpoint allowlist add --cluster <cluster_name_or_ID> --subnet <subnet_CIDR> [--subnet <subnet_CIDR> ...]
    
  4. Verify that the subnets in your allowlist are correct. The allowlist includes subnets that you manually added and subnets that are automatically added and managed by IBM, such as worker node subnets.

    ibmcloud oc cluster master private-service-endpoint allowlist get --cluster <cluster_name_or_ID>
    

Your authorized users can now continue with Accessing clusters through the private cloud service endpoint.

Accessing Red Hat OpenShift clusters on Satellite

After you create a Red Hat OpenShift cluster in your Satellite location, you can begin working with your cluster by accessing it.

When you access your cluster and run oc get nodes or oc describe node <worker_node> commands, you might see that the worker nodes are assigned master,worker roles. In OpenShift Container Platform clusters, operators use the master role as a nodeSelector so that OCP can deploy default components that are controlled by operators, such as the internal registry, in your cluster. The Satellite hosts that you assigned to your cluster function as worker nodes only, and no master node processes, such as the API server or Kubernetes scheduler, run on your worker nodes.
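
For example, a node listing shows both roles on each worker node. The following output is illustrative, and the node name, age, and version are placeholders.

  oc get nodes

  Example output

  NAME            STATUS   ROLES           AGE   VERSION
  10.176.48.106   Ready    master,worker   28d   v1.23.5+012e945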

Accessing clusters through the cluster service URL

Connect to your cluster through its service URL. This URL consists of one of your Satellite location subdomains and a node port, such as https://p1iuql40jam23qiuxt833-q9err0fiffbsar61e78vv6e7ds8ne1tx-ce00.us-east.satellite.appdomain.cloud:30710.

If your location hosts have private network connectivity only, or if you use Amazon Web Services, Google Cloud Platform, or Microsoft Azure hosts, you must be connected to your hosts' private network, such as through VPN access, to connect to your cluster and access the Red Hat OpenShift web console. Alternatively, if your hosts have public network connectivity, you can test access to your cluster by changing your cluster's and location's DNS records to use your hosts' public IP addresses.

You can quickly access your Red Hat OpenShift on IBM Cloud cluster from the console.

  1. In the Red Hat OpenShift clusters console, click the cluster that you want to access.
  2. Click Red Hat OpenShift web console.
  3. Click your profile name, such as IAM#name@email.com, and then click Copy Login Command.
  4. Click Display Token, and copy the oc login command.
  5. Paste the command into your command line.

For security reasons, first log out of the IBM Cloud console and then log out of the Red Hat OpenShift web console before you close your browser. You must complete both steps in the specified order to successfully log out of the Red Hat OpenShift web console.

If you can't or don't want to open the Red Hat OpenShift console, you can instead log in to your cluster from the CLI by using the options that are described in Connecting to the cluster from the CLI, such as logging in as admin, with an API key, or with an IBM Cloud passcode.

Accessing clusters from the public network

If your hosts have public network connectivity, and you want to access your cluster from your local machine without being connected to your hosts' private network, you can optionally update your cluster's subdomain and location's DNS record to use the public IP addresses of your hosts.

For most location setups, the private IP addresses of your hosts are registered for the location's DNS record so that you can access your cluster only if you are connected to your cloud provider's private network.

For example, if you use Amazon Web Services, Google Cloud Platform, or Microsoft Azure hosts, or if your hosts' default network interface is private, your location's DNS record is accessible only on the private network.

To run kubectl or oc commands against your cluster or access the Red Hat OpenShift web console, you must be connected to your hosts' private network, such as through VPN access. However, if you want to access your cluster from the public network, such as to test access to your cluster from your local machine, you can change the DNS records for your location and cluster subdomains to use your hosts' public IPs instead.

Making your location and cluster subdomains available outside of your hosts' private network to your authorized cluster users is not recommended for production-level workloads.

  1. Review the location subdomains and check the Records for the private IP addresses of the hosts that are registered in the DNS for the subdomain.
    ibmcloud sat location dns ls --location <location_name_or_ID>
    
  2. Retrieve the matching public IP addresses of your hosts.
    ibmcloud sat host ls --location <location_name_or_ID>
    
  3. Update the location subdomain DNS record with the public IP addresses of each host in the control plane.
    ibmcloud sat location dns register --location <location_name_or_ID> --ip <host_IP> --ip <host_IP> --ip <host_IP>
    
  4. Verify that the public IP addresses are registered with your location DNS record.
    ibmcloud sat location dns ls --location <location_name_or_ID>
    
  5. Get the Hostname for your cluster in the format <service_name>-<project>.<cluster_name>-<random_hash>-0000.upi.containers.appdomain.cloud and note the private IP addresses that were automatically registered.
    ibmcloud oc nlb-dns ls --cluster <cluster_name_or_ID>
    
  6. Add the public IP addresses of the hosts that are assigned as worker nodes to this cluster to your cluster's subdomain. Repeat this command for each host's public IP address.
    ibmcloud oc nlb-dns add --ip <public_IP> --cluster <cluster_name_or_ID> --nlb-host <hostname>
    
  7. Remove the private IP addresses from your cluster's subdomain. Repeat this command for all private IP addresses that you retrieved earlier.
    ibmcloud oc nlb-dns rm classic --ip <private_IP> --cluster <cluster_name_or_ID> --nlb-host <hostname>
    
  8. Verify that the public IP addresses are registered with your cluster subdomain.
    ibmcloud oc nlb-dns ls --cluster <cluster_name_or_ID>
    

Accessing VPC clusters through the Virtual Private Endpoint Gateway

A Virtual Private Endpoint (VPE) Gateway is created automatically for VPC clusters. The Kubernetes master is accessible through this VPE gateway if authorized cluster users are connected to the same VPC where the cluster is deployed, such as through an IBM Cloud VPC VPN. In this case, the kubeconfig file is configured with the VPE URL, which is a private DNS name that can be resolved only by the IBM Cloud VPC Private DNS service. The IBM Cloud VPC Private DNS server addresses are 161.26.0.7 and 161.26.0.8.
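
To confirm that the VPE URL resolves over the private DNS service, you can query one of those servers from a machine that is connected to the VPC. The following dig check is a sketch, and the hostname is a placeholder.

  dig +short <vpe_url> @161.26.0.7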

For clusters that run version 4.13: If you enabled only the private cloud service endpoint during cluster creation, the virtual private endpoint of your VPC is used by default to access Red Hat OpenShift components such as the Red Hat OpenShift web console or OperatorHub. You must be connected to the private VPC network, such as through a VPN connection, to access these components or run kubectl commands on your cluster. Note that to access the console through the VPE, you must be able to access cloud.ibm.com, which requires public connectivity.

  1. Set up your IBM Cloud VPC VPN and connect to your VPC through VPN.

    1. Configure a client-to-site or site-to-site VPN to your VPC. For example, you might choose to set up a client-to-site connection with a VPN Client.
    2. For client-to-site VPN setups, you must specify the IBM Cloud VPC Private DNS service addresses when you provision the VPN server as mentioned in the considerations. You must also create a VPN route after the VPN server is provisioned, with the destination 161.26.0.0/16 and action translate.
    3. For site-to-site VPN setups, you must follow the Accessing service endpoints through VPN guide and configure the IBM Cloud VPC Private DNS service addresses.
    4. Verify that you are connected to the VPC through your IBM Cloud VPC VPN connection.
  2. To log in to your cluster, choose from the following options.

    • Log in as admin:
      1. Make sure that you have the Administrator platform access role for the cluster.
      2. Set your command line context for the cluster and download the TLS certificates and permission files for the administrator.
        ibmcloud oc cluster config -c <cluster_name_or_ID> --admin --endpoint vpe
        
    • Log in with an API key: See Using an API key to log in to Red Hat OpenShift.
    • Log in with IBM Cloud passcode:
      1. Get the VPE address and the master URL in the output of the following command.
        ibmcloud oc cluster get -c <cluster_name_or_ID>
        
      2. In your browser, open the following IBM Cloud IAM passcode website.
        https://iam.cloud.ibm.com/identity/passcode
        
      3. Log in with your IBMid and copy the passcode.
      4. Log in to your cluster with the passcode.
        oc login -u passcode -p <iam_passcode> --server=https://<VPE_URL>:<port_from_the_master_URL>
        
        The login procedure warns that the certificate is signed by an unknown authority because the master presents a self-signed certificate that is generated by IBM.
  3. Verify that the oc commands run properly with your cluster through the VPE gateway by checking the version.

    oc version
    

    Example output

    Client Version: 4.5.0-0.okd-2020-09-04-180756
    Server Version: 4.5.35
    Kubernetes Version: v1.18.3+cdb0358
    

Accessing clusters from automation tools by using an API key

Red Hat OpenShift on IBM Cloud is integrated with IBM Cloud Identity and Access Management (IAM). With IAM, you can authenticate users and services by using their IAM identities and authorize actions with access roles and policies. When you authenticate as a user through the Red Hat OpenShift console, your IAM identity is used to generate a Red Hat OpenShift login token that you can use to log in to the command line. You can automate logging in to your cluster by creating an IAM API key or service ID to use for the oc login command.
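
For example, a minimal non-interactive login for a CI pipeline can chain the commands that the following sections describe. This sketch assumes that the environment variables are set beforehand; depending on your account setup, ibmcloud login might also need a region (-r) or resource group (-g).

  # Authenticate to IBM Cloud, set the cluster context, then exchange
  # the same API key for a Red Hat OpenShift session.
  ibmcloud login --apikey "$IBMCLOUD_API_KEY"
  ibmcloud oc cluster config -c "$CLUSTER_NAME_OR_ID"
  oc login -u apikey -p "$IBMCLOUD_API_KEY"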

Using an API key to log in to clusters

You can create an IBM Cloud IAM API key and then use the API key to log in to a Red Hat OpenShift cluster. With API keys, you can use the credentials of one user or shared account to access a cluster, instead of logging in individually. You might also create an API key for a service ID. For more information, see Understanding API keys.

  1. Create an IBM Cloud API key. Save your API key in a secure location. You can't retrieve the API key again. If you want to export the output to a file on your local machine, include the --file <path>/<file_name> option.

    ibmcloud iam api-key-create <name>
    
  2. Configure your cluster to add the API key user to your cluster RBAC policies and to set your session context to your cluster server.

    1. Log in to IBM Cloud with the API key credentials.
      ibmcloud login --apikey <API_key>
      
    2. Download and add the kubeconfig configuration file for your cluster to your existing kubeconfig in ~/.kube/config or the last file in the KUBECONFIG environment variable. Note: If you enabled the private cloud service endpoint and want to use it for the cluster context, include the --endpoint private option. To use the private cloud service endpoint to connect to your cluster, you must be in your IBM Cloud private network or connected to the private network through a VPC VPN connection, or for classic infrastructure, a classic VPN connection or IBM Cloud Direct Link.
      ibmcloud oc cluster config -c <cluster_name_or_ID> [--endpoint private]
      
  3. Exchange your IBM Cloud IAM API key credentials for a Red Hat OpenShift access token. You can log in from the CLI or API. For more information, see the Red Hat OpenShift docs.

    Log in by using the oc CLI: Log in to your cluster with the oc login command. The username (-u) is apikey and the password (-p) is your IBM Cloud IAM API key value. To use the private cloud service endpoint, include the --server=<private_service_endpoint> option.

    oc login -u apikey -p <API_key> [--server=<private_service_endpoint>]
    

    Log in by running Red Hat OpenShift API requests directly against your cluster: Log in to your cluster through the API, such as with a curl request.

    1. Get the Master URL of your cluster.

      ibmcloud oc cluster get -c <cluster_name_or_ID>
      

      Example output

      NAME:                           mycluster
      ID:                             1234567
      State:                          normal
      Created:                        2020-01-22T19:22:16+0000
      Location:                       dal10
      Master URL:                     https://c100-e.<region>.containers.cloud.ibm.com:<port>
      ...
      
      
    2. Get the token endpoint of the Red Hat OpenShift oauth server.

      curl <master_URL>/.well-known/oauth-authorization-server | jq -r .token_endpoint
      

      Example output

      <token_endpoint>/oauth/token
      


    3. Log in to the cluster with the endpoint that you previously retrieved. Replace <URL> with the <token_endpoint> portion of the output, without the trailing /oauth/token path.

      Example curl request:

      curl -u 'apikey:<API_key>' -H "X-CSRF-Token: a" '<URL>/oauth/authorize?client_id=openshift-challenging-client&response_type=token' -vvv
      
    4. In the Location response header, find the access_token, such as in the following example. A scripted way to extract and use the token is shown after step 5.

      < HTTP/1.1 302 Found
      < Cache-Control: no-cache, no-store, max-age=0, must-revalidate
      < Cache-Control: no-cache, no-store, max-age=0, must-revalidate
      < Expires: 0
      < Expires: Fri, 01 Jan 1990 00:00:00 GMT
      < Location: <token_endpoint>/oauth/token/implicit#access_token=<access_token>&expires_in=86400&scope=user%3Afull&token_type=Bearer
      ...
      
    5. Use your cluster master URL and the access token to access the Red Hat OpenShift API, such as to list all the pods in your cluster. For more information, see the Red Hat OpenShift API documentation.

      Example curl request:

      curl -H "Authorization: Bearer <access_token>" '<master_URL>/api/v1/pods'
      

Using a service ID to log in to clusters

You can create an IBM Cloud IAM service ID, make an API key for the service ID, and then use the API key to log in to a Red Hat OpenShift cluster. You might use service IDs so that apps that are hosted in other clusters or clouds can access your cluster's services. Because service IDs are not tied to a specific user, your apps can authenticate if individual users leave your account. For more information, see Creating and working with service IDs.

  1. Create an IBM Cloud IAM service ID for your cluster that is used for the IAM policies and API key credentials. Be sure to give the service ID a description that helps you retrieve the service ID later, such as including the cluster name.

    ibmcloud iam service-id-create <cluster_name>-id --description "Service ID for Red Hat OpenShift on IBM Cloud cluster <cluster_name>"
    

    Example output

    NAME          <cluster_name>-id
    Description   Service ID for Red Hat OpenShift on IBM Cloud cluster <cluster_name>
    CRN           crn:v1:bluemix:public:iam-identity::a/1aa111aa1a11111aaa1a1111aa1aa111::serviceid:ServiceId-bbb2b2b2-2bb2-2222-b222-b2b2b2222b22
    Bound To      crn:v1:bluemix:public:::a/1aa111aa1a11111aaa1a1111aa1aa111:::
    Version       1-c3c333333333ccccc33333c33cc3cc33
    Locked        false
    UUID          ServiceId-bbb2b2b2-2bb2-2222-b222-b2b2b2222b22
    
  2. Create a custom IBM Cloud IAM policy for your cluster service ID that grants access to Red Hat OpenShift on IBM Cloud.

    ibmcloud iam service-policy-create <cluster_service_ID> --service-name containers-kubernetes --roles <service_access_role> --service-instance <cluster_ID>
    
    Understanding this command's components:
    • <cluster_service_ID>: Required. The service ID that you previously created for your Red Hat OpenShift cluster.
    • --service-name containers-kubernetes: Required. Enter containers-kubernetes so that the IAM policy applies to Red Hat OpenShift on IBM Cloud clusters.
    • --roles <service_access_role>: Required. The access role that you want the service ID to have for your Red Hat OpenShift cluster. Platform access roles permit cluster management activities, such as creating worker nodes. Service access roles correspond to RBAC roles that permit Red Hat OpenShift management activities within the cluster, such as for Kubernetes resources like pods and namespaces. For multiple roles, include a comma-separated list. Possible values are Administrator, Operator, Editor, and Viewer (platform access roles); and Reader, Writer, and Manager (service access roles).
    • --service-instance <cluster_ID>: To restrict the policy to a particular cluster, enter the cluster's ID. To get your cluster ID, run ibmcloud oc clusters. If you don't include the service instance, the access policy grants the service ID access to all your clusters, both Kubernetes and Red Hat OpenShift. You can also scope the access policy to a region (--region) or resource group (--resource-group-name).
  3. Create an API key for the service ID. Name the API key similar to your service ID, and include the service ID that you previously created, <cluster_name>-id. Be sure to give the API key a description that helps you retrieve the key later. Save your API key in a secure location. You can't retrieve the API key again. If you want to export the output to a file on your local machine, include the --file <path>/<file_name> option.

    ibmcloud iam service-api-key-create <cluster_name>-key <service_ID> --description "API key for service ID <service_ID> in Red Hat OpenShift cluster <cluster_name>"
    

    Example output

    Please preserve the API key! It can't be retrieved after it's created.
    
    Name          <cluster_name>-key
    Description   API key for service ID <service_ID> in Red Hat OpenShift cluster <cluster_name>
    Bound To      crn:v1:bluemix:public:iam-identity::a/1bb222bb2b33333ddd3d3333ee4ee444::serviceid:ServiceId-ff55555f-5fff-6666-g6g6-777777h7h7hh
    Created At    2019-02-01T19:06+0000
    API Key       i-8i88ii8jjjj9jjj99kkkkkkkkk_k9-llllll11mmm1
    Locked        false
    UUID          ApiKey-222nn2n2-o3o3-3o3o-4p44-oo444o44o4o4
    
  4. Configure your cluster to add the service ID user to your cluster RBAC policies and to set your session context to your cluster server.

    1. Log in to IBM Cloud with the service ID's API key credentials.

      ibmcloud login --apikey <API_key>
      
    2. Download and add the kubeconfig configuration file for your cluster to your existing kubeconfig in ~/.kube/config or the last file in the KUBECONFIG environment variable. Note: If you enabled the private cloud service endpoint and want to use it for the cluster context, include the --endpoint private option. To use the private cloud service endpoint to connect to your cluster, you must be in your IBM Cloud private network or connected to the private network through a VPC VPN connection, or for classic infrastructure, a classic VPN connection or IBM Cloud Direct Link.

      ibmcloud oc cluster config -c <cluster_name_or_ID> [--endpoint private]
      
  5. Use the service ID's API key to log in to your Red Hat OpenShift cluster. The username (-u) is apikey and the password (-p) is your API key value. To use the private cloud service endpoint, include the --server=<private_service_endpoint> option.

    oc login -u apikey -p <API_key> [--server=<private_service_endpoint>]
    
  6. Verify that the service ID can perform the actions that you authorized.

    Example: If you assigned a Reader service access role, the service ID can list pods in your Red Hat OpenShift project.

    oc get pods
    

    Example: If you assigned a Manager service access role, the service ID can list the users in your Red Hat OpenShift cluster. The ID of your IAM service ID is in the Identities output. Other individual users might be identified by their email address and IBMid.

    oc get users
    

    Example output

    NAME                           UID                                    FULL NAME   IDENTITIES
    IAM#                           dd44ddddd-d4dd-44d4-4d44-4d44d444d444              IAM:iam-ServiceId-bbb2b2b2-2bb2-2222-b222-b2b2b2222b22
    IAM#first.last@email.com       55555ee5-e555-55e5-e5e5-555555ee55ee               IAM:IBMid-666666FFF6