Deploying Kubernetes-native apps in clusters

You can use Kubernetes techniques in IBM Cloud® Kubernetes Service to deploy apps in containers and ensure that those apps are up and running. For example, you can perform rolling updates and rollbacks without downtime for your users.
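
For example, a rolling update and a rollback can be driven from the CLI with the kubectl rollout commands. A minimal sketch, where the deployment, container, and image names are placeholders:

    kubectl set image deployment/<deployment_name> <container_name>=<image>:<new_tag>
    kubectl rollout status deployment/<deployment_name>
    kubectl rollout undo deployment/<deployment_name>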

For more information about creating a configuration file for your application, see Configuration Best Practices.

Launching the Kubernetes dashboard

Open the Kubernetes dashboard on your local system to view information about a cluster and its worker nodes. In the IBM Cloud console, you can access the dashboard with a convenient one-click button. From the CLI, you can access the dashboard directly or include the steps in an automation process, such as a CI/CD pipeline.

Do you have so many resources and users in your cluster that the Kubernetes dashboard is a little slow? Your cluster admin can scale the kubernetes-dashboard deployment by running kubectl -n kube-system scale deploy kubernetes-dashboard --replicas=3.

To check the logs for individual app pods, run kubectl logs <pod name>. Do not use the Kubernetes dashboard to stream logs for your pods, because doing so might disrupt your access to the Kubernetes dashboard.
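
For example, to follow the logs for a pod from the CLI instead of the dashboard, you can use the kubectl logs -f flag (the pod name and namespace are placeholders):

    kubectl logs -f <pod_name> -n <namespace>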

Before you begin

You can use the default port or set your own port to launch the Kubernetes dashboard for a cluster.
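
For example, the following sketch runs the dashboard proxy on a custom local port; the port number 9090 is just an illustrative choice:

    kubectl proxy --port=9090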

Launching the Kubernetes dashboard from the IBM Cloud console

  1. Log in to the IBM Cloud console.
  2. From the menu bar, select the account that you want to use.
  3. From the menu, click Kubernetes.
  4. On the Clusters page, click the cluster that you want to access.
  5. From the cluster detail page, click the Kubernetes Dashboard button.

Launching the Kubernetes dashboard from the CLI

Before you begin, install the CLI.

  1. Get your credentials for Kubernetes.

    kubectl config view -o jsonpath='{.users[0].user.auth-provider.config.id-token}'
    
  2. Copy the id-token value that is shown in the output.

  3. Set the proxy with the default port number.

    kubectl proxy
    

    Example output

    Starting to serve on 127.0.0.1:8001
    
  4. Sign in to the dashboard.

    1. In your browser, navigate to the following URL:

      http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
      
    2. On the sign-on page, select the Token authentication method.

    3. Then, paste the id-token value that you previously copied into the Token field and click SIGN IN.

When you are done with the Kubernetes dashboard, use CTRL+C to exit the proxy command. After you exit, the Kubernetes dashboard is no longer available. Run the proxy command to restart the Kubernetes dashboard.

Next, you can run a configuration file from the dashboard.

Deploying apps with the Kubernetes dashboard

When you deploy an app to your cluster by using the Kubernetes dashboard, a deployment resource automatically creates, updates, and manages the pods in your cluster. For more information about using the dashboard, see the Kubernetes docs.

Do you have so many resources and users in your cluster that the Kubernetes dashboard is a little slow? Your cluster admin can scale the kubernetes-dashboard deployment by running kubectl -n kube-system scale deploy kubernetes-dashboard --replicas=3.

Before you begin

To deploy your app,

  1. Open the Kubernetes dashboard and click + Create.

  2. Enter your app details in one of two ways.

    • Select Specify app details and enter the details.
    • Select Upload a YAML or JSON file to upload your app configuration file.

    Need help with your configuration file? Check out the example deployment sketch after these steps, which deploys a container from the ibmliberty image in the US-South registry. Learn more about securing your personal information when you work with Kubernetes resources.

  3. Verify that you successfully deployed your app in one of the following ways.

    • In the Kubernetes dashboard, click Deployments. A list of successful deployments is displayed.
    • If your app is publicly available, navigate to the cluster overview page in your IBM Cloud® Kubernetes Service dashboard. Copy the subdomain, which is located in the cluster summary section, and paste it into a browser to view your app.
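
The following is a minimal sketch of a deployment configuration file for the ibmliberty image. The registry namespace, replica count, and container port are placeholder assumptions that you adjust for your own app:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ibmliberty-deployment
      labels:
        app: ibmliberty
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: ibmliberty
      template:
        metadata:
          labels:
            app: ibmliberty
        spec:
          containers:
          - name: ibmliberty
            # Placeholder image path: replace <namespace> with your US-South registry namespace.
            image: us.icr.io/<namespace>/ibmliberty:latest
            ports:
            - containerPort: 9080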

Deploying apps with the CLI

After a cluster is created, you can deploy an app into that cluster by using the Kubernetes CLI.

Before you begin

To deploy your app,

  1. Create a configuration file based on Kubernetes best practices. Generally, a configuration file contains configuration details for each of the resources that you are creating in Kubernetes. Your configuration file might include one or more of the following sections:

    • Deployment: Defines the creation of pods and replica sets. A pod includes an individual containerized app, and replica sets control multiple instances of pods.

    • Service: Provides front-end access to pods by using a worker node or load balancer public IP address, or a public Ingress route.

    • Ingress: Specifies a type of load balancer that provides routes to access your app publicly.

    For a sketch of the Service and Ingress sections, see the example after these steps. Learn more about securing your personal information when you work with Kubernetes resources.

  2. Run the configuration file in a cluster's context.

    kubectl apply -f config.yaml
    
  3. If you made your app publicly available by using a NodePort service, a load balancer service, or Ingress, verify that you can access the app.
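
The following is a minimal sketch of the Service and Ingress sections that are described in step 1, assuming the ibmliberty deployment from the earlier example. The Ingress subdomain and TLS secret name are placeholders for your cluster's values:

    apiVersion: v1
    kind: Service
    metadata:
      name: ibmliberty-service
    spec:
      selector:
        app: ibmliberty
      ports:
      - port: 80
        targetPort: 9080
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ibmliberty-ingress
    spec:
      tls:
      - hosts:
        - <app_subdomain>        # placeholder Ingress subdomain for your cluster
        secretName: <tls_secret> # placeholder TLS secret
      rules:
      - host: <app_subdomain>
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ibmliberty-service
                port:
                  number: 80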

Deploying apps to specific worker nodes by using labels

When you deploy an app, the app pods indiscriminately deploy to various worker nodes in your cluster. Sometimes, you might want to restrict the worker nodes that the app pods deploy to. For example, you might want app pods to deploy only to worker nodes in a certain worker pool because those worker nodes are on bare metal machines. To designate the worker nodes that app pods must deploy to, add an affinity rule to your app deployment.

Before you begin

To deploy apps to specific worker nodes,

  1. Get the ID of the worker pool that you want to deploy app pods to.

    ibmcloud ks worker-pool ls --cluster <cluster_name_or_ID>
    
  2. List the worker nodes that are in the worker pool, and note one of the Private IP addresses.

    ibmcloud ks worker ls --cluster <cluster_name_or_ID> --worker-pool <worker_pool_name_or_ID>
    
  3. Describe the worker node. In the Labels output, note the worker pool ID label, ibm-cloud.kubernetes.io/worker-pool-id.

    The steps in this topic use a worker pool ID to deploy app pods only to worker nodes within that worker pool. To deploy app pods to specific worker nodes by using a different label, note that label instead. For example, to deploy app pods only to worker nodes on a specific private VLAN, use the privateVLAN label (see the nodeSelector sketch after these steps).

    kubectl describe node <worker_node_private_IP>
    

    Example output

    Name:               10.xxx.xx.xxx
    Roles:              <none>
    Labels:             arch=amd64
                        beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/instance-type=b3c.4x16.encrypted
                        beta.kubernetes.io/os=linux
                        failure-domain.beta.kubernetes.io/region=us-south
                        failure-domain.beta.kubernetes.io/zone=dal10
                        ibm-cloud.kubernetes.io/encrypted-docker-data=true
                        ibm-cloud.kubernetes.io/ha-worker=true
                        ibm-cloud.kubernetes.io/iaas-provider=softlayer
                        ibm-cloud.kubernetes.io/machine-type=b3c.4x16.encrypted
                        ibm-cloud.kubernetes.io/sgx-enabled=false
                        ibm-cloud.kubernetes.io/worker-pool-id=00a11aa1a11aa11a1111a1111aaa11aa-11a11a
                        ibm-cloud.kubernetes.io/worker-version=1.29_1534
                        kubernetes.io/hostname=10.xxx.xx.xxx
                        privateVLAN=1234567
                        publicVLAN=7654321
    Annotations:        node.alpha.kubernetes.io/ttl=0
    ...
    
  4. Add an affinity rule for the worker pool ID label to the app deployment.

    Example YAML

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: with-node-affinity
    spec:
      template:
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: ibm-cloud.kubernetes.io/worker-pool-id
                    operator: In
                    values:
                    - <worker_pool_ID>
    ...
    

    In the affinity section of the example YAML, ibm-cloud.kubernetes.io/worker-pool-id is the key and <worker_pool_ID> is the value.

  5. Apply the updated deployment configuration file.

    kubectl apply -f with-node-affinity.yaml
    
  6. Verify that the app pods deployed to the correct worker nodes.

    1. List the pods in your cluster.

      kubectl get pods -o wide
      

      Example output

      NAME                   READY     STATUS              RESTARTS   AGE       IP               NODE
      cf-py-d7b7d94db-vp8pq  1/1       Running             0          15d       172.30.xxx.xxx   10.176.48.78
      
    2. In the output, identify a pod for your app. Note the NODE private IP address of the worker node that the pod is on.

      In the previous example output, the app pod cf-py-d7b7d94db-vp8pq is on the worker node with the IP address 10.176.48.78.

    3. List the worker nodes in the worker pool that you designated in your app deployment.

      ibmcloud ks worker ls --cluster <cluster_name_or_ID> --worker-pool <worker_pool_name_or_ID>
      

      Example output

      ID                                                 Public IP       Private IP     Machine Type      State    Status  Zone    Version
      kube-dal10-crb20b637238bb471f8b4b8b881bbb4962-w7   169.xx.xxx.xxx  10.176.48.78   b3c.4x16          normal   Ready   dal10   1.8.6_1504
      kube-dal10-crb20b637238bb471f8b4b8b881bbb4962-w8   169.xx.xxx.xxx  10.176.48.83   b3c.4x16          normal   Ready   dal10   1.8.6_1504
      kube-dal12-crb20b637238bb471f8b4b8b881bbb4962-w9   169.xx.xxx.xxx  10.176.48.69   b3c.4x16          normal   Ready   dal12   1.8.6_1504
      

      If you created an app affinity rule based on another factor, get that value instead. For example, to verify that the app pod deployed to a worker node on a specific VLAN, view the VLAN that the worker node is on by running ibmcloud ks worker get --cluster <cluster_name_or_ID> --worker <worker_ID>.

    4. In the output, verify that the worker node with the private IP address that you identified in the previous step is deployed in this worker pool.
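
As mentioned in step 3, if you constrain pods by a single label such as privateVLAN rather than by the worker pool ID, a nodeSelector is a simpler alternative to an affinity rule. The following is a minimal sketch, assuming the VLAN ID 1234567 from the earlier example label output; the app label and image are placeholders:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: with-node-selector
    spec:
      selector:
        matchLabels:
          app: myapp            # placeholder app label
      template:
        metadata:
          labels:
            app: myapp
        spec:
          nodeSelector:
            privateVLAN: "1234567"
          containers:
          - name: myapp
            image: <your_image> # placeholder image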

Deploying an app on a GPU machine

If you have a GPU machine type, you can speed up the processing time for compute-intensive workloads such as AI, machine learning, and inferencing.

In IBM Cloud Kubernetes Service, the required GPU drivers are automatically installed for you.

In the following steps, you learn how to deploy workloads that require the GPU. However, you can also deploy apps that don't need the GPU and instead process their workloads on the CPU only.

You can also try mathematically intensive workloads such as the TensorFlow machine learning framework with this Kubernetes demo.

Prerequisites

Before you begin

  • Create a cluster or worker pool that uses a GPU flavor. Keep in mind that setting up a bare metal machine can take more than one business day to complete. For a list of available flavors, see the following links.

  • Make sure that you are assigned a service access role that grants the appropriate Kubernetes RBAC role so that you can work with Kubernetes resources in the cluster.

Deploying a workload

  1. Create a YAML file. In this example, a Job YAML manages batch-like workloads by making a short-lived pod that runs until the command completes and successfully terminates.

    For GPU workloads, you must specify the resources: limits: nvidia.com/gpu field in the job YAML.

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: nvidia-devicequery
      labels:
        name: nvidia-devicequery
    spec:
      template:
        metadata:
          labels:
            name: nvidia-devicequery
        spec:
          containers:
          - name: nvidia-devicequery
            image: nvcr.io/nvidia/k8s/cuda-sample:devicequery-cuda11.7.1-ubuntu20.04
            imagePullPolicy: IfNotPresent
            resources:
              limits:
                nvidia.com/gpu: 2
          restartPolicy: Never
    
    Table 1. Understanding your YAML components

    • Metadata and label names: Enter a name and a label for the job, and use the same name in both the file's metadata and the spec template metadata. For example, nvidia-devicequery.

    • containers.image: Provide the image that the container is a running instance of. In this example, the value is set to the CUDA device query sample image from the NVIDIA container registry: nvcr.io/nvidia/k8s/cuda-sample:devicequery-cuda11.7.1-ubuntu20.04.

    • containers.imagePullPolicy: To pull a new image only if the image is not currently on the worker node, specify IfNotPresent.

    • resources.limits: For GPU machines, you must specify the resource limit. The Kubernetes device plug-in sets the default resource request to match the limit.

      • You must specify the key as nvidia.com/gpu.
      • Enter the whole number of GPUs that you request, such as 2. Note that container pods don't share GPUs and GPUs can't be overcommitted. For example, if you have only 1 mg1c.16x128 machine, then you have only 2 GPUs in that machine and can specify a maximum of 2.
  2. Apply the YAML file. For example:

    kubectl apply -f nvidia-devicequery.yaml
    
  3. Check the job pod by filtering your pods by the nvidia-devicequery label. Verify that the STATUS is Completed.

    kubectl get pod -l 'name in (nvidia-devicequery)'
    

    Example output

    NAME                  READY     STATUS      RESTARTS   AGE
    nvidia-devicequery-ppkd4      0/1       Completed   0          36s
    
  4. Describe the pod to see how the GPU device plug-in scheduled the pod.

    • In the Limits and Requests fields, see that the resource limit that you specified matches the request that the device plug-in automatically set.

    • In the events, verify that the pod is assigned to your GPU worker node.

      kubectl describe pod nvidia-devicequery-ppkd4
      

      Example output

      NAME:           nvidia-devicequery-ppkd4
      Namespace:      default
      ...
      Limits:
          nvidia.com/gpu:  1
      Requests:
          nvidia.com/gpu:  1
      ...
      Events:
      Type    Reason                 Age   From                     Message
      ----    ------                 ----  ----                     -------
      Normal  Scheduled              1m    default-scheduler        Successfully assigned nvidia-devicequery-ppkd4 to 10.xxx.xx.xxx
      ...
      
  5. To verify that the job used the GPU to compute its workload, you can check the logs.

    kubectl logs nvidia-devicequery-ppkd4
    

    Example output

    /cuda-samples/sample Starting...
    
    CUDA Device Query (Runtime API) version (CUDART static linking)
    
    Detected 1 CUDA Capable device(s)
    
    Device 0: "Tesla P100-PCIE-16GB"
    CUDA Driver Version / Runtime Version          11.4 / 11.7
    CUDA Capability Major/Minor version number:    6.0
    Total amount of global memory:                 16281 MBytes (17071734784 bytes)
    (056) Multiprocessors, (064) CUDA Cores/MP:    3584 CUDA Cores
    GPU Max Clock rate:                            1329 MHz (1.33 GHz)
    Memory Clock rate:                             715 Mhz
    Memory Bus Width:                              4096-bit
    L2 Cache Size:                                 4194304 bytes
    Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
    Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
    Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
    Total amount of constant memory:               65536 bytes
    Total amount of shared memory per block:       49152 bytes
    Total shared memory per multiprocessor:        65536 bytes
    Total number of registers available per block: 65536
    Warp size:                                     32
    Maximum number of threads per multiprocessor:  2048
    Maximum number of threads per block:           1024
    Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
    Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
    Maximum memory pitch:                          2147483647 bytes
    Texture alignment:                             512 bytes
    Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
    Run time limit on kernels:                     No
    Integrated GPU sharing Host Memory:            No
    Support host page-locked memory mapping:       Yes
    Alignment requirement for Surfaces:            Yes
    Device has ECC support:                        Enabled
    Device supports Unified Addressing (UVA):      Yes
    Device supports Managed Memory:                Yes
    Device supports Compute Preemption:            Yes
    Supports Cooperative Kernel Launch:            Yes
    Supports MultiDevice Co-op Kernel Launch:      Yes
    Device PCI Domain ID / Bus ID / location ID:   0 / 175 / 0
    Compute Mode:
    < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
    
    deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 11.7, NumDevs = 1
    Result = PASS
    

    In this example output, one GPU was used to run the job because one GPU was scheduled on the worker node. If the limit is set to 2, two GPUs are shown.

Now that you deployed a test GPU workload, you might want to set up your cluster to run a tool that relies on GPU processing, such as IBM Maximo Visual Inspection.