IBM Cloud Docs
Scalable web application on Kubernetes

This tutorial may incur costs. Use the Cost Estimator to generate a cost estimate based on your projected usage.

This tutorial walks you through how to run a web application locally in a container, and then deploy it to a Kubernetes cluster created with Kubernetes Service. As an optional step you can build a container image and push the image to a private registry. Additionally, you will learn how to bind a custom subdomain, monitor the health of the environment, and scale the application.

Containers are a standard way to package apps and all their dependencies so that you can seamlessly move the apps between environments. Unlike virtual machines, containers do not bundle the operating system. Only the app code, run time, system tools, libraries, and settings are packaged inside containers. Containers are more lightweight, portable, and efficient than virtual machines.

Objectives

  • Deploy a web application to the Kubernetes cluster.
  • Bind a custom subdomain.
  • Monitor the logs and health of the cluster.
  • Scale Kubernetes pods.

Architecture
Architecture diagram

  1. A developer downloads or clones a sample web application.
  2. Optionally build the application to produce a container image.
  3. Optionally the image is pushed to a namespace in the IBM Cloud Container Registry.
  4. The application is deployed to a Kubernetes cluster.
  5. Users access the application.

Before you begin

This tutorial requires:

  • IBM Cloud CLI,
    • IBM Cloud Kubernetes Service plugin (kubernetes-service),
  • kubectl to interact with Kubernetes clusters,
  • Helm 3 to deploy charts.

You will find instructions to download and install these tools for your operating environment in the Getting started with tutorials guide.

To avoid installing these tools, you can use the Cloud Shell from the IBM Cloud console.

In addition:

  • You will need a Secrets Manager instance. With Secrets Manager, you can create, lease, and centrally manage secrets that are used in IBM Cloud services or your custom-built applications. Secrets are stored in a dedicated Secrets Manager instance, and you can use built-in features to monitor for expiration and to schedule or manually rotate your secrets. In this tutorial, you will use a Kubernetes operator to retrieve a TLS certificate from Secrets Manager and inject it into a Kubernetes secret. You can use an existing instance if you already have one, or create a new one by following the steps outlined in Creating a Secrets Manager service instance.
  • Optionally set up a registry namespace. It is only needed if you plan to build your own custom container image.
  • Understand the basics of Kubernetes.

Enable service-to-service communication with Secrets Manager

Integrating Secrets Manager with your IBM Cloud Kubernetes Service cluster requires service-to-service communication authorization. Follow these steps to set up the authorization. For more information, see Integrations for Secrets Manager.

  1. In the IBM Cloud console, click Manage > Access (IAM).
  2. Click Authorizations.
  3. Click Create.
  4. In the Source service list, select Kubernetes Service.
  5. Select the option to scope the access to All resources.
  6. In the Target service list, select Secrets Manager.
  7. Select the option to scope the access to All resources.
  8. In the Service access section, check the Manager option.
  9. Click Authorize.
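If you prefer the CLI, the same service-to-service authorization can be created with a single command. This is a sketch that assumes you are already logged in with the IBM Cloud CLI; containers-kubernetes and secrets-manager are the CLI service names for Kubernetes Service and Secrets Manager.

```shell
# Grant all Kubernetes Service instances in the account Manager access
# to all Secrets Manager instances (mirrors the console steps above).
ibmcloud iam authorization-policy-create containers-kubernetes secrets-manager Manager
```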

Create a Kubernetes cluster

The IBM Cloud Kubernetes Service is a managed offering to create your own Kubernetes cluster of compute hosts to deploy and manage containerized apps on IBM Cloud. A minimal cluster with one (1) zone, one (1) worker node and the smallest available size (Flavor) is sufficient for this tutorial.

  1. Open the Kubernetes clusters and click Create cluster.

  2. Create a cluster on your choice of Infrastructure.

    • Follow these steps if you select VPC for Kubernetes on VPC infrastructure. You are required to create a VPC and subnet(s) before creating the Kubernetes cluster. Refer to the Creating VPC clusters documentation for more details.

      1. Click Create VPC.
      2. Under the Location section, select a Geography and Region, for example Europe and London.
      3. Enter a Name of your VPC, select a Resource group and optionally, add Tags to organize your resources.
      4. Uncheck Allow SSH and Allow ping from the Default security group.
      5. Uncheck Create subnet in every zone.
      6. Click on Create.
      7. Under Worker zones and subnets, uncheck the two zones for which the subnet wasn't created.
      8. Set the Worker nodes per zone to 1 and click on Change flavor to explore and change to the worker node size of your choice.
      9. Under Ingress, enable Ingress secrets management and select your existing Secrets Manager instance.
      10. Enter a Cluster name and select the same Resource group that you used for the VPC.
      11. Logging and Monitoring aren't required for this tutorial; disable those options and click on Create.
      12. While you wait for the cluster to become active, attach a public gateway to the VPC. Navigate to the Virtual private clouds page.
      13. Click on the name for the VPC used by the cluster and scroll down to subnets section.
      14. Click on the name of the subnet created earlier and in the Public Gateway section, click on Detached to change the state to Attached.
    • Follow these steps if you select Classic for Kubernetes on Classic infrastructure. Refer to the Creating a standard classic cluster documentation for more details.

      1. Under the Location section, select a Geography, multizone Availability, and Metro, for example Europe and London.
      2. Under Worker zones and VLANs, uncheck all zones except for one.
      3. Set the Worker nodes per zone to 1 and click on Change flavor to explore and change to the worker node size of your choice.
      4. Under Master service endpoint, select Both private & public endpoints.
      5. Under Ingress, enable Ingress secrets management and select your existing Secrets Manager instance.
      6. Enter a Cluster name and select the Resource group to create these resources under.
      7. Logging and Monitoring aren't required for this tutorial; disable those options and click on Create.
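The console flow above can also be approximated from the CLI. The following is a minimal sketch for a one-zone, one-worker VPC cluster; the VPC ID, subnet ID, zone, and flavor shown are placeholders that you must replace with values from your own account.

```shell
# Create a minimal single-zone VPC cluster (replace the placeholders).
ibmcloud ks cluster create vpc-gen2 \
  --name mycluster \
  --zone eu-gb-1 \
  --vpc-id <VPC_ID> \
  --subnet-id <SUBNET_ID> \
  --flavor bx2.4x16 \
  --workers 1
```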

Clone a sample application

In this section, you will clone a GitHub repo with a simple Helm-based Node.js sample application that has a landing page and two endpoints to get started. You can always extend the sample application based on your requirements.

  1. On a terminal, run the following command to clone the GitHub repository:
    git clone https://github.com/IBM-Cloud/kubernetes-node-app
    
  2. Change to the application directory:
    cd kubernetes-node-app
    

This sample application code contains all the necessary configuration files for local development and deployment to Kubernetes.

Deploy the application to the cluster with Helm 3

The container image for the application has already been built and pushed to a public registry in the IBM Cloud Container Registry. In this section, you will deploy the sample application using Helm. Helm helps you manage Kubernetes applications through Helm charts, which help you define, install, and upgrade even the most complex Kubernetes applications.

Note: If you want to build and push the application to your own container registry you can use the Docker CLI to do so. The Dockerfile is provided in the repository and images can be pushed to the IBM Cloud Container Registry or any other container registry.
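If you do build your own image, the flow looks roughly like the sketch below; <registry-namespace> is a placeholder for the registry namespace you set up earlier, and the icr.io hostname may include a region prefix depending on where your namespace lives.

```shell
# Authenticate your local Docker client with IBM Cloud Container Registry.
ibmcloud cr login

# Build the image from the Dockerfile provided in the repository,
# then push it to your own registry namespace.
docker build -t icr.io/<registry-namespace>/kubernetes-node-app:v1 .
docker push icr.io/<registry-namespace>/kubernetes-node-app:v1
```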

  1. Define an environment variable named MYAPP and set the name of the application by replacing the placeholder with your initials:

    export MYAPP=<your-initials>kubenodeapp
    
  2. Identify your cluster:

    ibmcloud ks cluster ls
    
  3. Initialize the variable with the cluster name:

    export MYCLUSTER=<CLUSTER_NAME>
    
  4. Initialize the kubectl cli environment:

    ibmcloud ks cluster config --cluster $MYCLUSTER
    

    Make sure the CLI is configured for the region and resource group where you created your cluster, using ibmcloud target -r <region> -g <resource_group>. For more information on gaining access to your cluster and configuring the CLI to run kubectl commands, check the CLI configure section.

  5. You can either use the default Kubernetes namespace or create a new namespace for this application.

    1. If you want to use the default Kubernetes namespace, run the below command to set an environment variable:
      export KUBERNETES_NAMESPACE=default
      
    2. If you want to create a new Kubernetes namespace, follow the steps mentioned under Copying an existing image pull secret and Storing the image pull secret in the Kubernetes service account for the selected namespace sections of the Kubernetes service documentation. Once completed, run the below command:
      export KUBERNETES_NAMESPACE=<KUBERNETES_NAMESPACE_NAME>
      
  6. Change to the chart directory under your sample application directory:

    cd chart/kubernetesnodeapp
    
  7. Install the Helm chart:

    helm install $MYAPP --namespace $KUBERNETES_NAMESPACE . --set image.repository=icr.io/solution-tutorials/tutorial-scalable-webapp-kubernetes
    
  8. Change back to the sample application directory:

    cd ../..
    

View the application

  1. List the Kubernetes services in the namespace:
    kubectl get services -n $KUBERNETES_NAMESPACE
    
  2. List the Kubernetes pods in the namespace:
    kubectl get pods -n $KUBERNETES_NAMESPACE
    
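Before setting up Ingress, you can sanity-check the deployment by port-forwarding directly to the service. The service name and port below are assumptions; substitute the name and port reported by kubectl get services.

```shell
# Forward local port 8080 to the service's port 80 and probe it
# (adjust the service name and port to match `kubectl get services`).
kubectl port-forward service/<service-name> 8080:80 -n $KUBERNETES_NAMESPACE &
curl -I http://localhost:8080
```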

Use the IBM-provided domain for your cluster

Clusters come with an IBM-provided domain, which gives you a convenient way to expose applications with a proper URL and on standard HTTP/S ports.

Use Ingress to set up the cluster inbound connection to the service.

Ingress

  1. Identify your IBM-provided Ingress subdomain and Ingress secret:

    ibmcloud ks cluster get --cluster $MYCLUSTER
    

    to find

    Ingress subdomain: mycluster.us-south.containers.appdomain.cloud
    Ingress secret:    mycluster
    
  2. Define environment variable INGRESS_SUBDOMAIN to hold the value of the Ingress subdomain:

    export INGRESS_SUBDOMAIN=<INGRESS_SUBDOMAIN>
    
  3. Define environment variable INGRESS_SECRET to hold the value of the Ingress secret:

    export INGRESS_SECRET=<INGRESS_SECRET>
    
  4. In the sample application directory run the below bash command to create an Ingress file ingress-ibmsubdomain.yaml pointing to the IBM-provided domain with support for HTTP and HTTPS:

    ./ingress.sh ibmsubdomain_https
    

    The file is generated from a template file ingress-ibmsubdomain-template.yaml under yaml-templates folder by replacing all the values wrapped in the placeholders ($) with the appropriate values from the environment variables.
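The ingress.sh script itself is not reproduced here, but the substitution it performs can be illustrated with a simplified, self-contained sketch. The template content below is a stand-in for this demonstration, not the real ingress-ibmsubdomain-template.yaml:

```shell
# Values that ingress.sh reads from the environment.
export MYAPP=demoapp
export INGRESS_SUBDOMAIN=mycluster.us-south.containers.appdomain.cloud

# A stand-in template; the real templates live under yaml-templates/.
# The quoted 'EOF' keeps the $PLACEHOLDER markers literal.
cat > /tmp/ingress-template.yaml <<'EOF'
host: $MYAPP.$INGRESS_SUBDOMAIN
EOF

# Replace each $VARIABLE placeholder with the environment variable's value.
sed -e "s|\$MYAPP|$MYAPP|g" \
    -e "s|\$INGRESS_SUBDOMAIN|$INGRESS_SUBDOMAIN|g" \
    /tmp/ingress-template.yaml > /tmp/ingress-demo.yaml

cat /tmp/ingress-demo.yaml
# host: demoapp.mycluster.us-south.containers.appdomain.cloud
```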

  5. Deploy the Ingress:

    kubectl apply -f ingress-ibmsubdomain.yaml
    
  6. Open your application in a browser at https://<myapp>.<ingress-subdomain>/ or run the following command to see the HTTP response headers:

    curl -I https://$MYAPP.$INGRESS_SUBDOMAIN
    

Use your own custom subdomain

This section requires you to own a custom domain. You will need to create a CNAME record pointing to the IBM-provided Ingress subdomain for the cluster. If your domain is example.com, then the CNAME record <myapp>.<example.com> will point to <myapp>.<ingress-subdomain>.
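Once the CNAME record is created, you can verify that it resolves before continuing; <myapp> and <example.com> below are placeholders for your actual application name and domain.

```shell
# Check that the CNAME record points at the IBM-provided Ingress subdomain
# (replace the placeholders with your actual names).
dig +short CNAME <myapp>.<example.com>
```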

with HTTP

  1. Create an environment variable pointing to your custom domain:
    export CUSTOM_DOMAIN=<example.com>
    
  2. Create an Ingress file ingress-customdomain-http.yaml pointing to your domain from the template file ingress-customdomain-http-template.yaml:
    ./ingress.sh customdomain_http
    
  3. Deploy the Ingress:
    kubectl apply -f ingress-customdomain-http.yaml
    
  4. Access your application at http://<myapp>.<example.com>/.

with HTTPS

If you try to access your application over HTTPS at this point (https://<myapp>.example.com/), you will likely get a security warning from your web browser telling you the connection is not private.

Now, import your certificate into the Secrets Manager instance that you configured for your cluster earlier.

  1. Access the Secrets Manager service instance from the Resource List, under Security.

  2. Click on Secrets in the left navigation.

  3. Click Add.

  4. You can select either Public certificate, Imported certificate, or Private certificate. Detailed steps are available in the respective documentation topics: Ordering SSL/TLS public certificates, Importing SSL/TLS certificates, or Creating SSL/TLS private certificates. If you selected to import a certificate, make sure to upload the certificate, private key, and intermediate certificate files.

  5. Locate the entry for the imported or ordered certificate and click on it.

    • Verify the domain name matches your $CUSTOM_DOMAIN. If you uploaded a wildcard certificate, an asterisk is included in the domain name.
    • Click the copy icon next to the certificate's ID.
    • Create an environment variable pointing to the value you just copied:
    export CERTIFICATE_ID=<certificate ID>
    
  6. In Secrets Manager, certificates that you import to the service are imported certificates (imported_cert). Certificates that you order through Secrets Manager from a third-party certificate authority are public certificates (public_cert). This information is needed in the Helm chart you will use later on to configure the Ingress. Select and run the command that matches your choice in the previous step.

    export CERTIFICATE_TYPE=imported_cert
    

    or

    export CERTIFICATE_TYPE=public_cert
    
  7. Click on Endpoints in the left navigation.

  8. Locate the Public endpoint for the Service API.

    • Create an environment variable pointing to the endpoint:
    export SECRETS_MANAGER_URL=<public endpoint>
    

To access the Secrets Manager service instance from your cluster, you will use the External Secrets Operator and configure a service ID and API key for it.

  1. Create a service ID and set it as an environment variable:
    export SERVICE_ID=`ibmcloud iam service-id-create kubernetesnodeapp-tutorial --description "A service ID for scalable-webapp-kubernetes tutorial." --output json | jq -r ".id"`; echo $SERVICE_ID
    
  2. Assign the service ID permissions to read secrets from Secrets Manager:
    ibmcloud iam service-policy-create $SERVICE_ID --roles "SecretsReader" --service-name secrets-manager
    
  3. Create an API key for your service ID:
    export IBM_CLOUD_API_KEY=`ibmcloud iam service-api-key-create kubernetesnodeapp-tutorial $SERVICE_ID --description "An API key for scalable-webapp-kubernetes tutorial." --output json | jq -r ".apikey"`
    
  4. Create a secret in your cluster for that API key:
    kubectl -n $KUBERNETES_NAMESPACE create secret generic kubernetesnodeapp-api-key --from-literal=apikey=$IBM_CLOUD_API_KEY
    
  5. Run the following commands to install the External Secrets Operator:
    helm repo add external-secrets https://charts.external-secrets.io
    
    helm install external-secrets external-secrets/external-secrets
    
  6. Create an Ingress file ingress-customdomain-https.yaml pointing to your domain from the template ingress-customdomain-https-template.yaml:
    ./ingress.sh customdomain_https
    
  7. Deploy the Ingress:
    kubectl apply -f ingress-customdomain-https.yaml
    
  8. Validate the secret was created:
    kubectl get secret kubernetesnodeapp-certificate -n $KUBERNETES_NAMESPACE
    
    If you encounter any errors running the above command, run the following command to obtain more details on the reason for the error.
     kubectl get externalsecret.external-secrets.io/kubernetesnodeapp-external-secret -n $KUBERNETES_NAMESPACE -o json
    
  9. Access your application at https://$MYAPP.$CUSTOM_DOMAIN/.
    curl -I https://$MYAPP.$CUSTOM_DOMAIN
    
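Once the Ingress and certificate secret are in place, you can also inspect the certificate actually being served for your custom domain; this is an optional check, assuming $MYAPP and $CUSTOM_DOMAIN are still set in your shell.

```shell
# Print the subject and validity dates of the certificate presented
# by the Ingress for your custom domain.
echo | openssl s_client -connect $MYAPP.$CUSTOM_DOMAIN:443 \
    -servername $MYAPP.$CUSTOM_DOMAIN 2>/dev/null \
  | openssl x509 -noout -subject -dates
```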

Monitor application health

  1. To check the health of your application, navigate to clusters to see a list of clusters and click on your cluster.
  2. Click Kubernetes Dashboard to launch the dashboard in a new tab.
  3. Click Pods on the left, then click a pod name matching $MYAPP.
    • Examine the CPU and Memory usage.
    • Note the name of the node.
    • Click View logs in the action menu in the upper right to see the standard output and error of the application.
  4. Select Nodes on the left pane, click the Name of the node noted earlier, and review the allocated resources to see the health of your nodes.
  5. To exec into the container, select Exec into in the action menu.

Scale Kubernetes pods

As load increases on your application, you can manually increase the number of pod replicas in your deployment. Replicas are managed by a ReplicaSet. To scale the application to two replicas, run the following command:

kubectl scale deployment kubernetesnodeapp-deployment --replicas=2

After a short while, you will see two pods for your application in the Kubernetes dashboard (or with kubectl get pods). The Ingress controller in the cluster will handle the load balancing between the two replicas.

With Kubernetes, you can enable horizontal pod autoscaling to automatically increase or decrease the number of instances of your apps based on CPU.

To create an autoscaler and define your policy, run the following command:

kubectl autoscale deployment kubernetesnodeapp-deployment --cpu-percent=5 --min=1 --max=5

Once the autoscaler is successfully created, you should see horizontalpodautoscaler.autoscaling/<deployment-name> autoscaled.
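To see the autoscaler react, you can generate some load against the application and watch the replica count change. A rough sketch, assuming $MYAPP and $INGRESS_SUBDOMAIN are still set from earlier steps:

```shell
# Generate continuous load so CPU usage crosses the 5% target,
# then watch the autoscaler scale the deployment (Ctrl+C to stop watching).
while true; do curl -s "https://$MYAPP.$INGRESS_SUBDOMAIN/" > /dev/null; done &
LOAD_PID=$!
kubectl get hpa kubernetesnodeapp-deployment -n $KUBERNETES_NAMESPACE --watch
kill $LOAD_PID
```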

Remove resources

  • Delete the horizontal pod autoscaler:

    kubectl delete horizontalpodautoscaler.autoscaling/kubernetesnodeapp-deployment
    
  • Delete the resources applied:

    kubectl delete -f ingress-customdomain-https.yaml
    kubectl delete -f ingress-customdomain-http.yaml
    kubectl delete -f ingress-ibmsubdomain.yaml
    
  • Delete the Kubernetes artifacts created for this application:

    helm uninstall $MYAPP --namespace $KUBERNETES_NAMESPACE
    
  • Delete the Kubernetes secret:

    kubectl -n $KUBERNETES_NAMESPACE delete secret kubernetesnodeapp-api-key 
    
  • Delete the External Secrets Operator:

    helm uninstall external-secrets
    
  • Delete the service ID:

    ibmcloud iam service-id-delete $SERVICE_ID
    
  • Delete the cluster.

Related content