IBM Cloud Docs
Scale workloads in shared and dedicated VPC environments


This tutorial may incur costs. Use the Cost Estimator to generate a cost estimate based on your projected usage.

This tutorial walks you through the steps of setting up isolated workloads in a shared (multi-tenant) environment and a dedicated (single-tenant) environment. Provision an IBM Cloud® Virtual Private Cloud (VPC) with subnets spanning multiple availability zones (AZs) and virtual server instances (VSIs) that can scale according to your requirements to ensure the high availability of your application. Furthermore, configure load balancers to provide high availability between zones within one region. Configure virtual private endpoints (VPE) for your VPC to provide private routes to IBM Cloud services.

Isolate workloads by provisioning a dedicated host, attaching an encrypted data volume to a VSI, expanding the attached data volume, and resizing the VSI after the fact.

You will provision all of these services and VPC resources using IBM Cloud Schematics, which provides Terraform-as-a-Service capabilities. The Terraform template defines the IBM Cloud resources to be created, updated, or deleted.

Objectives

  • Learn how to set up a multi-zone VPC with instance autoscaling.
  • Understand the concepts of public and private load balancing.
  • Learn how to scale instances dynamically or periodically.
  • Learn the use of dedicated hosts.

Architecture
Figure 1. Architecture diagram of the tutorial

  1. The frontend app deployed on VSI(s) communicates to the backend app via the private load balancer.
  2. The backend app securely communicates with the cloud services via a virtual private endpoint (VPE).
  3. As the load on the application increases, scaling for VPC is enabled and dynamically adds or removes VSIs based on metrics like CPU, RAM, etc., or through scheduled scaling.
  4. As the scope expands, a dedicated host isolates the workload and handles heavy computation on the data. Resize the instance on the dedicated host by updating the profile based on your requirements. Also, expand the block storage volume capacity.
  5. All instances communicate with IBM Cloud services over the private backbone using a virtual private endpoint (VPE). See the About virtual private endpoint gateways topic for more details.

Before you begin

The tutorial requires:

Provision required cloud services

In this section, you will create the following cloud services required for the application using IBM Cloud Schematics: IBM Cloud Databases for PostgreSQL and IBM Cloud Object Storage.

  1. Navigate to Schematics Workspaces and click on Create workspace.
    1. Under the Specify Template section, provide https://github.com/IBM-Cloud/vpc-scaling-dedicated-host under GitHub or GitLab repository URL.
    2. Select terraform_v1.5 as the Terraform version and click Next.
  2. Under Workspace details,
    1. Provide a workspace name: vpc-scaling-workspace.
    2. Choose a Resource Group and a Location.
    3. Click on Next.
  3. Verify the details and then click on Create.
  4. Under Variables, set step1_create_services to true by clicking the action menu in the row > Edit, uncheck Use default, choose true from the Override Value dropdown, and click on Save.
  5. Set any additional variables you would like to override; the most typical ones are region and resource_group_name.
  6. Scroll to the top of the page and click Generate plan. This is equivalent to running the terraform plan command.
  7. Click on Show more to check the resources to be provisioned.
  8. Navigate to the workspace page using the breadcrumb menu and click on Apply plan. Check the logs to see the status of the services created.

Navigate to the resource list. Here, you can filter by the basename used to create the resources, i.e., vpc-scaling, and you will see the cloud services required for this tutorial provisioned in the resource group you specified. All the data stored with these services is encrypted with a key generated and stored in IBM Key Protect for IBM Cloud.
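If you prefer the command line, you can get a similar filtered view from Cloud Shell. This is a sketch, assuming the default basename vpc-scaling and your own resource group name:

```shell
# Search resource instances whose names start with the basename
# (assumes the default basename "vpc-scaling"; adjust if you changed it).
ibmcloud resource search "name:vpc-scaling*"

# Alternatively, list service instances in the resource group you chose
# (replace <resource-group> with your resource group name).
ibmcloud resource service-instances -g <resource-group>
```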

Enable logging and monitoring

You can have multiple IBM Cloud Log Analysis instances in a region. However, only one instance can be configured to receive platform logs from enabled cloud services in that IBM Cloud region. Similarly, you should configure one instance of the IBM Cloud Monitoring service per region to collect platform metrics.

  1. Navigate to the Observability page and under Logging/Monitoring, look for any existing IBM Cloud Log Analysis and/or IBM Cloud Monitoring services with Platform logs and/or Platform metrics enabled. If you do not have one, you can use the steps below to create and/or enable one.

  2. To create a new IBM Cloud Log Analysis and/or IBM Cloud Monitoring service(s), navigate to the Settings tab of your Schematics workspace, update step1_create_logging variable to true and Save the setting. Repeat the same with the step1_create_monitoring variable.

  3. To configure platform logs, navigate to the Observability page and click on Logging in the navigation pane.

    1. Click on Options > Edit platform and select a region in which you plan to provision the VPC resources.
    2. Select the log analysis service instance from the dropdown menu and click Select.
  4. To configure platform metrics, repeat the above step by clicking Monitoring in the navigation pane.

    For more information, see Configuring IBM Cloud platform logs and Enabling platform metrics.

Set up a multizone Virtual Private Cloud

In this section, you will:

  • Provision an IBM Cloud® Virtual Private Cloud (VPC) with subnets spanning across two availability zones (in short: zones). To ensure the high availability of your frontend app and backend app, you will create multiple VSIs across these zones.
  • Configure a public load balancer for your frontend and a private load balancer for your backend app to provide high availability between zones.
  • Create an instance template used to provision instances in your instance group.

Initially, you may not deploy all of the infrastructure resources needed to scale, even if you designed the architecture with scaling in mind. You may start with only one or a few instances, as shown below.

Deploy one VSI

As the load increases, you may need more instances to serve the traffic. You may configure a public load balancer for the frontend app and a private load balancer for the backend app to equally distribute incoming requests across instances. With a load balancer, you can configure specific health checks for the pool members associated with instances.

Deploy multiple VSIs

An instance template is required before you can create an instance group for auto scaling. The instance template defines the details of the virtual server instances that are created for your instance group: for example, the profile (vCPU and memory), image, attached volumes, and network interfaces. Additionally, user data is specified to automatically run the initialization scripts required for the frontend and backend applications respectively. All of the VSIs that are created for an instance group use the instance template defined in the instance group. The script provisions an instance template and an instance group (one for the frontend and one for the backend) with no auto scaling policies defined yet. This example does not require data volumes, so they are commented out in the ibm_is_instance_group resource in modules/create_vpc/autoscale/main.tf.

VPC uses cloud-init technology to configure virtual server instances. The user data field on the new virtual server for VPC page allows users to put in custom configuration options by using cloud-init.
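The tutorial's template supplies its own initialization scripts; purely as an illustration of the user data mechanism (not the actual script used by this template), a minimal first-boot script for an Ubuntu-based image might look like:

```shell
#!/bin/bash
# Hypothetical user data script: cloud-init runs this once on first boot.
# Install and start Nginx so the instance can serve HTTP traffic.
apt-get update -y
apt-get install -y nginx
systemctl enable --now nginx
```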

Use an instance group

Provision the resources

If you want to access the VSIs directly later, you can optionally create an SSH key and set ssh_keyname to the name of the VPC SSH Key.
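One way to create and upload such a key from Cloud Shell or a local terminal is sketched below; the key name vpc-scaling-key is an example, and it is the value you would then set for ssh_keyname:

```shell
# Generate a key pair locally if you don't already have one
# (no passphrase here for brevity; consider using one in practice).
ssh-keygen -t rsa -b 4096 -f ~/.ssh/vpc_key -N ""

# Upload the public key to VPC under an example name.
ibmcloud is key-create vpc-scaling-key @~/.ssh/vpc_key.pub
```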

  1. Go to the Settings tab of your Schematics workspace, click the action menu for step2_create_vpc, uncheck Use default, change the override value to true and Save the setting.

  2. Click on Apply plan to provision the VPC resources.

    There are multiple Terraform modules involved in provisioning the VPC resources. To understand better, check the main.tf file.

  3. Follow the status logs by clicking on Show more. After the apply is successful, you should see the following resources provisioned:

    • a VPC

    • two subnets (one in each zone)

    • a public load balancer with a security group driving traffic to the frontend application

    • a private load balancer with a security group driving requests from frontend to the backend

    • an instance template and an instance group for provisioning and scaling the instances

    • Initially, two VSIs (one frontend instance and one backend instance) with respective security groups attached

      The frontend instance runs an Nginx server to serve a PHP web application that talks to the backend to store and retrieve data. The backend instance runs a Node.js application with GraphQL API wrapper for IBM Cloud Databases for PostgreSQL and IBM Cloud Object Storage.

  4. Copy the public load balancer hostname from the log output and paste the hostname in a browser by prefixing http:// to see the frontend application. As shown in the diagram below, enter the balance, e.g., 10, and click Submit to see the details of the VSIs serving the request.

    View application

    To check the provisioned VPC resources, you can either use the VPC UI or Cloud Shell with ibmcloud is commands.
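For example, from Cloud Shell, the checks could look like the following sketch (assuming the resources were provisioned in us-south):

```shell
# Target the region where the VPC resources were provisioned.
ibmcloud target -r us-south

# List the VPC, subnets, load balancers, and instances created so far.
ibmcloud is vpcs
ibmcloud is subnets
ibmcloud is load-balancers
ibmcloud is instances
```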

In the next section, you will choose a scaling method (static or dynamic) and create scaling policies.

Increase load on your instances to check scaling

In this section, you will start scaling the instances with the scaling method initially set to static. Then, you move to scaling the instances with dynamic scaling by setting up an instance manager and an instance group manager policy. Based on the target utilization metrics that you define, the instance group can dynamically add or remove instances to achieve your specified instance availability.

Manual scaling

  1. To check the static scaling method, navigate to the Settings tab of your Schematics workspace to see that the step3_is_dynamic variable is set to false.
  2. Update the step3_instance_count variable to 2 and Save the setting.
  3. Apply the plan to see the additional two instances (one frontend VSI and one backend VSI) provisioned.
  4. Under Memberships tab of your frontend instance group, you should now see 2 instances.
  5. Navigate to the browser showing the frontend app and either click on the Refresh button or submit a new balance multiple times to see the details of the frontend VSI and backend VSI serving the request. You should see two of the four VSIs serving your request.
  6. Before moving to the next step, update the step3_instance_count variable from 2 to 1 and Save the setting.

You can check the logs and monitor your load balancers later in the tutorial.

Automatic scaling

  1. To switch to the dynamic scaling method, set the step3_is_dynamic variable to true, Save the setting and Apply the plan. This setting adds an instance group manager and an instance group manager policy to the existing instance group, switching the instance group scaling method from static to dynamic.

    Scale instances

  2. To check the autoscaling capabilities, you can use a load generator against your application. The following shell script simulates a basic load of 90,000 requests, with up to 300 running in parallel.

    1. Open a local terminal.

    2. Create a shell variable for the public load balancer URL from the above step with /v1/controller/balance.php appended.

      export APPURL=http://<load-balancer>/v1/controller/balance.php
      
    3. Run the following script to generate some load. You can repeat it to create more traffic.

      seq 1 90000 | xargs -n1 -P300  curl -s  $APPURL -o /dev/null
      
  3. Under Memberships tab of your instance group, you should see new instances being provisioned.

    You should see up to 5 instances taking the load as the maximum membership count is set to 5. You can check the minimum and maximum instance group size under Overview tab of the instance group.

  4. Navigate to the browser showing the frontend app and submit balance multiple times to see the details of the frontend VSI and backend VSI serving the request.

    Wait for the instances to scale, as the aggregate period is set to 90 seconds and the cooldown period to 120 seconds.

  5. Wait for the instances to scale to 1 before moving to the next step.
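You can also watch the scaling activity from the CLI; a sketch (the group names depend on the basename, vpc-scaling by default):

```shell
# List instance groups to find the ID of the frontend group.
ibmcloud is instance-groups

# List the group's memberships; repeat this as the load generator runs
# to watch instances being added and removed.
# Replace <instance-group-id> with an ID from the previous command.
ibmcloud is instance-group-memberships <instance-group-id>
```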

Scheduled actions (Optional)

In this section, you will use scheduled scaling for VPC to schedule actions that automatically add or remove instance group capacity, based on daily, intermittent, or seasonal demand. You can create multiple scheduled actions that scale capacity monthly, weekly, daily, hourly, or even every set number of minutes. This section is optional and not required to complete the remainder of this tutorial.

  1. To create a one-time scheduled action, set the step3_is_scheduled variable to true, Save the setting and Apply the plan.
  2. Check the status of your scheduled action under the scheduled actions tab of the instance group. The Terraform template will schedule the actions for 5 minutes from the time you apply the plan. When the status of the action is changed to completed, the instance group size will be set to a minimum of 2 and a maximum of 5 instances. You should see 2 instances under the Memberships tab of the instance group.
  3. Click on Generate load a couple of times to generate more traffic to see the instances scale to a maximum of 5.

Monitoring Load Balancer for VPC metrics

Load balancers calculate metrics that reflect different types of usage and traffic and send them to your monitoring instance. You can visualize and analyze the metrics from the IBM Cloud Monitoring dashboard.

  1. You can monitor your load balancers from the Load balancers for VPC page by
    1. Clicking on the name of the load balancer.
    2. Clicking on Launch monitoring under the Monitoring preview tile of the load balancer.
  2. Alternatively, you can monitor the load balancers by navigating to the Observability page and clicking Monitoring on the left pane
    1. Click on Open dashboard next to the instance marked as Platform metrics.
    2. Click on Dashboards on the left sidebar to open the IBM Load Balancer for VPC Monitoring Metrics dashboard.
    3. Under Dashboard templates, expand IBM > Load Balancer for VPC Monitoring Metrics. The default dashboard is not editable.
  3. Remember to generate load against your application.

Check the logs

VPC services generate platform logs in the same region where they are available. You can view, monitor, and manage VPC logs through the IBM Cloud Log Analysis instance that is marked as platform logs in the region.

Platform logs are logs that are exposed by logging-enabled services and the platform in IBM Cloud. For more information, see Configuring IBM Cloud platform logs.

  1. Navigate to the Observability page and click Logging on the left pane.
  2. Click on Open dashboard next to the instance marked as Platform logs.
  3. Under Apps from the top menu, check the load balancer CRN for which you want to see the logs and click Apply.
  4. Alternatively, you can check the logs of a load balancer from the Load balancers for VPC page by
    1. Clicking on the load balancer name for which you want to check the logs.
    2. Enabling Data logging under the Overview tab of the load balancer and then clicking on Launch logging.
    3. Remember to generate load against your application to see the logs.

For checking the logs of other VPC resources, refer to VPC logging.

Set up a dedicated host and provision a VSI with an encrypted data volume

Provisioning dedicated hosts will incur costs. Use the Cost Estimator to generate a cost estimate based on your projected usage.

In this section, you will create a dedicated host in a group and provision an instance with an encrypted data volume.

The reason you create a dedicated host is to carve out a single-tenant compute node, free from users outside of your organization. Within that dedicated space, you can create virtual server instances according to your needs. Additionally, you can create dedicated host groups that contain dedicated hosts for a specific purpose. Because a dedicated host is a single-tenant space, only users within your account that have the required permissions can create instances on the host.

  1. Navigate to the Settings tab of your Schematics workspace, update the step4_create_dedicated variable to true and Save the setting.

  2. Click on Apply plan to provision the following resources:

    • a dedicated host group
    • a dedicated host
    • a VSI with an encrypted data volume (encrypted using IBM Key Protect for IBM Cloud) and a security group attached.

    Add a dedicated host

  3. From the log output, copy the instance IP address and launch Cloud Shell to run the command below, replacing the placeholder <IP_ADDRESS> with the instance IP address:

    export INSTANCE_IP=<IP_ADDRESS>
    

    Typically, you won't set a public IP (floating IP) for an instance. In this case, a floating IP is set to allow curl requests to the app deployed on the instance.

  4. Issue the following curl command to query the database. The application running on the instance will read content from the Databases for PostgreSQL over the private endpoint. The data is the same that is available from the frontend application.

    curl \
    -s -X POST \
    -H "Content-Type: application/json" \
    --data '{ "query": "query read_database { read_database { id balance transactiontime } }" }' \
    http://$INSTANCE_IP/api/bank
    
  5. Issue the following curl command to query the COS bucket. The application running on the instance will read content from the Object Storage and return the results in JSON format. The data stored in COS is only available to the application running from the instance on the dedicated host.

    curl \
    -s -X POST \
    -H "Content-Type: application/json" \
    --data '{ "query": "query read_items { read_items { key size modified } }" }' \
    http://$INSTANCE_IP/api/bank
    
  6. Issue the following curl command to query the database and COS bucket at once. The application running on the instance will read content from the Databases for PostgreSQL and Object Storage and return the results in JSON format.

    curl \
    -s -X POST \
    -H "Content-Type: application/json" \
    --data '{ "query": "query read_database_and_items { read_database { id balance transactiontime } read_items { key size modified } }" }' \
    http://$INSTANCE_IP/api/bank
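To verify the dedicated host resources from the CLI, a sketch:

```shell
# List dedicated host groups and dedicated hosts in the targeted region.
ibmcloud is dedicated-host-groups
ibmcloud is dedicated-hosts

# Show one host in detail, including the instances placed on it.
# Replace <dedicated-host-id> with an ID from the previous command.
ibmcloud is dedicated-host <dedicated-host-id>
```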
    

Resize the VSI and expand the attached block storage volume on the dedicated host

If you observed the profile of the instance provisioned on the dedicated host, it is set to cx2-2x4, where c stands for the Compute family and 2x4 indicates 2 vCPUs and 4 GiB of RAM. In this section, you will resize the instance by updating the profile to cx2-8x16, with 8 vCPUs and 16 GiB of RAM.

You will also expand the block storage volume attached to the VSI from 100 GB to 250 GB. To understand the maximum capacity for the selected volume profile, check expanding block storage volume capacity.

Resize the VSI

  1. To resize the VSI, navigate to the Settings tab of your Schematics workspace, update step5_resize_dedicated_instance variable to true and Save the setting.

    Virtual servers can only be resized to profiles supported by the dedicated host the instance is hosted on. For example, a virtual server provisioned with a profile from the Compute family can be resized to other profiles also belonging to the Compute family. For more information on profiles, see Instance Profiles.

  2. Apply the plan to resize the instance from 2 VCPUs | 4 GiB RAM to 8 VCPUs | 16 GiB RAM.

  3. You can check the profile of the instance by launching Cloud Shell, changing the region to the one where you provisioned your VPC with ibmcloud target -r us-south command and then running ibmcloud is instances command or from Virtual server instances for VPC UI by clicking on the dedicated instance name.
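The CLI checks described above can be sketched as follows (assuming us-south):

```shell
# Target the region where you provisioned your VPC.
ibmcloud target -r us-south

# The profile column for the dedicated instance should now read cx2-8x16.
ibmcloud is instances
```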

Expand block storage volume capacity

  1. To expand the capacity of the attached block storage volume, navigate to the Settings tab of your Schematics workspace, update step5_resize_dedicated_instance_volume variable to true and Save the setting.
  2. Apply the plan to increase the block storage volume capacity from 100 GB to 250 GB.
  3. You can check the size of the Data volume from Virtual server instances for VPC UI by clicking on the dedicated instance name.

What's next?

Extend the scenario by configuring SSL termination, sticky sessions, and end-to-end encryption. For more information, refer to this blog post.

Remove resources

To remove the Schematics workspace and its resources, follow these steps:

  1. Navigate to Schematics workspaces and select your workspace.
  2. Click on the Actions... drop down and click Destroy resources to clean up all the resources that were provisioned via Schematics.
  3. Click on the Actions... drop down and click Delete workspace to delete the workspace.

Depending on the resource, it might not be deleted immediately but retained (by default, for 7 days). You can reclaim the resource by deleting it permanently or restore it within the retention period. See this document on how to use resource reclamation.