Centralize communication through a VPC Transit Hub and Spoke architecture - Part two

This tutorial may incur costs. Use the Cost Estimator to generate a cost estimate based on your projected usage.

A Virtual Private Cloud (VPC) provides network isolation and security in the IBM Cloud. A VPC can be a building block that encapsulates a corporate division (marketing, development, accounting, ...) or a collection of microservices owned by a DevSecOps team. VPCs can be connected to an on-premises enterprise and each other. This may create the need to route traffic through centralized firewall-gateway appliances. This tutorial will walk through the implementation of a hub and spoke architecture depicted in this high-level view:

Figure 1. Architecture diagram of the tutorial

This is part two of a two-part tutorial. This part focuses on routing all traffic between VPCs through a transit hub firewall-router. A scalable firewall-router using a Network Load Balancer is discussed and implemented. Private DNS is used both for microservice identification and for IBM Cloud service instance identification through a Virtual Private Endpoint (VPE) gateway.

This tutorial is standalone, so it is not required to execute the steps in part one. If you are not familiar with VPC, network IP layout and planning in the IBM Cloud, Transit Gateway, IBM Cloud® Direct Link, or asymmetric routing, consider reading through part one.

The hub and spoke model supports a number of different scenarios:

  • The hub can be the repository for shared microservices used by the spokes and the enterprise.
  • The hub can be a central point for firewalling and routing traffic between the enterprise and the cloud.
  • The hub can monitor all or some of the traffic - spoke <-> spoke, spoke <-> transit, or spoke <-> enterprise.
  • The hub can hold the VPN resources that are shared by the spokes.
  • The hub can be the repository for shared cloud resources, like databases, that are accessed through virtual private endpoint (VPE) gateways, controlled with VPC security groups and subnet access control lists, and shared by the spokes and the enterprise.

There is a companion GitHub repository that divides the connectivity into a number of incremental layers. In the tutorial, thin layers enable the introduction of bite-size challenges and solutions.

The following will be explored:

  • VPC egress and ingress routing.
  • Virtual Network Functions in combination with a Network Load Balancer to support high availability and scalability.
  • VPE gateways.
  • DNS resolution.

A layered architecture will introduce resources and demonstrate connectivity. Each layer will add additional connectivity and resources. The layers are implemented in Terraform. It will be possible to change parameters, like the number of zones, by changing a Terraform variable. A layered approach allows the tutorial to introduce small problems and demonstrate a solution in the context of a complete architecture.

Objectives

  • Understand the concepts behind a VPC based hub and spoke model for managing all VPC to VPC traffic.
  • Understand VPC ingress and egress routing.
  • Identify and optionally resolve asymmetric routing issues.
  • Understand the use of a Network Load Balancer for a highly available and scalable firewall-router.
  • Utilize the DNS service routing and forwarding rules to build an architecturally sound name resolution system.

Before you begin

This tutorial requires:

  • terraform to use Infrastructure as Code to provision resources.
  • python to optionally run the pytest commands.
  • Implementing a firewall-router requires that you enable IP spoofing checks.

See the prerequisites for a few options including a Dockerfile to easily create the prerequisite environment.

In addition:

Summary of Part one

In part one of this tutorial we carefully planned the address space of the transit and spoke VPCs. The zone-based architecture is shown below:

Zones

This diagram shows the traffic flow. Only enterprise <-> spoke traffic passes through the firewall:

Traffic flow

This was achieved with Direct Link, Transit Gateway and VPC routing. All zones are configured similarly and the diagram below shows the details of zone 1:

VPC Layout

The zone CIDRs 10.1.0.0/16, 10.2.0.0/16 and 10.3.0.0/16 cover the transit and the spokes and are passed through Direct Link to the enterprise as advertised routes. Similarly, the CIDR 192.168.0.0/16 covers the enterprise and is passed through the Transit Gateway to the spokes as an advertised route.

Egress routes in the spokes route traffic to the firewall-router. Ingress routes in the transit route enterprise <-> spoke traffic through the firewall-router.

Provision initial VPC resources routing all VPC to VPC traffic through the firewall-router

Often an enterprise uses a transit VPC to monitor traffic with the firewall-router. In part one, only enterprise <-> spoke traffic was flowing through the transit firewall-router. This section is about routing all VPC to VPC traffic through the firewall-router.

This diagram shows the traffic flow implemented in this step:

Traffic flow

All traffic between VPCs will flow through the firewall-router:

  • enterprise <-> spoke.
  • enterprise <-> transit.
  • transit <-> spoke.
  • spoke <-> spoke in different VPC.

Traffic within a VPC will not flow through the firewall.

If continuing from part one, make special note of the configuration in terraform.tfvars: all_firewall = true.

Apply Layers

  1. The companion GitHub Repository has the source files to implement the architecture. In a desktop shell clone the repository:

    git clone https://github.com/IBM-Cloud/vpc-transit
    cd vpc-transit
    
  2. The config_tf directory contains the configuration variables that you must set.

    cp config_tf/template.terraform.tfvars config_tf/terraform.tfvars
    
  3. Edit config_tf/terraform.tfvars.

    • Make the required changes.
    • Change the value all_firewall = true.
  4. If you don't already have one, obtain a Platform API key and export the API key for use by Terraform:

    export IBMCLOUD_API_KEY=YourAPIKey
    
  5. Since it is important that each layer is installed in the correct order, and some steps in this tutorial install multiple layers, a shell command, ./apply.sh, is provided. The following will display help:

    ./apply.sh
    
  6. You could apply all of the layers configured by executing ./apply.sh : :. The colons are shorthand for first (or config_tf) and last (vpe_dns_forwarding_rules_tf). The -p prints the layers:

    ./apply.sh -p : :
    
  7. Apply all of the layers from part one described above (even if continuing from part one, use this command to re-apply the initial layers with the configuration change all_firewall = true).

    ./apply.sh : spokes_egress_tf
    

If you were following along in part one, some additional ingress routes were added to the transit ingress route table to avoid routing through the firewall-router. In this step those routes are removed, and the transit ingress route table has just the entries below, so that all incoming traffic for a zone is routed to the firewall-router in the same zone. Your Next hop addresses may be different, but each will be the IP address of the firewall-router instance in that zone. A Terraform sketch of one of these routes follows the table:

Zone       Destination    Next hop
Dallas 1   10.1.0.0/16    10.1.15.196
Dallas 2   10.2.0.0/16    10.2.15.196
Dallas 3   10.3.0.0/16    10.3.15.196
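
For reference, a route like the Dallas 1 entry could be expressed with the IBM Cloud Terraform provider roughly as shown below. This is a minimal sketch that assumes hypothetical resource names (ibm_is_vpc.transit and the tgw_ingress routing table); the companion repository organizes the equivalent code differently across its layers.

    # Sketch: transit VPC ingress routing table that accepts routes for traffic
    # arriving from Transit Gateway and Direct Link.
    resource "ibm_is_vpc_routing_table" "tgw_ingress" {
      vpc                           = ibm_is_vpc.transit.id   # hypothetical transit VPC resource
      name                          = "tgw-ingress"
      route_transit_gateway_ingress = true
      route_direct_link_ingress     = true
    }

    # Sketch: zone 1 entry - traffic arriving for 10.1.0.0/16 is delivered to
    # the zone 1 firewall-router. Zones 2 and 3 follow the same pattern.
    resource "ibm_is_vpc_routing_table_route" "ingress_zone1" {
      vpc           = ibm_is_vpc.transit.id
      routing_table = ibm_is_vpc_routing_table.tgw_ingress.routing_table
      zone          = "us-south-1"
      name          = "to-firewall-z1"
      destination   = "10.1.0.0/16"
      action        = "deliver"
      next_hop      = "10.1.15.196"                           # zone 1 firewall-router IP
    }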

To observe this:

  1. Open the VPCs in the IBM Cloud.
  2. Select the transit VPC and notice the Address prefixes displayed.
  3. Click Manage routing tables.
  4. Click on the tgw-ingress transit gateway ingress route table.

Route Spoke and Transit to the firewall-router

Routing all cloud traffic originating at the spokes through the transit VPC firewall-router in the same zone as the originating instance is accomplished by these routes in the spoke's default egress routing table (shown for Dallas/us-south):

Zone       Destination    Next hop
Dallas 1   10.0.0.0/8     10.1.15.196
Dallas 2   10.0.0.0/8     10.2.15.196
Dallas 3   10.0.0.0/8     10.3.15.196

Similarly, in the transit VPC, route all enterprise and cloud traffic through the firewall-router in the same zone as the originating instance. For example, a transit test instance 10.1.15.4 (transit zone 1) attempting to connect with 10.2.0.4 (spoke 0, zone 2) will be sent through the firewall-router in zone 1: 10.1.15.196.

Routes in the transit VPC's default egress routing table (shown for Dallas/us-south); a Terraform sketch of these routes follows the table:

Zone       Destination       Next hop
Dallas 1   10.0.0.0/8        10.1.15.196
Dallas 2   10.0.0.0/8        10.2.15.196
Dallas 3   10.0.0.0/8        10.3.15.196
Dallas 1   192.168.0.0/16    10.1.15.196
Dallas 2   192.168.0.0/16    10.2.15.196
Dallas 3   192.168.0.0/16    10.3.15.196
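
The spoke egress routes shown earlier follow the same pattern, with only the 10.0.0.0/8 destination. As a rough sketch, assuming hypothetical resource names, the zone 1 routes in the transit VPC default routing table might look like this; zones 2 and 3 repeat the pattern with their own firewall-router IPs:

    # Sketch: zone 1 egress routes in the transit VPC default routing table.
    # Cloud (10.0.0.0/8) and enterprise (192.168.0.0/16) traffic originating
    # in zone 1 is delivered to the zone 1 firewall-router.
    resource "ibm_is_vpc_routing_table_route" "transit_egress_cloud_z1" {
      vpc           = ibm_is_vpc.transit.id                    # hypothetical transit VPC resource
      routing_table = ibm_is_vpc.transit.default_routing_table # default egress routing table
      zone          = "us-south-1"
      name          = "egress-cloud-z1"
      destination   = "10.0.0.0/8"
      action        = "deliver"
      next_hop      = "10.1.15.196"
    }

    resource "ibm_is_vpc_routing_table_route" "transit_egress_enterprise_z1" {
      vpc           = ibm_is_vpc.transit.id
      routing_table = ibm_is_vpc.transit.default_routing_table
      zone          = "us-south-1"
      name          = "egress-enterprise-z1"
      destination   = "192.168.0.0/16"
      action        = "deliver"
      next_hop      = "10.1.15.196"
    }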

Do not route intra-VPC traffic to the firewall-router

In this example, intra-VPC traffic will not pass through the firewall-router. For example, resources in spoke 0 can connect directly to other resources in spoke 0. To accomplish this, additional, more specific routes are added to delegate internal traffic to the default VPC routing. For example, in spoke 0, which has the CIDR ranges 10.1.0.0/24, 10.2.0.0/24, and 10.3.0.0/24, the internal routes can be delegated.

Routes in spoke 0's default egress routing table (shown for Dallas/us-south):

Zone       Destination    Next hop
Dallas 1   10.1.0.0/24    delegate
Dallas 1   10.2.0.0/24    delegate
Dallas 1   10.3.0.0/24    delegate
Dallas 2   10.1.0.0/24    delegate
Dallas 2   10.2.0.0/24    delegate
Dallas 2   10.3.0.0/24    delegate
Dallas 3   10.1.0.0/24    delegate
Dallas 3   10.2.0.0/24    delegate
Dallas 3   10.3.0.0/24    delegate

Similar routes are added to the transit and other spokes.
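
A delegate route for spoke 0 might look roughly like the following sketch (hypothetical resource names; one such route is needed per zone per spoke CIDR):

    # Sketch: delegate spoke 0 internal traffic back to the default (system)
    # routing so it does not traverse the firewall-router. Only the zone 1
    # route for 10.1.0.0/24 is shown; the other zone/CIDR pairs are analogous.
    resource "ibm_is_vpc_routing_table_route" "spoke0_delegate_z1" {
      vpc           = ibm_is_vpc.spoke0.id                     # hypothetical spoke 0 VPC resource
      routing_table = ibm_is_vpc.spoke0.default_routing_table
      zone          = "us-south-1"
      name          = "delegate-spoke0-z1"
      destination   = "10.1.0.0/24"
      action        = "delegate"
      next_hop      = "0.0.0.0"   # ignored for delegate routes, but the provider requires a value
    }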

Firewall Subnets

What about the firewall-router itself? In anticipation of this change, an egress_delegate routing table was created in the transit VPC that delegates routing to the default behavior for all destinations. It is associated only with the firewall-router subnets, so the firewall-router is not affected by the changes to the default egress routing table used by the other subnets. Check the routing tables for the transit VPC for more details: visit the VPCs in the IBM Cloud console, select the transit VPC, click Manage routing tables, click the egress-delegate routing table, click the Subnets tab, and note the -fw subnets used for the firewall-routers.
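
A rough sketch of that arrangement, with hypothetical names and an assumed firewall subnet CIDR, could look like this:

    # Sketch: a separate egress routing table for the firewall-router subnets
    # that delegates every destination to the default routing behavior.
    resource "ibm_is_vpc_routing_table" "egress_delegate" {
      vpc  = ibm_is_vpc.transit.id
      name = "egress-delegate"
    }

    resource "ibm_is_vpc_routing_table_route" "egress_delegate_all_z1" {
      vpc           = ibm_is_vpc.transit.id
      routing_table = ibm_is_vpc_routing_table.egress_delegate.routing_table
      zone          = "us-south-1"
      name          = "delegate-all-z1"
      destination   = "0.0.0.0/0"
      action        = "delegate"
      next_hop      = "0.0.0.0"
    }

    # Only the firewall (-fw) subnets are attached to this routing table.
    resource "ibm_is_subnet" "transit_fw_z1" {
      vpc             = ibm_is_vpc.transit.id
      name            = "transit-fw-z1"
      zone            = "us-south-1"
      ipv4_cidr_block = "10.1.15.192/26"                       # assumed firewall subnet CIDR
      routing_table   = ibm_is_vpc_routing_table.egress_delegate.routing_table
    }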

Apply and Test More Firewall

  1. Apply the layer:

    ./apply.sh all_firewall_tf
    
  2. Run the test suite.

    Your expected results: the cross-zone transit <-> spoke and spoke <-> spoke tests will be FAILED:

    pytest -m "curl and lz1 and (rz1 or rz2)"
    

Fix cross zone routing

As mentioned earlier, for a system to be resilient across zonal failures it is best to eliminate cross-zone traffic. If cross-zone support is required, additional egress routes can be added. The problem for spoke 0 to spoke 1 traffic is shown in this diagram:

Fixing cross zone routing

The green path is an example of the originator, spoke 0 zone 2 (10.2.0.4), routing to spoke 1 zone 1 (10.1.1.4). The matching egress route is:

Zone       Destination    Next hop
Dallas 2   10.0.0.0/8     10.2.15.196

Moving left to right, the firewall-router in the middle zone of the diagram (zone 2) is selected. On the return path, the firewall-router in zone 1 is selected, so the two directions of the flow pass through different firewall-routers (asymmetric routing).

To fix this, a few more specific routes need to be added to force the higher-numbered zones to route to the lower-numbered zone's firewall-router when a lower-numbered zone destination is specified. When referencing an equal or higher-numbered zone, continue to route to the firewall-router in the same zone.

Cross zone routing enabled

Routes in each spoke's default egress routing table (shown for Dallas/us-south):

Zone       Destination    Next hop
Dallas 2   10.1.0.0/16    10.1.15.196
Dallas 3   10.1.0.0/16    10.1.15.196
Dallas 3   10.2.0.0/16    10.2.15.196

These routes also correct a similar transit <-> spoke cross-zone asymmetric routing problem. Consider transit worker 10.1.15.4 -> spoke worker 10.2.0.4. Traffic from the transit worker in zone 1 chooses the firewall-router in zone 1 (same zone). On the return trip, instead of the firewall-router in zone 2 (same zone), the firewall-router in zone 1 is now used. A Terraform sketch of one of these cross-zone routes is shown below.
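
A minimal sketch of the Dallas 2 route to zone 1 from the table above, assuming hypothetical resource names:

    # Sketch: a more specific cross-zone route in spoke 0. Traffic originating
    # in zone 2 destined for zone 1 (10.1.0.0/16) is sent to the zone 1
    # firewall-router, so both directions of the flow use the same firewall.
    resource "ibm_is_vpc_routing_table_route" "spoke0_z2_to_z1_firewall" {
      vpc           = ibm_is_vpc.spoke0.id
      routing_table = ibm_is_vpc.spoke0.default_routing_table
      zone          = "us-south-2"
      name          = "cross-zone-z2-to-z1"
      destination   = "10.1.0.0/16"
      action        = "deliver"
      next_hop      = "10.1.15.196"                            # zone 1 firewall-router
    }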

  1. Apply the all_firewall_asym layer:

    ./apply.sh all_firewall_asym_tf
    
  2. Run the test suite.

    Your expected results: all tests PASSED. Run them in parallel (-n 10):

    pytest -n 10 -m curl
    

All traffic between VPCs is now routed through the firewall-routers.

High Performance High Availability (HA) Firewall-Router

To prevent a firewall-router from becoming a performance bottleneck or a single point of failure, it is possible to add a VPC Network Load Balancer that distributes traffic across multiple zonal firewall-routers, creating a highly available (HA) firewall-router. Check your firewall-router documentation to verify that it supports this architecture.

High Availability Firewall

This diagram shows a single zone with a Network Load Balancer (NLB) configured in route mode fronting two firewall-routers. To see this constructed, change the configuration and apply it again; a Terraform sketch of the NLB is shown below.
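
A minimal sketch of a route mode NLB for zone 1, with hypothetical names, subnet reference, and health check settings (the firewall-router instances would be added as pool members):

    # Sketch: a private NLB in route mode fronting the zone 1 firewall-routers.
    resource "ibm_is_lb" "fw_z1" {
      name       = "fw-z1-nlb"
      type       = "private"
      profile    = "network-fixed"                   # Network Load Balancer profile
      route_mode = true                              # keeps VPC routes pointing at the active appliance
      subnets    = [ibm_is_subnet.transit_fw_z1.id]  # hypothetical firewall subnet
    }

    # The backend pool for the firewall-router instances. Session persistence
    # must be left unset (null) for a route mode pool.
    resource "ibm_is_lb_pool" "fw_z1" {
      lb                  = ibm_is_lb.fw_z1.id
      name                = "fw-z1-pool"
      protocol            = "tcp"
      algorithm           = "round_robin"
      health_type         = "tcp"
      health_monitor_port = 80
      health_delay        = 60
      health_retries      = 5
      health_timeout      = 30
    }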

  1. Change these two variables in config_tf/terraform.tfvars:

    firewall_nlb                 = true
    number_of_firewalls_per_zone = 2
    

    This change results in the IP address of the firewall-router changing from the firewall-router instance used earlier to the IP address of the NLB. The IP address change needs to be applied to a number of VPC route table routes in the transit and spoke VPCs. It is best to apply all of the layers previously applied:

  2. Apply all the layers through the all_firewall_asym_tf layer:

    ./apply.sh : all_firewall_asym_tf
    

Observe the changes that were made:

  1. Open the Load balancers for VPC.
  2. Select the load balancer in zone 1 (Dallas 1/us-south-1); it has the suffix fw-z1-s3.
  3. Note the Private IPs.

Compare the Private IPs with those in the transit VPC ingress route table:

  1. Open the Virtual Private Clouds.
  2. Select the transit VPC.
  3. Click on Manage routing tables.
  4. Click on the tgw-ingress routing table. Notice the Next hop IP address matches one of the NLB Private IPs.

Verify resiliency:

  1. Run the spoke 0 zone 1 tests:
    pytest -k r-spoke0-z1 -m curl
    
  2. Open the Virtual server instances for VPC
  3. Stop traffic to firewall instance 0 by specifying a security group that does not allow inbound port 80. Locate the instance with the suffix fw-z1-s3-0 and open the details view:
    1. Scroll down and click the pencil edit icon next to the Network Interface.
    2. Uncheck the x-fw-inall-outall security group.
    3. Check the x-fw-in22-outall security group.
    4. Click Save.
  4. Run the pytest again. It will indicate failures. It will take a few minutes for the NLB to stop routing traffic to the unresponsive instance, at which point all tests will pass. Continue waiting and running pytest until all tests pass.

The NLB firewall is no longer required. Remove the NLB firewall:

  1. Change these two variables in config_tf/terraform.tfvars:

    firewall_nlb                 = false
    number_of_firewalls_per_zone = 1
    
  2. Apply all the layers through the all_firewall_asym_tf layer:

    ./apply.sh : all_firewall_asym_tf
    

Note about NLB configured in routing mode

NLB route mode rewrites route table entries, always keeping the active NLB appliance IP address in the route table during a failover. However, this is only done for routes in the transit VPC that contains the NLB. The spokes have egress routes that were initialized with one of the NLB appliance IPs; the spoke next hop will not be updated on an NLB appliance failover!

It is therefore required to maintain an ingress route in the transit VPC, which will be rewritten by the NLB to reflect the active appliance. The spoke egress route delivers packets to the correct zone of the transit VPC. Routing within the transit VPC zone then finds the matching ingress route, which contains the active appliance.

Below is the transit VPC ingress route table discussed earlier. The next hop will be kept up to date with the active NLB appliance. Note that Dallas 3 has a change written by the NLB route mode service to reflect the active appliance.

Zone       Destination    Next hop
Dallas 1   10.0.0.0/8     10.1.15.196
Dallas 2   10.0.0.0/8     10.2.15.196
Dallas 3   10.0.0.0/8     10.3.15.197

The NLB requires that an IAM authorization be created that allows the NLB to write to the VPC. This authorization was created by the apply.sh script. See creating a network load balancer with routing mode for more details on the configuration that was performed by the script.

The route mode NLB pool must be configured with Session persistence type set to null.
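
For reference, the service-to-service authorization is roughly the following Terraform sketch; the resource types and role here follow the routing mode documentation, so verify them against the current provider and IAM docs:

    # Sketch: allow the VPC load balancer service to update routes in the VPC.
    resource "ibm_iam_authorization_policy" "nlb_to_vpc" {
      source_service_name  = "is"
      source_resource_type = "load-balancer"
      target_service_name  = "is"
      target_resource_type = "vpc"
      roles                = ["Editor"]
    }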

DNS

The IBM Cloud DNS Services service is used to convert names to IP addresses. In this example, a DNS Services instance is created in the cloud. The DNS zone cloud.example.com is created and the transit VPC is added as a permitted network. DNS records for the cloud instances are added to cloud.example.com. For example, an A record is created for the spoke 0 worker in zone 1 with the full name spoke0-z1-worker.cloud.example.com.

Read about DNS sharing for VPE gateways. The transit VPC is enabled as a DNS hub. Each spoke VPC is configured with a DNS resolution binding to the transit VPC hub. This configures the spoke VPC DHCP settings so that the DNS servers are the transit VPC custom resolvers.
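
A minimal Terraform sketch of the DNS zone, the permitted network, and one A record, assuming a hypothetical DNS Services instance resource and worker IP:

    # Sketch: DNS zone with the transit VPC as a permitted network and an
    # A record for the spoke 0 zone 1 worker.
    resource "ibm_dns_zone" "cloud" {
      instance_id = ibm_resource_instance.dns.guid   # hypothetical DNS Services instance
      name        = "cloud.example.com"
    }

    resource "ibm_dns_permitted_network" "transit" {
      instance_id = ibm_resource_instance.dns.guid
      zone_id     = ibm_dns_zone.cloud.zone_id
      vpc_crn     = ibm_is_vpc.transit.crn
      type        = "vpc"
    }

    resource "ibm_dns_resource_record" "spoke0_z1_worker" {
      instance_id = ibm_resource_instance.dns.guid
      zone_id     = ibm_dns_zone.cloud.zone_id
      type        = "A"
      name        = "spoke0-z1-worker"
      rdata       = "10.1.0.4"                       # assumed worker IP
      ttl         = 3600
    }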

DNS Layout

DNS Resources

Apply the dns_tf layer to add a cloud DNS zone and an A record for each of the test instances in the transit VPC and spoke VPCs. A DNS instance is also created for the enterprise simulation.

./apply.sh dns_tf

Inspect the DNS service created:

  1. Open the Resource list in the IBM Cloud console.
  2. Expand the Networking section and notice the DNS Services.
  3. Locate and click to open the instance with the suffix transit.
  4. Click on the DNS zone cloud.example.com. Notice the A records associated with each test instance in the transit and spokes.
  5. Click on the Custom resolver tab on the left and note that a resolver resides in each of the zones.
  6. Click on the Forwarding rules tab and notice the forwarding rules. Notice that enterprise.example.com is forwarded to the on premises resolvers.

Inspect the transit and spoke VPCs and notice the DNS configuration:

  1. Open the VPCs
  2. Notice the transit VPC has the DNS-Hub indicator set.
  3. Notice each spoke VPC has the DNS-Shared indicator set.
  4. Click one of the spoke VPCs.
    1. Scroll down to the Optional DNS settings
    2. Open the DNS resolver settings twisty and notice the DNS resolver type is delegated and the DNS resolver servers are in the transit VPC: 10.1.15.x, 10.2.15.y, 10.3.15.z.
    3. Open the DNS resolution binding twisty and notice the DNS hub VPC is set to the transit VPC. A Terraform sketch of the hub and binding configuration follows this list.
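
As a rough, illustrative sketch only (check the dns block and binding schema against the current IBM provider documentation), the hub and binding configuration looks approximately like this:

    # Sketch: the transit VPC is the DNS hub; a spoke VPC is bound to it so
    # that the spoke resolves names through the hub's custom resolvers.
    resource "ibm_is_vpc" "transit" {
      name = "transit"
      dns {
        enable_hub = true
      }
    }

    resource "ibm_is_vpc_dns_resolution_binding" "spoke0" {
      name   = "spoke0-to-transit"
      vpc_id = ibm_is_vpc.spoke0.id   # the spoke being bound
      vpc {
        id = ibm_is_vpc.transit.id    # the DNS hub
      }
    }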

DNS Testing

There is a set of curl DNS tests available in the pytest script. These tests curl using the DNS name of the remote instance. There are quite a few, so run the tests in parallel:

pytest -n 10 -m dns

Virtual Private Endpoint Gateways

VPC allows private access to IBM Cloud services through Virtual Private Endpoint (VPE) for VPC. The VPE gateways allow fine-grained network access control through standard IBM Cloud VPC controls, such as security groups and subnet access control lists.

A DNS zone is created for each VPC VPE gateway. The DNS zone is automatically added to the private DNS service associated with the VPC. Each spoke VPC has a DNS configuration bound to the transit VPC. This enables the spoke VPE DNS zone to be shared to the transit VPC.
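
A minimal sketch of a VPE gateway for one spoke, assuming hypothetical database, VPC, and subnet resources:

    # Sketch: a VPE gateway in spoke 0 targeting the Databases for PostgreSQL
    # instance, with a reserved IP in the zone 1 subnet (other zones analogous).
    resource "ibm_is_virtual_endpoint_gateway" "spoke0_postgresql" {
      name = "spoke0-postgresql"
      vpc  = ibm_is_vpc.spoke0.id
      target {
        crn           = ibm_database.postgresql.id   # hypothetical database instance (CRN)
        resource_type = "provider_cloud_service"
      }
      ips {
        subnet = ibm_is_subnet.spoke0_z1.id          # hypothetical spoke 0 zone 1 subnet
        name   = "spoke0-postgresql-z1-ip"
      }
    }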

Adding virtual private endpoint gateways

  1. Create an IBM Cloud Databases for PostgreSQL instance and VPEs for the transit and each of the spoke VPCs by applying the vpe_transit_tf and vpe_spokes_tf layers:

    ./apply.sh vpe_transit_tf vpe_spokes_tf
    
  2. There is a set of vpe and vpedns tests available in the pytest script. The vpedns test verifies that the DNS name of the Databases for PostgreSQL instance resolves to an IP address within the private CIDR block of the enclosing VPC. The vpe test executes a psql command to access the Databases for PostgreSQL instance remotely. Test vpe and vpedns from spoke 0 zone 1:

    • Expected results: all tests pass.
    pytest -m 'vpe or vpedns' -k spoke0-z1
    

All tests in this tutorial should now pass. There are quite a few. Run them in parallel:

pytest -n 10

Production Notes and Conclusions

The VPC reference architecture for IBM Cloud for Financial Services has much more detail on securing workloads in the IBM Cloud.

Some obvious changes to make:

  • CIDR blocks were chosen for clarity and ease of explanation. The availability zones in the multizone region could be 10.0.0.0/10, 10.64.0.0/10, and 10.128.0.0/10 to conserve address space. Similarly, the address space for worker nodes could be expanded at the expense of firewall, DNS, and VPE space.
  • Security Groups for each of the network interfaces for worker VSIs, Virtual Private Endpoint Gateways, DNS Locations and firewalls should all be carefully considered.
  • Network Access Control Lists for each subnet should be carefully considered.
  • Floating IPs were attached to all test instances to support connectivity tests via SSH. This is not required or desirable in production.
  • Implement context-based restrictions rules to further control access to all resources.

In this tutorial you created a hub VPC and a set of spoke VPCs. You routed all cross VPC traffic through a transit VPC firewall-router. A DNS service was created for the transit VPC hub and each spoke VPC was DNS bound to the transit VPC.

Remove resources

Execute terraform destroy in all directories in reverse order using the ./apply.sh command:

./apply.sh -d : :

Expand the tutorial

Your architecture may not be the same as the one presented, but will likely be constructed from the fundamental components discussed here. Ideas to expand this tutorial: