Power Systems communication through a transit VPC
This tutorial may incur costs. Use the Cost Estimator to generate a cost estimate based on your projected usage.
IBM® Power® Virtual Server workspaces can host Power Virtual Server instances. IBM Cloud® also supports Virtual Private Cloud (VPC). Power Virtual Server can connect to VPCs through an IBM Cloud Transit Gateway and access VPC resources. This tutorial walks you through an example implementation and explores the architecture depicted in this high-level view:
- Transit VPC and children resources like virtual server instances.
- VPC virtual private endpoint gateways (VPEs) are used to access cloud service instances like Object Storage.
- A Transit Gateway connected to the transit VPC and the spokes.
- VPN for VPC connectivity between the transit VPC and enterprise network.
- Power Virtual Server in a region with Power Edge Router (PER) can access everything through the attached Transit Gateway.
This tutorial is stand-alone but conceptually builds on the two-part tutorial Centralize communication through a VPC Transit Hub and Spoke architecture. Dive even deeper into VPC in the foundation tutorials: part one and part two.
Objectives
- Understand the concepts behind Power Virtual Server networking.
- Utilize the IBM Cloud Transit Gateway for connecting Power Virtual Server to VPC.
- Route Power Virtual Server traffic to on-premises through a VPC site-to-site VPN.
- Connect Power Virtual Server instances through VPC virtual private endpoint gateways to services.
- Utilize the DNS service routing and forwarding rules to build an architecturally sound name resolution system.
- Use VPC virtual private endpoint gateways to securely access cloud services.
Before you begin
This tutorial requires Power Virtual Server data centers that support the Power Edge Router (PER). See Getting started with the Power Edge Router for more information, including the list of data centers where the solution is available.
This tutorial requires:
- Terraform to use Infrastructure as Code to provision resources
- Python to optionally run the pytest commands
- Prerequisites in the companion GitHub repository
See the prerequisites for a few options including a Dockerfile to build the prerequisite environment.
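A quick way to confirm that the tools are available on your PATH (version numbers will vary):

terraform version
python --version   # or python3, depending on your installation
pytest --version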
In addition, check for user permissions. Be sure that your user account has sufficient permissions to create and manage all the resources in this tutorial.
Provision resources
- The companion GitHub repository has the source files to implement the architecture. In a desktop shell, clone the repository:
git clone https://github.com/IBM-Cloud/vpc-transit
cd vpc-transit
- The config_tf directory requires a file terraform.tfvars. The file template.power.terraform.tfvars is the starting point for this tutorial:
cp config_tf/template.power.terraform.tfvars config_tf/terraform.tfvars
- Edit config_tf/terraform.tfvars. Use the comments in that file as your guide. Change the values of resource_group_name (an existing resource group) and basename. The string $BASENAME in the text below refers to the basename provided here.
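For illustration, the edited lines might look like the following (both values are examples; use a resource group that exists in your account and choose a short, unique basename):

resource_group_name = "Default"  # example; an existing resource group in your account
basename            = "abc"      # example; prefixes the names of all created resources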
- It is possible to provision the architecture a layer at a time. A shell command ./apply.sh is provided to install the layers in order. The following displays help:
./apply.sh
- If you don't already have one, obtain a Platform API key and export the API key for use by Terraform:
export IBMCLOUD_API_KEY=YourAPIKey
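If you use the IBM Cloud CLI, one way to create a key is sketched below (the key name is illustrative; copy the key value from the command output and export it as shown above):

ibmcloud iam api-key-create my-terraform-key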
- Install all layers. The ':' characters represent the first and last layers:
./apply.sh : :
It can take up to 30 minutes to create the resources in the diagram. The enterprise is simulated using a VPC.
Check the non-overlapping IP address layout
The address layout is shown below. Notice:
- An availability zone address space, 10.1.0.0/16, is used for VPC availability zone 1 and Power Virtual Server workspaces in dal10.
- The address spaces for enterprise, transit, and spoke0 do not overlap.
- An on-prem address prefix in the transit VPC is used to advertise the enterprise routes through the Transit Gateway. No subnets are created in the transit VPC from the on-prem prefix. This is discussed in the Investigate the Transit Gateway step below.
Explore the architecture in the IBM Cloud console:
- Navigate to Virtual private clouds.
- Select your region from the menu.
- Select the enterprise VPC and notice the address prefix 192.168.0.0/24.
- Navigate to Virtual private clouds.
- Select the transit VPC and notice:
  - The address prefix 10.1.15.0/24 defines the transit VPC zone 1.
  - The on-prem address prefix 192.168.0.0/24.
Verify the SSH keys
- The provisioning created two files, one for each member of the key pair required for SSH:
- config_tf/id_rsa - private key that you should keep safe.
- config_tf/id_rsa.pub - public key that can be given to third parties.
- The public key was used to create two SSH keys in the cloud:
- Power SSH key.
- SSH key for VPC.
- Locate VPC SSH key:
- Navigate to SSH keys for VPC.
- Notice the SSH key with your $BASENAME.
- Locate the Power SSH key:
- Navigate to Power SSH keys.
- In the left navigation panel, select the workspace with your $BASENAME from the drop-down below the word Workspaces.
- Notice the SSH key with your $BASENAME.
Optionally, verify that the contents of each cloud SSH key match the contents of the public key file.
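One quick way to compare, assuming standard OpenSSH tooling, is to print the local public key and its fingerprint:

ssh-keygen -lf config_tf/id_rsa.pub   # fingerprint of the generated public key
cat config_tf/id_rsa.pub              # full public key text to compare against the console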
Open the Power® Virtual Server workspace
Along with the SSH keys, the provisioning created a Power® Virtual Server workspace, subnets, and an instance.
- Open the Power Virtual Server subnets page.
- In the left navigation panel, select the workspace with your $BASENAME from the drop-down below the word Workspaces.
- Click Subnets in the Networking menu on the left (if required) and notice the public and private subnets that were created.
- Click the private subnet name and notice the Gateway address, which is referenced later in the IP route configuration of the instance.
- Click the public subnet name and notice the Gateway address, which is referenced later in the IP route configuration of the instance.
- Click the Virtual server instances on the left and notice the instance that was provisioned along with the public and private IP addresses.
Configure the virtual server
The Terraform configuration created a Power Virtual Server Linux virtual server instance but was not able to fully configure it. It is now possible to configure the IP route tables and install an NGINX server to support testing.
cd power_tf; # should be in the .../vpc-transit/power_tf directory
terraform output fixpower
This experience looks something like this:
% cd power_tf
% terraform output fixpower
[
{
"abc-spoke1" = <<-EOT
# ssh -J root@52.116.131.48 root@10.1.2.31
ssh -oProxyCommand="ssh -W %h:%p -i ../config_tf/id_rsa root@52.116.131.48" -i ../config_tf/id_rsa root@10.1.2.31
ip route add 10.0.0.0/8 via 10.1.2.1 dev eth0
ip route add 172.16.0.0/12 via 10.1.2.1 dev eth0
ip route add 192.168.0.0/16 via 10.1.2.1 dev eth0
ip route change default via 192.168.232.1 dev eth1
exit
# it is now possible to ssh directly to the public IP address
ssh -i ../config_tf/id_rsa root@150.240.147.36
# execute the rest of these commands to install nginx for testing
zypper install -y nginx
systemctl start nginx
echo abc-spoke1 > /srv/www/htdocs/name
sleep 10
curl localhost/name
EOT
},
]
In a new terminal window, copy and paste the commands a line at a time. Here is what is happening:
- The SSH command logs in to the virtual server instance using the private SSH key created earlier. It is required to jump through an intermediate transit VPC virtual server. The -oProxyCommand configures the jump server.
- The ip route commands run on the Power Linux server and route all private network CIDR blocks through the private subnet (eth0). Notice that these include both the 10.0.0.0/8 cloud CIDR block and the 192.168.0.0/16 enterprise CIDR block.
- The default route sends the rest of the addresses, including the IP address of your workstation, through the public subnet (eth1). This allows the test automation to SSH directly to the public IP address of the virtual server instance in the future and avoid the jump server.
- Quit the SSH session.
- Use SSH to directly log in to the instance using the public IP address. This verifies that the IP route configuration is correct.
- The final step is to install NGINX. NGINX is an HTTP server that hosts a web page that is verified using a curl command.
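To confirm that the route changes took effect, list the routing table on the instance (a quick sanity check; the gateway addresses in your output will differ):

ip route show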
You can keep this shell available for use in future steps.
Test network connectivity
A pytest test suite is used to exhaustively test communication paths.
It is not required to use pytest to verify the results. It is straightforward, but tedious, to reproduce the test results shown below by hand. For each line of the example output, find the resource in the Resources view of the IBM Cloud console, navigate to the left resource, and locate the public IP address for an SSH session. Using the shell of the cloud instance, run a curl command to the private IP address of the instance on the right: curl A.B.C.D/name.
There are a couple of ways to install and use Python as covered in the README.md.
Each pytest test SSHs to an instance on the left and performs a connectivity test, like running a curl command to the instance on the right. The default SSH environment is used to log in to the instances on the left. If you see unexpected test results, try the pytest troubleshooting section.
Make sure that your current directory is vpc-transit.
cd ..
pwd; # .../vpc-transit
Test network connectivity using pytest:
pytest
Example output:
(vpc-transit) IBM-Cloud/vpc-transit % pytest
============================================ test session starts =============================================
platform darwin -- Python 3.11.5, pytest-7.4.4, pluggy-1.3.0 -- /Users/powellquiring/github.com/IBM-Cloud/vpc-transit/venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/powellquiring/github.com/IBM-Cloud/vpc-transit
configfile: pytest.ini
testpaths: py
plugins: xdist-3.5.0
collected 31 items
py/test_transit.py::test_ping[l-spoke0 -> r-spoke0] PASSED [ 3%]
py/test_transit.py::test_ping[l-spoke0 -> r-enterprise-z1-worker] PASSED [ 6%]
py/test_transit.py::test_ping[l-spoke0 -> r-transit-z1-worker] PASSED [ 9%]
py/test_transit.py::test_ping[l-enterprise-z1-worker -> r-spoke0] PASSED [ 12%]
py/test_transit.py::test_ping[l-enterprise-z1-worker -> r-enterprise-z1-worker] PASSED [ 16%]
py/test_transit.py::test_ping[l-enterprise-z1-worker -> r-transit-z1-worker] PASSED [ 19%]
py/test_transit.py::test_ping[l-transit-z1-worker -> r-spoke0] PASSED [ 22%]
py/test_transit.py::test_ping[l-transit-z1-worker -> r-enterprise-z1-worker] PASSED [ 25%]
py/test_transit.py::test_ping[l-transit-z1-worker -> r-transit-z1-worker] PASSED [ 29%]
py/test_transit.py::test_curl[l-spoke0 -> r-spoke0] PASSED [ 32%]
py/test_transit.py::test_curl[l-spoke0 -> r-enterprise-z1-worker] PASSED [ 35%]
py/test_transit.py::test_curl[l-spoke0 -> r-transit-z1-worker] PASSED [ 38%]
py/test_transit.py::test_curl[l-enterprise-z1-worker -> r-spoke0] PASSED [ 41%]
py/test_transit.py::test_curl[l-enterprise-z1-worker -> r-enterprise-z1-worker] PASSED [ 45%]
py/test_transit.py::test_curl[l-enterprise-z1-worker -> r-transit-z1-worker] PASSED [ 48%]
py/test_transit.py::test_curl[l-transit-z1-worker -> r-spoke0] PASSED [ 51%]
py/test_transit.py::test_curl[l-transit-z1-worker -> r-enterprise-z1-worker] PASSED [ 54%]
py/test_transit.py::test_curl[l-transit-z1-worker -> r-transit-z1-worker] PASSED [ 58%]
py/test_transit.py::test_curl_dns[l-spoke0 -> r-abc-enterprise-z1-worker.abc-enterprise.example.com] PASSED [ 61%]
py/test_transit.py::test_curl_dns[l-spoke0 -> r-abc-transit-z1-worker.abc-transit.example.com] PASSED [ 64%]
py/test_transit.py::test_curl_dns[l-enterprise-z1-worker -> r-abc-enterprise-z1-worker.abc-enterprise.example.com] PASSED [ 67%]
py/test_transit.py::test_curl_dns[l-enterprise-z1-worker -> r-abc-transit-z1-worker.abc-transit.example.com] PASSED [ 70%]
py/test_transit.py::test_curl_dns[l-transit-z1-worker -> r-abc-enterprise-z1-worker.abc-enterprise.example.com] PASSED [ 74%]
py/test_transit.py::test_curl_dns[l-transit-z1-worker -> r-abc-transit-z1-worker.abc-transit.example.com] PASSED [ 77%]
py/test_transit.py::test_vpe_dns_resolution[cos spoke0 -> transit s3.direct.us-south.cloud-object-storage.appdomain.cloud] PASSED [ 80%]
py/test_transit.py::test_vpe_dns_resolution[cos enterprise-z1-worker -> transit s3.direct.us-south.cloud-object-storage.appdomain.cloud] PASSED [ 83%]
py/test_transit.py::test_vpe_dns_resolution[cos transit-z1-worker -> transit s3.direct.us-south.cloud-object-storage.appdomain.cloud] PASSED [ 87%]
py/test_transit.py::test_vpe[cos spoke0 -> transit s3.direct.us-south.cloud-object-storage.appdomain.cloud] PASSED [ 90%]
py/test_transit.py::test_vpe[cos enterprise-z1-worker -> transit s3.direct.us-south.cloud-object-storage.appdomain.cloud] PASSED [ 93%]
py/test_transit.py::test_vpe[cos transit-z1-worker -> transit s3.direct.us-south.cloud-object-storage.appdomain.cloud] PASSED [ 96%]
py/test_transit.py::test_lb[lb0] SKIPPED (got empty parameter set ['lb'], function test_lb at /Use...) [100%]
======================================= 30 passed, 1 skipped in 42.36s =======================================
Each test SSHs to the instance on the left side of the arrow '->' and accesses the right side of the arrow in the following way:
- test_ping - Ping the IP address.
- test_curl - Curl the IP address.
- test_curl_dns - Curl the DNS name.
- test_vpe_dns_resolution - Verify that the VPC virtual private endpoint (VPE) DNS name resolves to an IP address in the CIDR block of the cloud (this test does not actually access the right side).
- test_vpe - Exercise the resource using the DNS name and the resource-specific tool as required.
All tests should pass except for the load balancer (lb) test, which is skipped in this configuration.
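If a test fails, pytest's standard -k filter is a convenient way to rerun just a subset while investigating, for example:

pytest -k test_ping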
Investigate the Transit Gateway
This diagram has a green line showing the traffic path from the Power instance to the enterprise instance:
Inspect the Transit Gateway:
- Open Transit gateway and select the $BASENAME-tgw.
- There are two connections:
- Transit VPC.
- Spoke0 (Power Systems Virtual Server).
- Click BGP and Generate report. The enterprise CIDR, 192.168.0.0/24, is advertised by the transit VPC.
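If you prefer the command line, a sketch of the same inspection, assuming the Transit Gateway CLI plugin is installed (substitute the gateway ID reported by the first command):

ibmcloud tg gateways                  # list Transit Gateways; find $BASENAME-tgw
ibmcloud tg connections GATEWAY_ID    # list its two connections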
Why an on-prem address prefix in the transit VPC?
VPC address prefix routes are advertised through the Transit Gateway. The transit VPC address prefix, 10.1.15.0/24, is advertised and allows the Power® Virtual Server to route traffic to the resources in the transit VPC. The on-prem address prefix in the transit VPC, 192.168.0.0/24, allows the Power® Virtual Server to route traffic destined for this range to the transit VPC. See policy-based ingress routing integration.
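You can also list the address prefixes of the transit VPC from the command line, assuming the VPC CLI plugin is installed (the VPC is identified by name or ID; $BASENAME-transit is the naming used in this tutorial):

ibmcloud is vpc-address-prefixes $BASENAME-transit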
Understand the Power to enterprise data path through the transit VPC
The previous step demonstrated how the Transit Gateway learned the enterprise routes needed for the Power instance to reach the transit VPC when sending to an enterprise IP address like 192.168.0.4. VPC ingress routing in the transit VPC routes traffic directly to the VPN instance.
- Navigate to Virtual private clouds.
- Click VPCs on the left.
- Click the transit VPC.
- Scroll down and click Manage routing tables.
- Click the vpn-ingress routing table.
In the Traffic box, Accepts routes from indicates VPN gateway. This configuration allows the VPN gateway to automatically create a route in this routing table and adjust the next hop address of the route as needed.
The current status of this route can be found in the Routes table. It indicates that traffic addressed to 192.168.0.0/24 will be forwarded to a Next hop address in the VPC. Note the next hop IP address. You can find it in the VPC VPN service.
- Navigate to VPN and select the transit VPN gateway.
- Inspect the Gateway members section. The Private IP of the active member should match the Next hop noted earlier.
To ensure high availability, the VPN service keeps the Next hop IP address consistent with the active IP address of the available VPN resources.
Verify Power DNS resolution
This diagram has a blue line showing the DNS resolution forwarding chain used by the Power® Virtual Server instance.
The $BASENAME shown below is abc; substitute your own $BASENAME. In the Power® Virtual Server instance shell:
abc-spoke0:~ # BASENAME=abc
abc-spoke0:~ # dig abc-enterprise-z1-worker.$BASENAME-enterprise.com
; <<>> DiG 9.16.44 <<>> abc-enterprise-z1-worker.abc-enterprise.com
;; global options: +cmd
;; Got answer:
...
;; ANSWER SECTION:
abc-enterprise-z1-worker.abc-enterprise.com. 2454 IN A 192.168.0.4
...
A curl command returns data from the enterprise:
curl $BASENAME-enterprise-z1-worker.$BASENAME-enterprise.com/name
Example:
abc-spoke0:~ # curl $BASENAME-enterprise-z1-worker.$BASENAME-enterprise.com/name
abc-enterprise-z1-worker
It is possible to verify the DNS forwarding path shown on the blue line. First find the DNS server that is resolving the address:
- Navigate to Power Systems Virtual Server and select your workspace.
- Click Subnets on the left.
- Click the private subnet.
- One of the DNS Servers is 10.1.15.xy. Note the exact IP address.
This is the address of a DNS Services custom resolver. The initial bits of the address, 10.1.15, indicate that it is in the transit VPC. Locate the DNS instance and the custom resolver:
- Navigate to the Resource list.
- Open the Networking section and click the transit instance of the DNS service.
- In the transit DNS instance, click Custom resolver on the left.
- Click the custom resolver to open the details page.
Match the DNS Server IP address noted earlier (found in the Power private subnet) to the Resolver locations IP addresses.
The diagram shows an arrow from this DNS resolver to the enterprise network. Verify this by following the forwarding rules:
- Click the Forwarding rules tab at the top.
- Note that the forwarding rule for the $BASENAME-enterprise.com subdomain forwards to the enterprise resolvers with 192.168.0.xy addresses. These are the IP addresses of DNS resolvers in the enterprise. You can verify them by locating the DNS service for the enterprise in the Resource list.
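To observe the resolver in action, you can query it directly from the Power® Virtual Server instance shell (replace 10.1.15.xy with the resolver location IP noted earlier; BASENAME was set in a previous step):

dig @10.1.15.xy $BASENAME-enterprise-z1-worker.$BASENAME-enterprise.com +short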
Understand the VPC Virtual private endpoint gateway
IBM Cloud VPE for VPC enables you to connect to supported IBM Cloud services from your VPC network by using the IP addresses of your choosing, allocated from a subnet within your VPC. An Object Storage instance has been provisioned. When the VPE for VPC for the Object Storage instance was provisioned, a DNS record was created in the DNS service. Find the DNS name for Object Storage in the transit VPC:
- Navigate to the VPC virtual private endpoint gateways.
- Select the $BASENAME-transit-cos VPC virtual private endpoint gateway.
- Note the attached resource IP address. It is 10.1.15.x in the transit VPC zone 1.
- Note the Service endpoint. It is region specific: s3.direct.us-south.cloud-object-storage.appdomain.cloud.
In the Power® Virtual Server instance shell, use the dig command with the DNS name to find the IP address. Here is an example (abbreviated):
abc-spoke0:~ # dig s3.direct.us-south.cloud-object-storage.appdomain.cloud
; <<>> DiG 9.16.44 <<>> s3.direct.us-south.cloud-object-storage.appdomain.cloud
...
;; ANSWER SECTION:
s3.direct.us-south.cloud-object-storage.appdomain.cloud. 900 IN A 10.1.15.132
...
In this case 10.1.15.132 is the IP address of Object Storage through the virtual private endpoint gateway.
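As a sketch of exercising the service through this endpoint from the same shell, the following lists the buckets in the instance using an IAM bearer token (assumes jq is installed, IBMCLOUD_API_KEY is exported in that shell, and you substitute the CRN of your Object Storage instance):

# obtain an IAM access token from the API key
TOKEN=$(curl -s -X POST https://iam.cloud.ibm.com/identity/token \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d "grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=$IBMCLOUD_API_KEY" | jq -r .access_token)
# list buckets through the VPE (YOUR_COS_INSTANCE_CRN is a placeholder)
curl -H "Authorization: Bearer $TOKEN" \
  -H "ibm-service-instance-id: YOUR_COS_INSTANCE_CRN" \
  https://s3.direct.us-south.cloud-object-storage.appdomain.cloud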
Enforce VPC security
VPCs have [Network Access Control Lists (ACLs)](/docs/vpc?topic=vpc-using-acls) for subnets and security groups for network interfaces that can be configured to limit access to network resources.
Introduce a security group rule to restrict access to the VPC virtual private endpoint gateway from just the Power Virtual Server instances.
In the Power® Virtual Server instance shell, use the curl command to access a VPC instance in the transit VPC:
BASENAME=abc
curl $BASENAME-transit-z1-worker.$BASENAME-transit.com/name
Locate the security group and tighten up the rules.
- Navigate to Virtual server instances for VPC.
- Click the transit instance.
- Scroll down to Network interfaces and click the entry in Security groups.
- Click the Rules tab in the Security group property page.
- Locate the rule with Source 10.0.0.0/8. Click the hamburger menu on the right, then click Edit.
- Temporarily change the CIDR to 10.0.0.0/32.
Back in the Power® Virtual Server instance shell, repeat the curl command. The command does not complete:
curl $BASENAME-transit-z1-worker.$BASENAME-transit.com/name
Determine the IP address in the shell:
hostname -I
Example:
abc-spoke0:~ # hostname -I
10.1.0.37 192.168.230.234
The first 10.1.0.x number is the private IP address. Back in the VPC security group tab of the browser, edit the security group rule and change it to the address/32 (for example, 10.1.0.37/32).
Try the curl command again and it should work:
curl $BASENAME-transit-z1-worker.$BASENAME-transit.com/name
Back in the security group rule, change the CIDR block back to the original value 10.0.0.0/8.
Remove resources
Run terraform destroy in all directories in reverse order using the ./apply.sh command:
./apply.sh -d : :
Expand the tutorial
Your architecture might not be the same as the one presented, but will likely be constructed from the fundamental components discussed here. Ideas to expand this tutorial:
- Use a [VPC load balancer](/docs/openshift?topic=openshift-vpclb-about) to balance traffic between multiple Power Virtual Server instances.
- Integrate incoming public Internet access using IBM Cloud® Internet Services.
- Add Flow Logs for VPC capture in the transit.
- Put each of the spokes in a separate account in an enterprise.