Implementing a Red Hat Enterprise Linux High Availability Add-On cluster in a multizone region environment
Use the following information and procedures to implement a Red Hat Enterprise Linux (RHEL) High Availability Add-On cluster in a multizone region environment. The cluster uses instances in IBM® Power® Virtual Server as cluster nodes. The virtual server instances run in different zones in a multizone region. The setup uses either the powervs-move-ip or the powervs-subnet cluster resource agent to manage the service IP address of an application in a multizone region implementation.
The recommended resource agent is powervs-move-ip.
The resource agent supports only the use of different zones in the same multizone region. Deployment across multiple regions is not supported. See Multizone regions (MZR) and IBM Cloud regions for more information about multizone regions and available locations.
The information describes how to transform the individual virtual server instances into a cluster.
These procedures include installing the high availability packages and agents on each cluster node and configuring the fencing devices.
This information is intended for architects and specialists who are planning a high availability deployment of SAP applications on Power Virtual Server. It is not intended to replace existing SAP or Red Hat documentation.
Before you begin
Review the general requirements, product documentation, support articles, and SAP notes listed in Implementing high availability for SAP applications on IBM Power Virtual Server References.
Creating virtual server instances for the cluster
Use the instructions in Creating instances for a high availability cluster to create the virtual server instances that you want to use as cluster nodes.
Create two workspaces in two zones of a multizone region. Create a Transit Gateway and add both workspaces to the connections. Create two virtual server instances, one in each workspace.
Preparing the nodes for RHEL HA Add-On installation
The following section describes basic preparation steps on the cluster nodes. Make sure that you follow the steps on both nodes.
Log in as the root user to each of the cluster nodes.
Adding cluster node entries to the hosts file
On both nodes, add the IP addresses and hostnames of both nodes to the /etc/hosts file.
For more information, see Setting up /etc/hosts files on RHEL cluster nodes.
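For example, a minimal sketch of the entries, assuming hypothetical addresses and the short hostnames cl-n1 and cl-n2; use the actual IP addresses and hostnames of your instances.
10.51.0.10   cl-n1.example.com   cl-n1
10.52.0.10   cl-n2.example.com   cl-n2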
Preparing environment variables
To simplify the setup process, prepare some environment variables for the root user. These environment variables are used with later operating system commands in this information.
On both nodes, set the following environment variables.
# General settings
export CLUSTERNAME="SAP_CLUSTER"         # Cluster name
export APIKEY=<APIKEY>                   # API Key of the IBM Cloud IAM ServiceID for the fencing agent
export CLOUD_REGION=<CLOUD_REGION>       # Multizone region name
export PROXY_IP=<IP_ADDRESS>             # IP address of proxy server
# Workspace 1
export IBMCLOUD_CRN_1=<IBMCLOUD_CRN_1>   # Workspace CRN
export GUID_1=<GUID_1>                   # Workspace GUID
# Workspace 2
export IBMCLOUD_CRN_2=<IBMCLOUD_CRN_2>   # Workspace CRN
export GUID_2=<GUID_2>                   # Workspace GUID
# Virtual server instance 1
export NODE1=<HOSTNAME_1>                # Virtual server instance hostname
export POWERVSI_1=<POWERVSI_1>           # Virtual server instance id
# Virtual server instance 2
export NODE2=<HOSTNAME_2>                # Virtual server instance hostname
export POWERVSI_2=<POWERVSI_2>           # Virtual server instance id
To find the settings for the APIKEY, IBMCLOUD_CRN_?, GUID_?, and POWERVSI_? variables, follow the steps in Collecting parameters for configuring a high availability cluster.
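For illustration only, the following hypothetical values show the expected format of some of the variables; all identifiers are made up and must be replaced with the values that you collected.
export CLOUD_REGION="eu-de"                              # Example multizone region (Frankfurt)
export NODE1="cl-n1"                                     # Short hostname of the first instance
export GUID_1="12345678-90ab-cdef-1234-567890abcdef"     # Workspace GUIDs are UUIDs
export POWERVSI_1="abcdef01-2345-6789-abcd-ef0123456789" # Instance IDs are UUIDs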
Installing and configuring a RHEL HA Add-On cluster
Use the following steps to set up a two-node cluster for an IBM Power Virtual Server.
The instructions are based on the Red Hat product documentation and articles that are listed in Implementing high availability for SAP applications on IBM Power Virtual Server References.
You need to complete some steps on both nodes and some steps on either NODE1 or NODE2.
Installing RHEL HA Add-On software
Install the required software packages. The minimum operating system version required to use the powervs-subnet resource agent is RHEL 9.2.
The @server group must be installed on the operating system. This installation is a standard requirement for SAP applications.
Checking the RHEL HA repository
See Checking the RHEL HA repository for the steps to enable the RHEL HA repository.
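As a quick check (the exact repository ID depends on your RHEL release and update services), you can list the enabled repositories on both nodes and confirm that a high availability repository appears.
dnf repolist --enabled | grep -i highavailability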
Installing the RHEL HA Add-On software packages
Install the required software packages on both nodes by running the following command.
dnf install -y pcs pacemaker fence-agents-ibm-powervs
Make sure that you install at least the minimum version of the fence-agents-ibm-powervs package that is required for your Red Hat Enterprise Linux release:
- RHEL 9: fence-agents-ibm-powervs-4.10.0-43.el9
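To confirm the installed package level on both nodes after the installation, you can query the RPM database; the reported version must be at least the level listed above.
rpm -q fence-agents-ibm-powervs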
Configuring a RHEL HA Add-On cluster
Use the following steps to configure a RHEL HA Add-On cluster.
Configuring firewall services
Add the high availability service to the RHEL firewall if firewalld.service is installed and enabled.
On both nodes, run the following commands.
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload
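You can verify that the service is present in both the runtime and the permanent firewall configuration, for example:
firewall-cmd --list-services | grep high-availability
firewall-cmd --permanent --list-services | grep high-availability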
Starting the PCS daemon
Start the PCS daemon that is used for controlling and configuring RHEL HA Add-On clusters through PCS.
On both nodes, run the following commands.
systemctl enable --now pcsd.service
Make sure that the PCS service is running.
systemctl status pcsd.service
Setting a password for the hacluster user ID
Set the password for the hacluster user ID.
On both nodes, run the following command.
passwd hacluster
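If you automate the node preparation, the RHEL passwd command also accepts the password on standard input. A sketch, assuming the password is held in a shell variable named HACLUSTER_PW (hypothetical):
echo "${HACLUSTER_PW}" | passwd --stdin hacluster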
Authenticating the cluster nodes
Use the following command to authenticate the user hacluster to the PCS daemon on the nodes in the cluster. The command prompts you for the password that you set in the previous step.
On NODE1, run the following command.
pcs host auth ${NODE1} ${NODE2} -u hacluster
Configuring and starting the cluster nodes
Configure the cluster configuration file and synchronize the configuration to the specified nodes.
The --start option also starts the cluster service on the nodes.
On NODE1, run the following command.
pcs cluster setup ${CLUSTERNAME} --start ${NODE1} ${NODE2}
pcs status
Creating the fencing device
STONITH is an acronym for "Shoot The Other Node In The Head" and protects your data from corruption in a split-brain situation.
You must enable STONITH (fencing) for a RHEL HA Add-On production cluster.
Fence agent fence_ibm_powervs is the only supported agent for a STONITH device on Power Virtual Server clusters.
You must configure a fencing device for each of the two workspaces in the multizone region. The fence agent connects to the Power Cloud API by using the common APIKEY and CLOUD_REGION parameters. The parameters IBMCLOUD_CRN_<n>, GUID_<n>, and the instance ID POWERVSI_<n> are specific to the workspace.
You can test the agent invocation by using the parameters that you gathered in the Collecting parameters for configuring a high availability cluster section.
Identifying the virtual server instances for fencing
Use the list option of fence_ibm_powervs to identify or verify the instance IDs of the two cluster nodes.
On any node, run the following commands.
fence_ibm_powervs \
    --token=${APIKEY} \
    --crn=${IBMCLOUD_CRN_1} \
    --instance=${GUID_1} \
    --region=${CLOUD_REGION} \
    --api-type=public \
    -o list
fence_ibm_powervs \
    --token=${APIKEY} \
    --crn=${IBMCLOUD_CRN_2} \
    --instance=${GUID_2} \
    --region=${CLOUD_REGION} \
    --api-type=public \
    -o list
If the virtual server instances have access to only a private network, you must use the --api-type=private option, which also requires an extra --proxy option.
Example:
fence_ibm_powervs \
    --token=${APIKEY} \
    --crn=${IBMCLOUD_CRN_1} \
    --instance=${GUID_1} \
    --region=${CLOUD_REGION} \
    --api-type=private \
    --proxy=http://${PROXY_IP}:3128 \
    -o list
The following examples use the --api-type=private option.
Checking the status of both virtual server instances
On both nodes, run the following commands.
time fence_ibm_powervs \
    --token=${APIKEY} \
    --crn=${IBMCLOUD_CRN_1} \
    --instance=${GUID_1} \
    --region=${CLOUD_REGION} \
    --plug=${POWERVSI_1} \
    --api-type=private \
    --proxy=http://${PROXY_IP}:3128 \
    -o status
time fence_ibm_powervs \
    --token=${APIKEY} \
    --crn=${IBMCLOUD_CRN_2} \
    --instance=${GUID_2} \
    --region=${CLOUD_REGION} \
    --plug=${POWERVSI_2} \
    --api-type=private \
    --proxy=http://${PROXY_IP}:3128 \
    -o status
The status action of the fence agent against a virtual server instance (--plug=<POWERVSI_n>) displays its power status.
On both nodes, the two commands must report Status: ON.
The output of the time command might be useful later when you choose timeouts for the STONITH device.
You can add the -v flag for verbose output, which shows more information about connecting to the Power Cloud API and querying virtual server power status.
Creating the stonith devices
The following command shows the device-specific options for the fence_ibm_powervs fencing agent.
pcs stonith describe fence_ibm_powervs
Create the stonith device for both virtual server instances.
On NODE1, run the following commands.
pcs stonith create fence_node1 fence_ibm_powervs \
    token=${APIKEY} \
    crn=${IBMCLOUD_CRN_1} \
    instance=${GUID_1} \
    region=${CLOUD_REGION} \
    api_type=private \
    proxy=http://${PROXY_IP}:3128 \
    pcmk_host_map="${NODE1}:${POWERVSI_1}" \
    pcmk_reboot_timeout=600 \
    pcmk_monitor_timeout=600 \
    pcmk_status_timeout=60
pcs stonith create fence_node2 fence_ibm_powervs \
    token=${APIKEY} \
    crn=${IBMCLOUD_CRN_2} \
    instance=${GUID_2} \
    region=${CLOUD_REGION} \
    api_type=private \
    proxy=http://${PROXY_IP}:3128 \
    pcmk_host_map="${NODE2}:${POWERVSI_2}" \
    pcmk_reboot_timeout=600 \
    pcmk_monitor_timeout=600 \
    pcmk_status_timeout=60
Although the fence_ibm_powervs agent uses api-type as an option when started from the command line, the stonith resource needs to be created by using api_type.
Verify the configuration with the following commands.
pcs config
pcs status
pcs stonith config
pcs stonith status
Setting the stonith-action cluster property
For the powervs-subnet resource agent to work, you must set the stonith-action cluster property to off. When the cluster performs a fencing action, it triggers an off operation instead of a reboot for the fenced instance.
After this change, you always need to log in to the IBM Cloud Console and manually start an instance that was fenced by the cluster.
pcs property set stonith-action=off
Verify the change.
pcs config
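Depending on your pcs version, you can also display only the modified cluster properties instead of the full configuration, for example:
pcs property config stonith-action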
Testing fencing operations
To test the STONITH configuration, manually fence the nodes.
On NODE1, run the following commands.
pcs stonith fence ${NODE2}
pcs status
As a result, NODE2 stops.
Activate NODE2, then start the cluster on the node and try to fence NODE1.
On NODE2, run the following commands.
pcs cluster start
pcs status
pcs stonith status
pcs stonith fence ${NODE1}
NODE1 stops.
Activate NODE1, then start the cluster on the node.
On NODE1, run the following commands.
pcs cluster start
pcs status
pcs stonith status
Disabling the automatic startup of cluster services when the server boots
After a virtual server instance restarts, it takes some time for its STATUS to become ACTIVE and its Health Status to become OK. The powervs-subnet resource agent requires these states to function properly. Therefore, you must disable automatic cluster startup and start the cluster manually after the instance reaches the required states.
On any node, disable the automatic startup of cluster services at boot time.
pcs cluster disable --all
When you restart an instance, check the instance status in the IBM Cloud Console and wait until the Status field shows Active with a green checkmark. Then, use the following command to manually start the cluster.
pcs cluster start
Preparing a multizone RHEL HA Add-On cluster for a virtual IP address resource
Use the following steps to prepare a multizone RHEL HA Add-On cluster for a virtual IP address resource.
Two specific resource agents are available to manage a service IP address in a multizone region environment:
- powervs-move-ip resource agent
  During a takeover event, the powervs-move-ip resource agent updates predefined static routes in IBM Power Virtual Server and configures an overlay IP address as an IP alias address on the virtual server instance. If you use the powervs-move-ip resource agent, continue with the steps in Installing the powervs-move-ip resource agent.
- powervs-subnet resource agent
  During a takeover event, the powervs-subnet resource agent moves the entire subnet, including the IP address, from one workspace to the other. If you use the powervs-subnet resource agent, continue with the steps in Installing the powervs-subnet resource agent.
Installing the powervs-subnet resource agent
- Verify that the NetworkManager-config-server package is installed.
  On both nodes, run the following command.
  dnf list NetworkManager-config-server
  Sample output:
  # dnf list NetworkManager-config-server
  Installed Packages
  NetworkManager-config-server.noarch  1:1.42.2-16.el9_2  @rhel-9-for-ppc64le-baseos-e4s-rpms
  Make sure that the NetworkManager no-auto-default configuration variable is set to *.
  NetworkManager --print-config | grep "no-auto-default="
  Sample output:
  # NetworkManager --print-config | grep "no-auto-default="
  no-auto-default=*
  If no-auto-default shows a value other than *, edit the /etc/NetworkManager/conf.d/00-server.conf file and change the variable as needed (a configuration sketch follows these steps).
- Download the powervs-subnet resource agent.
  Currently, the powervs-subnet resource agent is available in the ClusterLabs GitHub resource-agents repository. Download the resource agent from https://github.com/ClusterLabs/resource-agents/blob/main/heartbeat/powervs-subnet.in and place a copy in the /tmp directory on both nodes (a download sketch follows these steps).
- Install the resource agent script.
  On both nodes, install the script in the OCF resource agents heartbeat directory and set its permissions.
  sed -e 's|#!@PYTHON@|#!/usr/bin/python3|' /tmp/powervs-subnet.in \
      > /usr/lib/ocf/resource.d/heartbeat/powervs-subnet
  chmod 755 /usr/lib/ocf/resource.d/heartbeat/powervs-subnet
  Use the following command to verify the installation and display a brief description of the resource agent.
  pcs resource describe powervs-subnet
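If you need to change the no-auto-default setting, the following is a minimal sketch of what the [main] section of /etc/NetworkManager/conf.d/00-server.conf might contain after the edit; check the existing content of the file on your systems first and reload NetworkManager afterward (for example, with nmcli general reload).
[main]
no-auto-default=*
To place the agent in the /tmp directory on both nodes, you can download the raw file directly. The following command is a sketch that assumes the raw.githubusercontent.com URL corresponding to the repository link above and direct or proxied HTTPS access from the nodes.
curl -fsSL https://raw.githubusercontent.com/ClusterLabs/resource-agents/main/heartbeat/powervs-subnet.in \
    -o /tmp/powervs-subnet.in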
Continue with the steps in Creating a service ID for the resource agent.
Installing the powervs-move-ip resource agent
- Download the powervs-move-ip resource agent.
  Currently, the powervs-move-ip resource agent is available in the ClusterLabs GitHub resource-agents repository. Download the resource agent from https://github.com/ClusterLabs/resource-agents/blob/main/heartbeat/powervs-move-ip.in and place a copy in the /tmp directory on both nodes (a download sketch follows these steps).
- Install the resource agent script.
  On both nodes, install the script in the OCF resource agents heartbeat directory and set its permissions.
  sed -e 's|#!@PYTHON@|#!/usr/bin/python3|' /tmp/powervs-move-ip.in \
      > /usr/lib/ocf/resource.d/heartbeat/powervs-move-ip
  chmod 755 /usr/lib/ocf/resource.d/heartbeat/powervs-move-ip
  Use the following command to verify the installation and display a brief description of the resource agent.
  pcs resource describe powervs-move-ip
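To place the agent in the /tmp directory on both nodes, you can fetch the raw file directly; this sketch assumes the raw.githubusercontent.com URL that corresponds to the repository link above and direct or proxied HTTPS access from the nodes.
curl -fsSL https://raw.githubusercontent.com/ClusterLabs/resource-agents/main/heartbeat/powervs-move-ip.in \
    -o /tmp/powervs-move-ip.in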
Creating the static route in the workspace for the powervs-move-ip resource agent
- Determine the next hop IP addresses of the virtual server instances of the cluster. Follow these steps:
  - Open the Power Virtual Server user interface in IBM Cloud.
  - Click Workspaces in the left navigation menu.
  - Select the workspace where the cluster node is provisioned. The "Workspace details" panel is displayed.
  - Click View virtual servers. The list of virtual server instances is displayed.
  - Identify your virtual server instance and its IP address. Note the IP address. You need to enter that IP address as the Next hop of the route.
- For each virtual IP address that you configure as a cluster resource, create a static route in both Power Virtual Server workspaces. Follow these steps:
  - Open the Power Virtual Server user interface in IBM Cloud.
  - Click Workspaces in the left navigation menu.
  - Select the workspace in which you want to create the static route. The "Workspace details" panel is displayed.
  - Click View virtual servers.
  - In the navigation pane, click Networking > Routes. The Static routes page lists the existing static routes (if any).
  - Click Create static route to create a new route.
  - In the "Create static route" panel, specify the following settings.
    - Enter a name for the static route in the Name field.
    - Optionally, enter user tags in the User tags (optional) field.
    - In the Destination field, enter a valid IP address. The destination IP address must not belong to any of the CIDR blocks of the subnets in the scenario.
    - In the Next hop field, enter a valid IP address. The next hop IP address must
      - Belong to a CIDR range of a subnet in the workspace.
      - Match a primary IP address of a network adapter of the cluster node's virtual server instance.
    - Advertise and Status: Leave both switches set to Enabled (default). The Advertise switch controls whether the static route is propagated outside of the workspace to the Power Edge Router (PER). If Advertise is disabled, the route remains internal and is not visible to external network connections. The Status switch determines whether the static route is active within the network fabric. If Status is set to Disabled, the route is not used, even if Advertise is enabled.
    - Click Create route.
  - Repeat these steps for both cluster nodes. Note the Cloud Resource Name (CRN) of each route. You need to enter the CRNs during the cluster resource configuration steps for the specific high availability scenario.
  A worked example with hypothetical addresses follows these steps.
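For illustration only, assume that the virtual service IP address is 10.100.1.10 (outside all subnet CIDR ranges in the scenario), that NODE1 has the address 10.51.0.10 in workspace 1, and that NODE2 has the address 10.52.0.10 in workspace 2; all values are hypothetical. You would then create the following static routes and adapt the values to your own subnets.
Workspace 1: Destination = 10.100.1.10, Next hop = 10.51.0.10 (NODE1)
Workspace 2: Destination = 10.100.1.10, Next hop = 10.52.0.10 (NODE2)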
Creating a service ID for the resource agent
Follow the steps in Creating a Custom Role, Service ID, and API key in IBM Cloud to create a Service ID and an API key for the resource agent.
Conclusion
This completes the basic cluster implementation and the necessary preparations.
The IP address cluster resource using either the powervs-move-ip or the powervs-subnet resource agent is created during the configuration of the specific high availability scenario.