Configuring high availability for SAP S/4HANA (ASCS and ERS) in a Red Hat Enterprise Linux High Availability Add-On cluster in a multizone region environment

The following information describes how to configure ABAP SAP Central Services (ASCS) and Enqueue Replication Server (ERS) in a Red Hat Enterprise Linux (RHEL) High Availability Add-On cluster. The cluster runs on virtual server instances in IBM® Power® Virtual Server.

This configuration example applies to the second generation of the Standalone Enqueue Server, also known as ENSA2.

Since SAP S/4HANA 1809, ENSA2 is installed by default and supports both two-node and multi-node cluster configurations. This example demonstrates a two-node RHEL HA Add-On cluster setup with ENSA2. If the ASCS service fails, it automatically restarts on the node that hosts the ERS instance. The lock entries are restored from the ERS instance’s copy of the lock table. When the failed node is reactivated, the ERS instance relocates to the other node (anti-colocation) to maintain redundancy and protect the lock table copy.

Install the SAP database instance and other SAP application server instances on virtual server instances outside the two-node cluster that is used for the ASCS and ERS instances.

Before you begin

Review the general requirements, product documentation, support articles, and SAP notes listed in Implementing high availability for SAP applications on IBM Power Virtual Server References.

Prerequisites

  • This information describes a setup that uses NFS-mounted storage for the instance directories.

    • The ASCS instance uses the mount point /usr/sap/<SID>/ASCS<INSTNO>.
    • The ERS instance uses the mount point /usr/sap/<SID>/ERS<INSTNO>.
    • Both instances use the /sapmnt/<SID> mount point with shared read and write access.
    • Other shared file systems, such as the SAP transport directory saptrans that is mounted at /usr/sap/trans, might be needed.

    Make sure that a highly available NFS server is configured to serve these shares. The NFS server must not be installed on a virtual server that is part of the ENSA2 cluster. This document does not describe the steps for setting up file storage or creating cluster file systems.

  • Ensure that the virtual hostnames for the ASCS and ERS instances comply with the requirements that are outlined in Hostnames of SAP ABAP Platform servers.

  • The subnets and the virtual IP addresses for the ASCS and ERS instances must not exist in the Power Virtual Server workspaces. They are configured as cluster resources. However, you must add the virtual IP addresses and virtual hostnames of the ASCS and ERS instances to the Domain Name Service (DNS), and to the /etc/hosts file on all cluster nodes.
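
    The following lines are a hypothetical example of /etc/hosts entries that use the sample virtual hostnames and IP addresses shown later in this topic; the domain name is a placeholder. Adapt the entries to your environment and add them on all cluster nodes.

    10.40.21.102   s01ascs.example.com   s01ascs    # ASCS virtual IP address and virtual hostname
    10.40.22.102   s01ers.example.com    s01ers     # ERS virtual IP address and virtual hostname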

Preparing nodes to install ASCS and ERS instances

The following information describes how to prepare the nodes for installing the SAP ASCS and ERS instances.

Preparing environment variables

To simplify the setup process, define the following environment variables for the root user on both cluster nodes. These variables are used in subsequent operating system commands.

On both nodes, set the following environment variables. Certain variables are specific to either the powervs-move-ip or the powervs-subnet resource agent, as indicated in the respective comments.

# General settings
export CLUSTERNAME="SAP_S01"        # Cluster name
export NODE1=<HOSTNAME_1>           # Virtual server instance 1 hostname (in Workspace_1)
export NODE2=<HOSTNAME_2>           # Virtual server instance 2 hostname (in Workspace_2)

export SID=<SID>                    # SAP System ID (uppercase)
export sid=<sid>                    # SAP System ID (lowercase)

# ASCS instance
export ASCS_INSTNO=<INSTNO>         # ASCS instance number
export ASCS_VH=<virtual hostname>   # ASCS virtual hostname
export ASCS_IP=<IP address>         # ASCS virtual IP address
# resource agent powervs-move-ip only
export ASCS_ROUTE_CRN1=<Route_CRN1> # CRN of the static route in Workspace_1 with destination ASCS_IP (use with powervs-move-ip)
export ASCS_ROUTE_CRN2=<Route_CRN2> # CRN of the static route in Workspace_2 with destination ASCS_IP (use with powervs-move-ip)
# resource agent powervs-subnet only
export ASCS_NET=<Subnet name>       # Name for the ASCS subnet in IBM Cloud (use with powervs-subnet)
export ASCS_CIDR=<CIDR of subnet>   # CIDR of the ASCS subnet containing the service IP address (use with powervs-subnet)

# ERS instance
export ERS_INSTNO=<INSTNO>          # ERS instance number
export ERS_VH=<virtual hostname>    # ERS virtual hostname
export ERS_IP=<IP address>          # ERS virtual IP address
# resource agent powervs-move-ip only
export ERS_ROUTE_CRN1=<Route_CRN1>  # CRN of the static route in Workspace_1 with destination ERS_IP (use with powervs-move-ip)
export ERS_ROUTE_CRN2=<Route_CRN2>  # CRN of the static route in Workspace_2 with destination ERS_IP (use with powervs-move-ip)
# resource agent powervs-subnet only
export ERS_NET=<Subnet name>        # Name for the ERS subnet in IBM Cloud (use with powervs-subnet)
export ERS_CIDR=<CIDR of subnet>    # CIDR of the ERS subnet containing the service IP address (use with powervs-subnet)

# Other multizone region settings
export CLOUD_REGION=<CLOUD_REGION>       # Multizone region name
export APIKEY="APIKEY or path to file"   # API key of the ServiceID for the resource agent
export API_TYPE="private or public"      # Use private or public API endpoints
# resource agent powervs-move-ip only
export MON_API="false or true"           # Use cloud api in monitor command (use with powervs-move-ip)
# resource agent powervs-subnet only
export IBMCLOUD_CRN_1=<IBMCLOUD_CRN_1>   # Workspace 1 CRN (use with powervs-subnet)
export IBMCLOUD_CRN_2=<IBMCLOUD_CRN_2>   # Workspace 2 CRN (use with powervs-subnet)
export POWERVSI_1=<POWERVSI_1>           # Virtual server 1 instance id (use with powervs-subnet)
export POWERVSI_2=<POWERVSI_2>           # Virtual server 2 instance id (use with powervs-subnet)
export JUMBO="true or false"             # Enable Jumbo frames (use with powervs-subnet)

# NFS settings
export NFS_SERVER="NFS server"           # Hostname or IP address of the highly available NFS server
export NFS_SHARE="NFS server directory"  # Exported file system directory on the NFS server
export NFS_OPTIONS="rw,sec=sys"          # Sample NFS client mount options

The following export commands are an example of how to set the extra environment variables that are required for a multizone region implementation when using the resource agent powervs-move-ip.

# General settings
export CLUSTERNAME="SAP_S01"         # Cluster name
export NODE1="cl-s01-1"              # Virtual server instance 1 hostname
export NODE2="cl-s01-2"              # Virtual server instance 2 hostname

export SID="S01"                     # SAP System ID (uppercase)
export sid="s01"                     # SAP System ID (lowercase)

# ASCS instance
export ASCS_INSTNO="21"              # ASCS instance number
export ASCS_VH="s01ascs"             # ASCS virtual hostname
export ASCS_IP="10.40.21.102"        # ASCS virtual IP address
export ASCS_ROUTE_CRN1="crn:v1:bluemix:public:power-iaas:eu-de-2:a/a1b2c3d4e5f60123456789a1b2c3d4e5:a1b2c3d4-0123-4567-89ab-a1b2c3d4e5f6:route:a1b2c3d4-1234-5678-9abc-a1b2c3"
export ASCS_ROUTE_CRN2="crn:v1:bluemix:public:power-iaas:eu-de-1:a/a1b2c3d4e5f60123456789a1b2c3d4e5:e5f6a1b2-cdef-0123-4567-a1b2c3d4e5f6:route:1a2b3c4d-cba9-8765-4321-c3b2a1"

# ERS instance
export ERS_INSTNO="22"               # ERS instance number
export ERS_VH="s01ers"               # ERS virtual hostname
export ERS_IP="10.40.22.102"         # ERS virtual IP address
export ERS_ROUTE_CRN1="crn:v1:bluemix:public:power-iaas:eu-de-2:a/a1b2c3d4e5f60123456789a1b2c3d4e5:a1b2c3d4-0123-4567-89ab-a1b2c3d4e5f6:route:cba98765-5678-1234-9abc-a1b2c3"
export ERS_ROUTE_CRN2="crn:v1:bluemix:public:power-iaas:eu-de-1:a/a1b2c3d4e5f60123456789a1b2c3d4e5:e5f6a1b2-cdef-0123-4567-a1b2c3d4e5f6:route:9abca1b2-4321-8765-4321-b2a1c3"

# Other multizone region settings
export CLOUD_REGION="eu-de"
export APIKEY="@/root/.apikey.json"
export API_TYPE="private"
export MON_API="false"

# NFS settings
export NFS_SERVER="cl-nfs"           # Hostname or IP address of the highly available NFS server
export NFS_SHARE="/sapS01"           # Exported file system directory on the NFS server
export NFS_OPTIONS="rw,sec=sys"      # Sample NFS client mount options

Creating mount points for the instance file systems

On both cluster nodes, run the following command to create the required mount points for the SAP instance file systems.

mkdir -p /usr/sap/${SID}/{ASCS${ASCS_INSTNO},ERS${ERS_INSTNO}} /sapmnt/${SID}

Installing and setting up the RHEL HA Add-On cluster

Follow the instructions in Implementing a RHEL HA Add-On cluster on IBM Power Virtual Server in a Multizone Region Environment to install and configure the RHEL HA Add-On cluster. After the installation, configure and test cluster fencing as described in Creating the fencing device.

Preparing cluster resources before the SAP installation

Ensure that the RHEL HA Add-On cluster is active on both virtual server instances, and verify that node fencing functions as expected.
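
For example, you can check both with the following commands. This is a minimal verification sketch; both nodes must be reported as online and both fencing resources as started.

pcs status
pcs stonith status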

Configuring general cluster properties

To prevent the cluster from relocating healthy resources, for example when a previously failed node restarts, set the following default meta attributes.

  • resource-stickiness=1: Ensures that resources remain on their current node.
  • migration-threshold=3: Limits the number of failures before a resource is moved.

On NODE1, run the following commands.

pcs resource defaults update resource-stickiness=1
pcs resource defaults update migration-threshold=3
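
Optionally, display the configured defaults to confirm the settings.

pcs resource defaults

The output lists resource-stickiness=1 and migration-threshold=3.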

Configuring the cluster resource for sapmnt

On NODE1, run the following command to create a cloned Filesystem cluster resource that mounts SAPMNT from an NFS server on all cluster nodes.

pcs resource create fs_sapmnt Filesystem \
    device="${NFS_SERVER}:${NFS_SHARE}/sapmnt" \
    directory="/sapmnt/${SID}" \
    fstype='nfs' \
    options="${NFS_OPTIONS}" \
    clone interleave=true
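
Optionally, verify that the clone is started on both nodes and that the share is mounted. This check assumes that the environment variables from the previous section are set.

pcs resource status fs_sapmnt-clone
df -h /sapmnt/${SID}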

Preparing to install the ASCS instance on NODE1

On NODE1, run the following command to create a Filesystem cluster resource that mounts the ASCS instance directory.

pcs resource create ${sid}_fs_ascs${ASCS_INSTNO} Filesystem \
    device="${NFS_SERVER}:${NFS_SHARE}/ASCS" \
    directory=/usr/sap/${SID}/ASCS${ASCS_INSTNO} \
    fstype=nfs \
    options="${NFS_OPTIONS}" \
    force_unmount=safe \
    op start interval=0 timeout=60 \
    op stop interval=0 timeout=120 \
    --group ${sid}_ascs${ASCS_INSTNO}_group

Decide which resource agent to use for managing virtual IP resources in the cluster. For details, see SAP HANA high availability solution in a multizone region environment - Network considerations.

Complete all steps that are described in Preparing a multi-zone RHEL HA Add-On cluster for a virtual IP address resource.

Use the pcs resource describe command to view detailed parameter information for the powervs-move-ip or powervs-subnet resource agents.
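
For example, run one of the following commands on either node, depending on the agent that you selected.

pcs resource describe powervs-move-ip
pcs resource describe powervs-subnet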

If you use the powervs-move-ip resource agent, run the following command on NODE1 to create a cluster resource for the ASCS virtual IP address.

pcs resource create ${sid}_vip_ascs${ASCS_INSTNO} powervs-move-ip \
    api_key=${APIKEY} \
    api_type=${API_TYPE} \
    ip=${ASCS_IP} \
    route_host_map="${NODE1}:${ASCS_ROUTE_CRN1};${NODE2}:${ASCS_ROUTE_CRN2}" \
    region=${CLOUD_REGION} \
    monitor_api=${MON_API} \
    op start timeout=60 \
    op stop timeout=60 \
    op monitor interval=60 timeout=60 \
    --group ${sid}_ascs${ASCS_INSTNO}_group

Otherwise, run the following command on NODE1 to create a powervs-subnet cluster resource for the ASCS virtual IP address.

pcs resource create ${sid}_vip_ascs${ASCS_INSTNO} powervs-subnet \
    api_key=${APIKEY} \
    api_type=${API_TYPE} \
    cidr=${ASCS_CIDR} \
    ip=${ASCS_IP} \
    crn_host_map="${NODE1}:${IBMCLOUD_CRN_1};${NODE2}:${IBMCLOUD_CRN_2}" \
    vsi_host_map="${NODE1}:${POWERVSI_1};${NODE2}:${POWERVSI_2}" \
    jumbo=${JUMBO} \
    region=${CLOUD_REGION} \
    subnet_name=${ASCS_NET} \
    route_table=5${ASCS_INSTNO} \
    op start timeout=720 \
    op stop timeout=300 \
    op monitor interval=60 timeout=30 \
    --group ${sid}_ascs${ASCS_INSTNO}_group

Preparing to install the ERS instance on NODE2

On NODE1, run the following command to create a Filesystem cluster resource to mount the ERS instance directory.

pcs resource create ${sid}_fs_ers${ERS_INSTNO} Filesystem \
    device="${NFS_SERVER}:${NFS_SHARE}/ERS" \
    directory=/usr/sap/${SID}/ERS${ERS_INSTNO} \
    fstype=nfs \
    options="${NFS_OPTIONS}" \
    force_unmount=safe \
    op start interval=0 timeout=60 \
    op stop interval=0 timeout=120 \
    --group ${sid}_ers${ERS_INSTNO}_group

If you use the powervs-move-ip resource agent, run the following command on NODE1 to create a cluster resource for the ERS virtual IP address.

pcs resource create ${sid}_vip_ers${ERS_INSTNO} powervs-move-ip \
    api_key=${APIKEY} \
    api_type=${API_TYPE} \
    ip=${ERS_IP} \
    route_host_map="${NODE1}:${ERS_ROUTE_CRN1};${NODE2}:${ERS_ROUTE_CRN2}" \
    region=${CLOUD_REGION} \
    monitor_api=${MON_API} \
    op start timeout=60 \
    op stop timeout=60 \
    op monitor interval=60 timeout=60 \
    --group ${sid}_ers${ERS_INSTNO}_group

Otherwise, run the following command on NODE1 to create a powervs-subnet cluster resource for the ERS virtual IP address.

pcs resource create ${sid}_vip_ers${ERS_INSTNO} powervs-subnet \
    api_key=${APIKEY} \
    api_type=${API_TYPE} \
    cidr=${ERS_CIDR} \
    ip=${ERS_IP} \
    crn_host_map="${NODE1}:${IBMCLOUD_CRN_1};${NODE2}:${IBMCLOUD_CRN_2}" \
    vsi_host_map="${NODE1}:${POWERVSI_1};${NODE2}:${POWERVSI_2}" \
    jumbo=${JUMBO} \
    region=${CLOUD_REGION} \
    subnet_name=${ERS_NET} \
    route_table=5${ERS_INSTNO} \
    op start timeout=720 \
    op stop timeout=300 \
    op monitor interval=60 timeout=30 \
    --group ${sid}_ers${ERS_INSTNO}_group

Ensure that both virtual server instances in the cluster have the status Active and the health status OK before running the pcs resource config command.

Verifying the cluster configuration

On NODE1, run the following command to verify the current cluster configuration and ensure that all resources are correctly defined and active.

pcs status --full

Sample output:

# pcs status --full
Cluster name: SAP_S01
Status of pacemakerd: 'Pacemaker is running' (last updated 2024-11-20 14:04:05 +01:00)
Cluster Summary:
  * Stack: corosync
  * Current DC: cl-s01-2 (2) (version 2.1.5-9.el9_2.4-a3f44794f94) - partition with quorum
  * Last updated: Wed Nov 20 14:04:06 2024
  * Last change:  Wed Nov 20 13:51:19 2024 by hacluster via crmd on cl-s01-2
  * 2 nodes configured
  * 8 resource instances configured

Node List:
  * Node cl-s01-1 (1): online, feature set 3.16.2
  * Node cl-s01-2 (2): online, feature set 3.16.2

Full List of Resources:
  * fence_node1	(stonith:fence_ibm_powervs):	 Started cl-s01-2
  * fence_node2	(stonith:fence_ibm_powervs):	 Started cl-s01-2
  * Clone Set: fs_sapmnt-clone [fs_sapmnt]:
    * fs_sapmnt	(ocf:heartbeat:Filesystem):	 Started cl-s01-1
    * fs_sapmnt	(ocf:heartbeat:Filesystem):	 Started cl-s01-2
  * Resource Group: s01_ascs21_group:
    * s01_fs_ascs21	(ocf:heartbeat:Filesystem):	 Started cl-s01-1
    * s01_vip_ascs21	(ocf:heartbeat:powervs-subnet):	 Started cl-s01-1
  * Resource Group: s01_ers22_group:
    * s01_fs_ers22	(ocf:heartbeat:Filesystem):	 Started cl-s01-1
    * s01_vip_ers22	(ocf:heartbeat:powervs-subnet):	 Started cl-s01-1

Migration Summary:

Tickets:

PCSD Status:
  cl-s01-1: Online
  cl-s01-2: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

Ensure that the ${sid}_ascs${ASCS_INSTNO}_group cluster resource group is running on NODE1, and that the ${sid}_ers${ERS_INSTNO}_group cluster resource group is running on NODE2. If needed, use the pcs resource move <resource_group_name> command to relocate the resource group to the appropriate node.
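
The following commands are an example of relocating the ERS resource group to NODE2, using the environment variables that were set earlier. Depending on the pcs version, the move command removes the temporary location constraint automatically; otherwise, clear it afterward.

pcs resource move ${sid}_ers${ERS_INSTNO}_group ${NODE2}
pcs resource clear ${sid}_ers${ERS_INSTNO}_group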

Changing the ownership of the ASCS and ERS mount points

The sidadm user must own the mount points of the ASCS and ERS instance file systems. Create the required users and groups, and set the mount point ownership before you start the instance installation.

Follow these steps on both nodes to configure the correct ownership.

  1. Start the SAP Software Provisioning Manager (SWPM) to create the operating system users and groups.

    <swpm>/sapinst
    

    In the SWPM web interface, go to System Rename > Preparations > Operating System Users and Group. Record the user and group IDs, and verify that they are identical on both nodes.

  2. Change the ownership of the mount points.

    chown -R ${sid}adm:sapsys /sapmnt/${SID} /usr/sap/${SID}
    

Installing the ASCS and ERS instances

Use SWPM to install both instances.

  • Install ASCS and ERS instances on the cluster nodes.

    • On NODE1, use the virtual hostname ${ASCS_VH}, which is associated with the ASCS virtual IP address, to install the ASCS instance.
    <swpm>/sapinst SAPINST_USE_HOSTNAME=${ASCS_VH}
    
    • On NODE2, use the virtual hostname ${ERS_VH}, which is associated with the ERS virtual IP address, to install the ERS instance.
    <swpm>/sapinst SAPINST_USE_HOSTNAME=${ERS_VH}
    
  • Install all other SAP application instances outside the cluster environment.

Preparing the ASCS and ERS instances for cluster integration

Use the following steps to prepare the SAP instances for cluster integration.

Disabling the automatic start of the SAP instance agents for ASCS and ERS

Disable the automatic start of the sapstartsrv instance agents for both ASCS and ERS instances after a system reboot.

Verifying the SAP instance agent integration type

Recent versions of the SAP instance agent sapstartsrv provide native systemd support on Linux. For more information, refer to the SAP notes that are listed at SAP Notes.

On both nodes, check the content of the /usr/sap/sapservices file.

cat /usr/sap/sapservices

In the systemd format, entries begin with systemctl commands.

Example:

systemctl --no-ask-password start SAPS01_01 # sapstartsrv pf=/usr/sap/S01/SYS/profile/S01_ASCS01_cl-sap-scs

If the ASCS and ERS entries use systemd format, continue with the steps in Registering the ASCS and the ERS instances. In the classic format, entries begin with LD_LIBRARY_PATH definitions.

Example:

LD_LIBRARY_PATH=/usr/sap/S01/ASCS01/exe:$LD_LIBRARY_PATH;export LD_LIBRARY_PATH;/usr/sap/S01/ASCS01/exe/sapstartsrv pf=/usr/sap/S01/SYS/profile/S01_ASCS01_cl-sap-scs -D -u s01adm

If the entries for ASCS and ERS are in classic format, modify the /usr/sap/sapservices file to prevent the automatic start of the sapstartsrv instance agents for both ASCS and ERS instances after a system reboot.

On both nodes, remove or comment out the sapstartsrv entries for ASCS and ERS in the SAP services file.

sed -i -e 's/^LD_LIBRARY_PATH=/#LD_LIBRARY_PATH=/' /usr/sap/sapservices

Example:

#LD_LIBRARY_PATH=/usr/sap/S01/ASCS01/exe:$LD_LIBRARY_PATH;export LD_LIBRARY_PATH;/usr/sap/S01/ASCS01/exe/sapstartsrv pf=/usr/sap/S01/SYS/profile/S01_ASCS01_cl-sap-scs -D -u s01adm

Proceed to Installing permanent SAP license keys.

Registering the ASCS and the ERS instances

Register the SAP instances on both nodes.

  1. Log in as the root user on both nodes.

  2. Set the LD_LIBRARY_PATH environment variable to include the ASCS instance executable directory, and register the ASCS instance.

    export LD_LIBRARY_PATH=/usr/sap/${SID}/ASCS${ASCS_INSTNO}/exe && \
    /usr/sap/${SID}/ASCS${ASCS_INSTNO}/exe/sapstartsrv \
       pf=/usr/sap/${SID}/SYS/profile/${SID}_ASCS${ASCS_INSTNO}_${ASCS_VH} -reg
    
  3. Repeat the registration step for the ERS instance by using the ERS profile.

    export LD_LIBRARY_PATH=/usr/sap/${SID}/ERS${ERS_INSTNO}/exe && \
    /usr/sap/${SID}/ERS${ERS_INSTNO}/exe/sapstartsrv \
       pf=/usr/sap/${SID}/SYS/profile/${SID}_ERS${ERS_INSTNO}_${ERS_VH} -reg
    

Disabling systemd services of the ASCS and the ERS instances

On both nodes, disable the systemd service for the ASCS instance agent.

systemctl disable --now SAP${SID}_${ASCS_INSTNO}.service

Then, disable the systemd service for the ERS instance agent.

systemctl disable --now SAP${SID}_${ERS_INSTNO}.service

Disabling systemd restart of a crashed ASCS or ERS instance

Systemd includes built-in mechanisms for restarting crashed services. In a high availability setup, only the HA cluster should manage the SAP ASCS and ERS instances. To prevent systemd from automatically restarting these instances, create drop-in configuration files on both cluster nodes.

On both nodes, create the directories for the drop-in files.

mkdir /etc/systemd/system/SAP${SID}_${ASCS_INSTNO}.service.d
mkdir /etc/systemd/system/SAP${SID}_${ERS_INSTNO}.service.d

On both nodes, create the drop-in files for ASCS and ERS.

cat >> /etc/systemd/system/SAP${SID}_${ASCS_INSTNO}.service.d/HA.conf << EOT
[Service]
Restart=no
EOT
cat >> /etc/systemd/system/SAP${SID}_${ERS_INSTNO}.service.d/HA.conf << EOT
[Service]
Restart=no
EOT

Restart=no must be in the [Service] section, and the drop-in files must be available on all cluster nodes.

On both nodes, reload the systemd unit files.

systemctl daemon-reload
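
To confirm that the drop-in files are active, display the effective unit definitions. The output must include the HA.conf drop-in with Restart=no.

systemctl cat SAP${SID}_${ASCS_INSTNO}.service
systemctl cat SAP${SID}_${ERS_INSTNO}.service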

Installing permanent SAP license keys

When the SAP ASCS instance runs on a Power Virtual Server instance, the SAP license mechanism uses the partition UUID to generate the hardware key. For details, see SAP note 2879336 - Hardware key based on unique ID.

On both nodes, run the following command as the sidadm user to retrieve the hardware key.

sudo -i -u ${sid}adm -- sh -c 'saplikey -get'

Sample output:

$ sudo -i -u ${sid}adm -- sh -c 'saplikey -get'

saplikey: HARDWARE KEY = H1428224519

Record the HARDWARE KEY from each node.

You need the hardware keys from both nodes to request separate SAP license keys. For guidance on requesting license keys for failover systems, refer to the related SAP Notes.

Installing SAP resource agents

Install the required software packages. The resource-agents-sap package provides the SAPInstance cluster resource agent that is used to manage SAP instances.

When external SAP tools such as sapcontrol are used to manage an instance, sap_cluster_connector enables safe interaction with SAP instances that run inside the cluster. Without sap_cluster_connector, the RHEL HA Add-On cluster treats any instance state change that is triggered outside the cluster as a potential issue. If the SAP instances are managed exclusively by cluster tools, sap_cluster_connector is not required.

Install the packages for the cluster resource agent and the SAP Cluster Connector library. For details, see How to enable the SAP HA Interface for SAP ABAP application server instances managed by the RHEL HA Add-On.

On both nodes, run the following commands.

If necessary, use subscription-manager to enable the SAP NetWeaver repository. For instructions, refer to the RHEL for SAP Subscriptions and Repositories documentation.

subscription-manager repos --enable="rhel-8-for-ppc64le-sap-netweaver-e4s-rpms"

Install the required packages.

dnf install -y resource-agents-sap sap-cluster-connector

Configuring SAP Cluster Connector

Add the sidadm user to the haclient group on both nodes.

usermod -a -G haclient ${sid}adm
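
Verify the group membership with the id command. The output must list haclient among the groups of the ${sid}adm user.

id ${sid}adm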

Adapting the SAP instance profiles

The RHEL HA Add-On cluster and its resource agents control both the ASCS and ERS instances. Adjust the SAP instance start profiles to prevent the SAP start framework from automatically restarting the instance processes.

On NODE1, change to the SAP profile directory.

cd /sapmnt/${SID}/profile

Replace all Restart_Program entries with Start_Program in the ASCS and ERS instance profiles.

sed -i -e 's/Restart_Program_\([0-9][0-9]\)/Start_Program_\1/' ${SID}_ASCS${ASCS_INSTNO}_${ASCS_VH}
sed -i -e 's/Restart_Program_\([0-9][0-9]\)/Start_Program_\1/' ${SID}_ERS${ERS_INSTNO}_${ERS_VH}

Append the following lines to the end of the ASCS and ERS instance profiles to enable sap_cluster_connector integration:

service/halib = $(DIR_EXECUTABLE)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_cluster_connector
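
The following loop is a minimal sketch of how to append these lines to both instance profiles on NODE1. The quoted here-document delimiter prevents the shell from expanding $(DIR_EXECUTABLE), which must remain literal in the profiles.

# Append the SAP HA library settings to the ASCS and ERS instance profiles
for PROFILE in ${SID}_ASCS${ASCS_INSTNO}_${ASCS_VH} ${SID}_ERS${ERS_INSTNO}_${ERS_VH}
do
cat >> /sapmnt/${SID}/profile/${PROFILE} << 'EOT'
service/halib = $(DIR_EXECUTABLE)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_cluster_connector
EOT
done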

Configuring the ASCS and ERS cluster resources

Up to this point, the following are assumed:

  • A RHEL HA Add-On cluster is running on both virtual server instances and node fencing is tested.
  • A cloned Filesystem cluster resource is configured to mount the sapmnt share.
  • Two Filesystem cluster resources are configured to mount the ASCS and ERS instance file systems.
  • Two virtual IP address cluster resources (powervs-move-ip or powervs-subnet) are configured for the ASCS and ERS instances.
  • The ASCS instance is installed and active on NODE1.
  • The ERS instance is installed and active on NODE2.
  • All steps that are described in Preparing the ASCS and ERS instances for cluster integration are complete.

Configuring the ASCS cluster resource group

On NODE1, run the following command to create a cluster resource to manage the ASCS instance.

pcs resource create ${sid}_ascs${ASCS_INSTNO} SAPInstance \
    InstanceName="${SID}_ASCS${ASCS_INSTNO}_${ASCS_VH}" \
    START_PROFILE=/sapmnt/${SID}/profile/${SID}_ASCS${ASCS_INSTNO}_${ASCS_VH} \
    AUTOMATIC_RECOVER=false \
    meta resource-stickiness=5000 \
    migration-threshold=1 failure-timeout=60 \
    op monitor interval=20 on-fail=restart timeout=60 \
    op start interval=0 timeout=600 \
    op stop interval=0 timeout=600 \
    --group ${sid}_ascs${ASCS_INSTNO}_group

The meta resource-stickiness=5000 option balances out the colocation constraint with the ERS resource group so that the ASCS resource remains on the node where it started and does not migrate uncontrollably within the cluster.

To ensure that the ASCS instance remains on its designated node, add resource stickiness to the group.

pcs resource meta ${sid}_ascs${ASCS_INSTNO}_group \
    resource-stickiness=3000

Configuring the ERS cluster resource group

On NODE2, run the following command to create a cluster resource to manage the ERS instance.

pcs resource create ${sid}_ers${ERS_INSTNO} SAPInstance \
    InstanceName="${SID}_ERS${ERS_INSTNO}_${ERS_VH}" \
    START_PROFILE=/sapmnt/${SID}/profile/${SID}_ERS${ERS_INSTNO}_${ERS_VH} \
    AUTOMATIC_RECOVER=false \
    IS_ERS=true \
    op monitor interval=20 on-fail=restart timeout=60 \
    op start interval=0 timeout=600 \
    op stop interval=0 timeout=600 \
    --group ${sid}_ers${ERS_INSTNO}_group

Configuring the cluster constraints

On NODE1, run the following commands to configure cluster constraints.

A colocation constraint ensures that the resource groups ${sid}_ascs${ASCS_INSTNO}_group and ${sid}_ers${ERS_INSTNO}_group do not run on the same node if at least two nodes are available. If only one node is available, the constraint score of -5000 still allows both groups to run on the same node.

pcs constraint colocation add \
    ${sid}_ers${ERS_INSTNO}_group with ${sid}_ascs${ASCS_INSTNO}_group -- -5000

An order constraint ensures that ${sid}_ascs${ASCS_INSTNO}_group starts before ${sid}_ers${ERS_INSTNO}_group.

pcs constraint order start \
    ${sid}_ascs${ASCS_INSTNO}_group then stop ${sid}_ers${ERS_INSTNO}_group \
    symmetrical=false \
    kind=Optional

The following two order constraints ensure that the SAPMNT file system mounts before ${sid}_ascs${ASCS_INSTNO}_group and ${sid}_ers${ERS_INSTNO}_group start.

pcs constraint order fs_sapmnt-clone then ${sid}_ascs${ASCS_INSTNO}_group
pcs constraint order fs_sapmnt-clone then ${sid}_ers${ERS_INSTNO}_group
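
Optionally, list the configured constraints to verify the colocation and order definitions.

pcs constraint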

Conclusion

The ENSA2 cluster implementation in a multizone region environment is now complete.

Now, run tests similar to the ones described in Testing an SAP ENSA2 cluster to validate the cluster.

The following shows a sample output of the pcs status command for a completed ENSA2 cluster in a multizone region deployment.

Cluster name: SAP_S01
Status of pacemakerd: 'Pacemaker is running' (last updated 2024-11-22 09:42:15 +01:00)
Cluster Summary:
  * Stack: corosync
  * Current DC: cl-s01-1 (version 2.1.5-9.el9_2.4-a3f44794f94) - partition with quorum
  * Last updated: Fri Nov 22 09:42:15 2024
  * Last change:  Fri Nov 22 09:06:18 2024 by root via cibadmin on cl-s01-1
  * 2 nodes configured
  * 10 resource instances configured

Node List:
  * Online: [ cl-s01-1 cl-s01-2 ]

Full List of Resources:
  * fence_node1	(stonith:fence_ibm_powervs):	 Started cl-s01-1
  * fence_node2	(stonith:fence_ibm_powervs):	 Started cl-s01-2
  * Clone Set: fs_sapmnt-clone [fs_sapmnt]:
    * Started: [ cl-s01-1 cl-s01-2 ]
  * Resource Group: s01_ascs21_group:
    * s01_fs_ascs21	(ocf:heartbeat:Filesystem):	 Started cl-s01-1
    * s01_vip_ascs21	(ocf:heartbeat:powervs-subnet):	 Started cl-s01-1
    * s01_ascs21	(ocf:heartbeat:SAPInstance):	 Started cl-s01-1
  * Resource Group: s01_ers22_group:
    * s01_fs_ers22	(ocf:heartbeat:Filesystem):	 Started cl-s01-2
    * s01_vip_ers22	(ocf:heartbeat:powervs-subnet):	 Started cl-s01-2
    * s01_ers22	(ocf:heartbeat:SAPInstance):	 Started cl-s01-2

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled