Preparing and Installing SAP Data Intelligence
In the previous chapter, we created a Red Hat OpenShift on IBM Cloud cluster and prepared the jump host. The next steps prepare the cluster for the SAP Data Intelligence installation.
The instructions for implementing SAP Data Intelligence on the Red Hat OpenShift on IBM Cloud cluster follow the Red Hat article (RHA) SAP Data Intelligence 3 on OpenShift Container Platform 4, starting from 3.3. OCP Post Installation Steps, because in this scenario Red Hat OpenShift on IBM Cloud 4.8.x has already been provisioned.
Setting the sdi role for the worker nodes for SAP Data Intelligence
According to section 3.3.4.1. Label the compute nodes for SAP Data Intelligence in the RHA, set the sdi role for the worker nodes that will be considered by Red Hat's SDI Observer. See the parameter SDI_NODE_SELECTOR in the SDI Observer's installation script below.
- Determine the nodes' names:
oc get nodes
Example output:
NAME               STATUS   ROLES           AGE   VERSION
10.**********.177  Ready    master,worker   1d    v1.21.8+ed4d8fd
10.**********.182  Ready    master,worker   1d    v1.21.8+ed4d8fd
10.**********.185  Ready    master,worker   1d    v1.21.8+ed4d8fd
- Set the nodes' role:
$ oc label node/10.**********.177 node-role.kubernetes.io/sdi=""
$ oc label node/10.**********.182 node-role.kubernetes.io/sdi=""
$ oc label node/10.**********.185 node-role.kubernetes.io/sdi=""
- Check the nodes:
oc get nodes
Example output:
NAME               STATUS   ROLES               AGE   VERSION
10.**********.177  Ready    master,sdi,worker   1d    v1.21.8+ed4d8fd
10.**********.182  Ready    master,sdi,worker   1d    v1.21.8+ed4d8fd
10.**********.185  Ready    master,sdi,worker   1d    v1.21.8+ed4d8fd
Note that the sdi role has been added.
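With more worker nodes, the labeling step can be scripted. The sketch below only prints the oc label command for each node (a dry run); the masked node names are taken from the example output above, so substitute your own. Drop the echo to actually apply the labels.

```shell
#!/bin/sh
# Worker nodes that should receive the sdi role (masked example names; use your own).
NODES="10.**********.177 10.**********.182 10.**********.185"

# Dry run: print one oc label command per node.
# Remove the `echo` to actually label the nodes.
for node in $NODES; do
  echo oc label "node/$node" node-role.kubernetes.io/sdi=""
done
```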
Creating the related projects
For deployment of SAP applications in cloud-based environments, SAP provides the Software Lifecycle Container Bridge 1.0 tool with the Maintenance Planner. In the following, we use the abbreviated tool name “SLC Bridge”. Find more information on SAP's SLC Bridge here. Red Hat's SAP Data Intelligence (SDI) Observer, SLC Bridge and SDI runtime components require separate projects/namespaces for the related pods that will be deployed.
In the RHA, the sample project names (i.e. namespaces) are sdi, sdi-observer, and sap-slcbridge. Following these naming conventions of the Red Hat article, create the related projects as follows.
- Create the projects:
$ oc new-project sdi-observer
$ oc new-project sap-slcbridge
$ oc new-project sdi
Deploying Red Hat's SAP Data Intelligence (SDI) Observer
SAP Data Intelligence (SDI) Observer monitors SDI and SLC Bridge namespaces and applies changes to SDI deployments to allow SDI to run on OpenShift.
The projects' namespaces and Container Registry names used in the following steps are the same as those used in previous steps.
- SDI Observer needs a secret with credentials for registry.redhat.io. Follow section 4.2.1. Prerequisites for Connected OpenShift Cluster in the RHA and save your rht-registry-secret.yaml in the ~/sap/install directory. This YAML file is required to automatically set the respective parameters below.
- Get information about Red Hat's SDI Observer installation script. Review section 4.2.3. Instantiation of Observer's Template in the RHA to confirm that the deployment instructions and the source URL are valid.
- Download the installation script:
curl -O https://raw.githubusercontent.com/redhat-sap/sap-data-intelligence/master/observer/run-observer-template.sh
- Edit the downloaded script file in your favorite editor; in particular, mind the following parameters:
FLAVOUR=ubi-build
REDHAT_REGISTRY_SECRET_PATH="$HOME/sap/install/rht-registry-secret.yaml"
NAMESPACE=sdi-observer
SDI_NAMESPACE=sdi
SLCB_NAMESPACE=sap-slcbridge
SDI_NODE_SELECTOR="node-role.kubernetes.io/sdi="
- Save the script.
- Deploy the SAP Data Intelligence Observer in the sdi-observer namespace, i.e. run the script using bash:
bash ./run-observer-template.sh
In chapter 4.3 Managing SDI Observer of the RHA you will learn how to review and update SDI Observer's current configuration.
- Get information about Red Hat's SDI node-configurator. The worker nodes need to be configured to grant proper execution of SAP Data Intelligence; see also the documentation at Red Hat's SDI Node Configurator on GitHub.
- First, add the privileged security context constraint to the sdi-node-configurator service account:
oc adm policy add-scc-to-user -n sdi-observer privileged -z sdi-node-configurator
- Copy the template from GitHub:
curl -O https://raw.githubusercontent.com/redhat-sap/sap-data-intelligence/master/node-configurator/ocp-template.json
- Create the objects directly from the downloaded template:
oc process NAMESPACE=sdi-observer -f ./ocp-template.json | oc create -f -
Preparing IBM Cloud Object Storage for SAP Data Intelligence
For production environments, storage for backup & restore as well as for Semantic Data Lake (SDL) connections must be set up. Follow these instructions to set up IBM Cloud Object Storage and to define the parameters that will be handed over to the installation dialog.
For ad hoc, short-term test and evaluation environments, you may skip this topic and go directly to the Installing the Software Lifecycle Container Bridge (SLCB) section.
Before you begin
Review Getting started with IBM Cloud Object Storage for general information on IBM Cloud Object Storage.
Provisioning Object Storage
Use the steps in Provision storage to provision your Object Storage.
The service instance name for the following example is sdi_cos_k8. Choose a name that fits your needs when creating your service instance.
Creating the bucket and the directory
- Use the steps under Creating some buckets to store your data.
- Choose Regional as your level of resiliency and select the same Location where your Red Hat OpenShift on IBM Cloud cluster is deployed. For this example, the location is eu-de.
- Select the Storage Class of Standard that will meet your performance needs. The bucket name in this example is sdi-cos-bucket.
- Create the directory by uploading an empty folder from your desktop to the bucket. In the console, navigate to your bucket: select Resource List > Storage > sdi_cos_k8 > sdi-cos-bucket and click Upload.
- Select the empty folder and name it checkpoints. The first time you upload, you may be required to install the free tool Aspera Connect. Another option is to create the directory by using an S3-compatible tool; using such a tool is not covered in this topic.
Creating the service instance and credentials for accessing the bucket
- Follow the steps under Service credentials to create the service credentials for accessing the bucket; these are handed over to the SAP Data Intelligence installation script. Make sure that you select the Writer role and click Include HMAC Credential. The credential name in this example is sdiOScred. Choose a name that fits your needs when creating your service credentials.
- After the service credentials have been created, click View credentials and note the values of access_key_id and secret_access_key. You will need them during the SDI installation dialog later. See the following example:
...
"cos_hmac_keys": {
    "access_key_id": "383**************************cf3",
    "secret_access_key": "a24******************************************0f9"
},
- Use Endpoints and storage locations to find the S3 host that matches the location where your bucket was created. In this example, it is s3.eu-de.cloud-object-storage.appdomain.cloud. In a production environment, you should use the private endpoint.
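The endpoint host names follow a regional pattern. The sketch below, assuming the documented IBM Cloud Object Storage endpoint scheme, derives the public and the private (direct) regional endpoint from the region code:

```shell
#!/bin/sh
# Region code of the bucket location (eu-de in this example).
REGION="eu-de"

# Regional endpoint pattern: public vs. private (direct) network path.
PUBLIC_ENDPOINT="s3.${REGION}.cloud-object-storage.appdomain.cloud"
PRIVATE_ENDPOINT="s3.direct.${REGION}.cloud-object-storage.appdomain.cloud"

echo "public:  https://${PUBLIC_ENDPOINT}"
echo "private: https://${PRIVATE_ENDPOINT}"
```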
Installing the Software Lifecycle Container Bridge (SLCB)
Go directly to the RHA instructions in 5.1 Install Software Lifecycle Container Bridge (SLCB), and see the next section for the IBM Cloud-specific parameters.
You need to copy the downloaded SLCB executable (SLCB01_##-70003322.EXE) to your working directory $HOME/sap/install and rename it to slcb. Note that ## is the current version number.
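The copy-and-rename step can use a glob so that the version number does not have to be typed out. A small sketch, demonstrated in a temporary directory with a dummy file standing in for the real download (the version number 00 below is arbitrary):

```shell
#!/bin/sh
# Demonstrated in a temporary directory; in practice run the mv/chmod
# in your working directory $HOME/sap/install.
cd "$(mktemp -d)"
touch SLCB01_00-70003322.EXE   # dummy stand-in for the downloaded executable

# The glob matches any version number, so the rename works for future versions.
mv SLCB01_*-70003322.EXE slcb
chmod +x slcb
```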
Locating the Installation Dialog parameters for SLCB
In order to run SLCB for your SAP DI installation, several parameters need to be supplied. One of these parameters is the address of the Container Image Repository. You can take advantage of the Red Hat® OpenShift® on IBM Cloud® Container Registry. The address consists of the endpoint URL for the registry and the namespace that you created during the setup of the jump host, sap_di_cr; see Creating a new Container Registry namespace.
- To find the endpoint URL of the registry that you are currently targeting, run the ibmcloud cr api command:
ibmcloud cr api
Example output:
Registry API endpoint   https://de.icr.io/api
- In this example, the domain is icr.io.
- The region code precedes the domain; here it is de.
- Finally, the address of the Container Image Repository is de.icr.io/sap_di_cr.
- Run the SLCB installation process:
./slcb init
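The derivation of the repository address above can be scripted with plain shell parameter expansion. A minimal sketch, using the example endpoint from the ibmcloud cr api output and the sap_di_cr namespace:

```shell
#!/bin/sh
# Example endpoint as reported by `ibmcloud cr api`.
API_ENDPOINT="https://de.icr.io/api"

# Strip the scheme and the /api suffix to obtain the registry host,
# then append the Container Registry namespace created on the jump host.
REGISTRY_HOST="${API_ENDPOINT#https://}"   # de.icr.io/api
REGISTRY_HOST="${REGISTRY_HOST%/api}"      # de.icr.io
REPOSITORY="${REGISTRY_HOST}/sap_di_cr"

echo "$REPOSITORY"
```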
During installation of the SLCB, you're prompted to enter the following parameters:

| Parameter | Value |
|---|---|
| Address of the Container Image Repo: | de.icr.io/sap_di_cr |
| S-User Name: | Sxxxxxxxxxx |
| S-User Password: | xxxxxxxx |
| New Technical User - xxxxx-#custnr#: | tUser |
| Path to the "kubeconfig" file: | ~/.kube/config |
| Expert mode: | 2 |
| Kubernetes namespace for the SLC Bridge: | sap-slcbridge |
| Name of the admin user for the SLCB Base: | admin |
| Password of the administrator user admin: | xxxxxxx (IMPORTANT: regard the password constraints below) |
| NodePort: | 2 |
| Proxy Settings: | no |
| noFeedback: | 3 |
Password constraints:
- It must consist of at least 8 characters
- It must contain at least one lower case, one upper case, one numerical, and one special character
- The allowed special characters are . @ # $ % * + _ ? ! -
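These constraints can be checked before starting the dialog. Below is a small sketch (not part of the SLCB tooling) that tests a candidate password against the rules above; note that it only checks for the presence of at least one allowed special character, not for the absence of disallowed ones.

```shell
#!/bin/sh
# Returns 0 if the candidate password satisfies the SLCB constraints above.
check_password() {
  pw=$1
  [ "${#pw}" -ge 8 ] || return 1                        # at least 8 characters
  case $pw in *[a-z]*) ;; *) return 1 ;; esac           # one lower case
  case $pw in *[A-Z]*) ;; *) return 1 ;; esac           # one upper case
  case $pw in *[0-9]*) ;; *) return 1 ;; esac           # one numerical
  case $pw in *[.@#\$%*+_?!-]*) ;; *) return 1 ;; esac  # one allowed special character
}

check_password 'Adm1n.Pass' && echo "password accepted"
```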
Note the hostname and port of the SLCB service by using the following commands:
- Get the fully qualified HOSTNAME that you will need later:
ibmcloud oc cluster get -c rv-sap-di-cl | grep -i subdomain
- Get the port number PORT that you will need later:
oc get svc -n sap-slcbridge slcbridgebase-service -o jsonpath=$'{.spec.ports[0].nodePort}\n'
This URL will be required later to launch the SDI installation: https://HOSTNAME:PORT/docs/index.html
Installing SAP Data Intelligence by running the SAP Maintenance Planner
Review SAP documentation about the installation setup - Install SAP Data Intelligence with SLC Bridge.
Before installing SAP Data Intelligence, you need to review the Red Hat article and do these steps: as described there, execute all oc adm policy add-scc-to-* commands, then start the SAP Maintenance Planner.
Launch SAP Maintenance Planner (MP) in a browser and click Plan a New System.
- Click Plan
- Select CONTAINER BASED
- Select SAP DATA INTELLIGENCE
- Select SAP DATA INTELLIGENCE 3
- Select 3.2 11/2021 from the drop-down box
- Check DI - Platform full
- Click Confirm Selection then Next
- Select Linux on x86_64 64bit
- Click Confirm Selection then Next
- Click Execute Plan
- Enter the HOSTNAME that you found above into the Fully Qualified Domain Name field
- Enter the PORT that you found above into the Port field
- Click Next
Before you click Deploy, you must open the previously noted URL https://HOSTNAME:PORT/docs/index.html in a browser and log in with the SLCB admin user credentials that you provided during the ./slcb init installation process.
- After successful login, click Deploy, then Next
Now, select the link in the tools column, which launches the SAP Data Intelligence installer in a separate browser window. During installation of SAP Data Intelligence, you're prompted to provide the following parameters:
| Parameter | Value |
|---|---|
| When the installer is called for the first time, it downloads SLCB images from SAP and therefore prompts you for your credentials | |
| S-User Name: | Sxxxxxxxx |
| S-User Password: | xxxxxxxx |
| Then select Install and DI Platform Full and click Next two times | |
| Kubernetes Namespace: | sdi |
| Installation Type: | advanced |
| Restore from Backup: | no |
| Address of the Container Image Repository: | de.icr.io/sap_di_cr |
| You may need to enter your S-User credentials again | |
| S-User Name: | Sxxxxxxxx |
| S-User Password: | xxxxxxxx |
| SAP DI System Tenant Administrator Password: | xxxxxxxx (you have specified this password before) |
| SAP DI Initial Tenant Name: | default |
| SAP DI Initial Tenant Administrator Username: | default-adm |
| Specify a password for "default" user of "default" tenant: | xxxxxxx |
| Proxy: | no |
| Backup: | confirm that SAP Note 2918288 has been read |
| Backup, restore and vora storage: | s3 compatible |
| Access Key: | xxxxxxxxxxxxxxxxxxxxxx (see above) |
| Secret Access Key: | xxxxxxxxxxxxxxxxxxxxxxx (see above) |
| Endpoint: | https://s3.direct.eu-de.cloud-object-storage.appdomain.cloud |
| S3 bucket and directory: | sdi-cos-bucket/checkpoints |
| Timeout for checkpoint store: | 180 |
| Checkpoint store validation: | yes |
| Disable checksum: | yes |
| Disable certificate validation: | yes |
| Backup Schedule (Cron Expression): | 0 0 * * * (daily at midnight) |
| Configure storage classes for ReadWriteOnce PersistentVolumes: | yes |
| Define default storage class: | ibmc-block-bronze |
| Default Storage Class: | ibmc-block-bronze |
| System Management Storage Class: | ibmc-block-bronze |
| Dlog Storage Class: | ibmc-block-bronze |
| Disk Storage Class: | ibmc-block-bronze |
| SAP HANA Storage Class: | ibmc-block-bronze |
| SAP Data Intelligence Diagnostics Storage Class: | ibmc-block-bronze |
| Different log path: | no |
| Kaniko: | yes |
| Different registry: | no |
| Load NFS: | no |
| Network policies: | no |
| Timeout: | 3600 |
| Additional parameters: | -e hana.memoryRequest=7Gi -e storageGateway.replicas=2 -e vsystem.vRep.exportsMask=true |
| Cluster Name: | sap-di32-cluster |
| S-User Name: | Sxxxxxxxx |
| S-User Password: | xxxxxxxx |
The installation and the initialization of all pods may take a while to complete.
- You may want to watch how the pods get initialized and start running:
watch oc get -n sdi pods -o wide