Managing the cluster
Preparing your local machine
Ensure that you have the following prerequisites installed and working correctly on your local machine before performing any cluster-management tasks.
Initialize the Helm client:

```sh {: pre}
helm init --client-only
```

Verify that the tools are installed correctly by running the following test commands.

- Test the IBM Cloud Private CLI (`cloudctl`):

  ```sh {: pre}
  cloudctl login -a https://{hostname}:8443 -u {admin_user_id} -p {admin_password}
  ```

  If you are using a load balancer, specify the hostname of the load balancer instead of the hostname of the master node.

- Test Kubernetes (`kubectl`):

  ```sh {: pre}
  kubectl get namespaces
  ```

  If you cannot run the `kubectl` command, see Enabling access to the Kubernetes command-line interface.

- Test Helm (`helm`):

  ```sh {: pre}
  helm version --tls
  ```
Managing user access
After you provision an instance, you can share the URL for the product user interface with other users. However, those users can only log in to the product user interface if you give them access.
If you plan to use SAML for single sign-on (SSO), complete Configuring single sign-on before you add users. If you add users before you configure SSO, you will need to re-add the users with their SAML ID to enable them to use SSO.
- From the web client menu, click Administer > Manage user.
- Click Add user, and specify the user's full name, user name, and email address. Set the user's permissions, and then click Add.
- From the web client menu, select My Instances.
- Find your Knowledge Studio instance, click the more (...) menu, and then choose Manage Access.
- Click Add user.
- Click the user name field to see a list of the people you can add. The users that you added in the previous steps are listed. Select a name, choose their access role, and then click Add.
If you aren't connecting to an existing user registry and enabling single sign-on, then temporary passwords are created for the users you add and are sent to them by way of the email addresses you specified.
Scaling Deployments and StatefulSets
Deployments
| Component | Deployment name | Pod name | Default number of replicas |
|---|---|---|---|
| WKS Front-end | `{release_name}-ibm-watson-ks` | `{release_name}-ibm-watson-ks-xxxxxxxxxx-xxxxx` | 2 |
| SIREG | `{release_name}-sire-training-sireg-{lang}` | `{release_name}-sire-training-sireg-{lang}-xxxxxxxxxx-xxxxx` | 2 |
| SIRE Job queue | `{release_name}-sire-training-jobq` | `{release_name}-sire-training-jobq-xxxxxxxxxx-xxxxx` | 2 |
| SIRE Train Facade | `{release_name}-sire-training-facade` | `{release_name}-sire-training-facade-xxxxxxxxxx-xxxxx` | 2 |
| MMA | `{release_name}-ibm-watson-mma-prod-model-management-api` | `{release_name}-ibm-watson-mma-prod-model-management-api-xxxxxxxxxx-xxxxx` | 2 |
| Watson Add-on | `{release_name}-wcn-addon-addon` | `{release_name}-wcn-addon-addon-xxxxxxxxxx-xxxxx` | 2 |
| PostgreSQL | `{release_name}-ibm-postgresql-proxy` | `{release_name}-ibm-postgresql-proxy-xxxxxxxxxx-xxxxx` | 2 |
| PostgreSQL | `{release_name}-ibm-postgresql-sentinel` | `{release_name}-ibm-postgresql-sentinel-xxxxxxxxxx-xxxxx` | 2 |
- `{release_name}` is the Helm release name of your installation.
- `{lang}` is the language code (`en`, `ar`, `de`, and so on) of the SIREG tokenizer Deployment. A Deployment is created for each language.
To scale the number of replicas of a Deployment up or down, use the `kubectl scale` command.

```sh {: pre}
kubectl scale deployment/{deployment_name} --replicas={count}
```

- `{deployment_name}` is the name of the Deployment that you want to scale, which is one of the Deployment names listed in the table.
- `{count}` is the desired number of replicas.

For example, to scale the WKS Front-end Deployment that is associated with the release `my-release` from 2 to 3 replicas, run the following command.

```sh {: pre}
kubectl scale deployment/my-release-ibm-watson-ks --replicas=3
```
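Because every Deployment name in the table follows the `{release_name}-{component}` pattern, the scale command can be assembled from its parts. A minimal illustrative shell helper (the function name and sample values are made up for this sketch, and it only prints the command rather than running it):

```shell
# Hypothetical helper: assemble a kubectl scale command for a component
# of a given release. It only prints the command; review it before running.
scale_cmd() {
  release="$1"   # Helm release name, e.g. my-release
  component="$2" # Deployment name suffix from the table, e.g. ibm-watson-ks
  count="$3"     # desired number of replicas
  echo "kubectl scale deployment/${release}-${component} --replicas=${count}"
}

# Reproduces the example above: scale the WKS Front-end to 3 replicas.
scale_cmd my-release ibm-watson-ks 3
```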
StatefulSets
| Component | StatefulSet name | Pod name | Default number of replicas |
|---|---|---|---|
| PostgreSQL | `{release_name}-ib-xxxx-keeper` | `{release_name}-ib-xxxx-keeper-0`, `{release_name}-ib-xxxx-keeper-1` | 2 |
| MongoDB | `{release_name}-ib-xxxx-server` | `{release_name}-ib-xxxx-server-0`, `{release_name}-ib-xxxx-server-1` | 2 |
| Minio | `{release_name}-minio` | `{release_name}-minio-0`, `{release_name}-minio-1`, `{release_name}-minio-2`, `{release_name}-minio-3` | 4 |
To scale the number of replicas of a StatefulSet up or down, use the `kubectl scale` command.

```sh {: pre}
kubectl scale statefulset/{statefulset_name} --replicas={count}
```

- `{statefulset_name}` is the name of the StatefulSet that you want to scale, which is one of the StatefulSet names listed in the table.
- `{count}` is the desired number of replicas.

For example, to scale the MongoDB server that is associated with the release `my-release` from 2 to 3 replicas, run the following command.

```sh {: pre}
kubectl scale statefulset/my-release-ib-336f-server --replicas=3
```
Additional persistent volumes are required for scaling up StatefulSets. One persistent volume is consumed per replica.
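Because each replica consumes one persistent volume, the number of extra volumes to provision before a scale-up equals the replica increase. A small illustrative helper (hypothetical, not part of the product):

```shell
# Extra persistent volumes needed when scaling a StatefulSet:
# target replicas minus current replicas (zero when scaling down).
extra_pvs_needed() {
  current="$1"
  target="$2"
  extra=$(( target - current ))
  [ "$extra" -lt 0 ] && extra=0
  echo "$extra"
}

# Scaling MongoDB from the default 2 replicas to 3 needs 1 more volume.
extra_pvs_needed 2 3
```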
Identifying which nodes the product is deployed to
To identify the nodes to which the release is deployed, run the following `kubectl` command.

```sh {: pre}
kubectl get pods -l release={release_name} -o wide
```

`{release_name}` is the Helm release name of your installation.
The `NODE` column of the command output shows the node to which each pod is deployed. Use the tables in the Deployments and StatefulSets sections to determine the pod names of the components.
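If you want only the pod-to-node mapping, the wide output can be filtered with `awk`. A sketch; in practice you would pipe the real `kubectl get pods -l release={release_name} -o wide` output into the function, and the sample embedded below is fabricated for illustration:

```shell
# Print "pod -> node" pairs from `kubectl get pods -o wide` output.
# NODE is the seventh column; NR > 1 skips the header row.
pod_nodes() {
  awk 'NR > 1 { print $1, "->", $7 }'
}

# Fabricated sample output, for illustration only:
pod_nodes <<'EOF'
NAME                                      READY  STATUS   RESTARTS  AGE  IP        NODE      NOMINATED NODE  READINESS GATES
my-release-ibm-watson-ks-6f7c9d-abcde     1/1    Running  0         2d   10.1.2.3  worker-1  <none>          <none>
my-release-sire-training-jobq-5b8d-fghij  1/1    Running  0         2d   10.1.2.4  worker-2  <none>          <none>
EOF
```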
You can check the status of each node with the following command.

```sh {: pre}
kubectl get nodes -l node-role.kubernetes.io/worker=true
```
Viewing logs from the IBM Cloud Pak for Data Logging dashboard
- Make sure an ELK stack is deployed to your cluster. See IBM Cloud Private Logging for more details.
- Log in to the ICP console of your cluster by accessing `https://{cluster_CA_domain}:8443` in your web browser. `{cluster_CA_domain}` is your cluster CA domain name, for example `mycluster.icp`.
- Open Kibana by accessing Side Menu -> Platform -> Logging.
- View and query logs. See Viewing and querying logs for more general information.
  - To see all logs of your installation, query logs with `kubernetes.pod:"{release_name}-*"`. `{release_name}` is the Helm release name of your installation. To identify which component produced a log entry, check the `kubernetes.pod` field to see the pod name, and look up the component by pod name in the tables in the Deployments and StatefulSets sections.
  - To see the logs of a specific Deployment, query logs with `kubernetes.pod:"{deployment_name}-*"`. `{deployment_name}` is the Kubernetes Deployment name of the component whose logs you want to see. See Deployments for the Deployment name of each component.
  - To see the logs of a specific StatefulSet, query logs with `kubernetes.pod:"{statefulset_name}-*"`. `{statefulset_name}` is the Kubernetes StatefulSet name of the component whose logs you want to see. See StatefulSets for the StatefulSet name of each component.
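All three query patterns share the same shape, a name prefix followed by `-*`, so you can generate them consistently. A tiny illustrative helper (the function name is made up for this sketch):

```shell
# Build a Kibana query that matches every pod whose name starts
# with the given prefix (a release, Deployment, or StatefulSet name).
kibana_query() {
  printf 'kubernetes.pod:"%s-*"\n' "$1"
}

kibana_query my-release                # all pods of the release
kibana_query my-release-ibm-watson-ks  # only the WKS Front-end pods
```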