Logging for clusters
Red Hat® OpenShift® on IBM Cloud® clusters include built-in tools to help you manage cluster and app logs for a single cluster. You can also set up IBM Cloud tools for multi-cluster analysis and other use cases, such as the IBM Cloud Kubernetes Service cluster add-ons: IBM Log Analysis and IBM Cloud Monitoring.
Understanding options for logging
To help understand when to use the built-in Red Hat OpenShift tools or IBM Cloud integrations, review the following information.
- IBM Cloud Logs
    - Customizable user interface for live streaming of log tailing, real-time troubleshooting, issue alerts, and log archiving.
    - Quick integration with the cluster through a script.
    - Aggregated logs across clusters and cloud providers.
    - Historical access to logs, based on the plan that you choose.
    - Highly available, scalable, and compliant with industry security standards.
    - Integrated with IBM Cloud IAM for user access management.
    - View cluster management events that are generated by the Red Hat OpenShift on IBM Cloud API. To access these logs, provision an instance of IBM Cloud Logs. For more information about the types of IBM Cloud Kubernetes Service events that you can track, see Activity Tracker events.
- Built-in Red Hat OpenShift logging tools
    - Built-in view of pod logs in the Red Hat OpenShift web console.
    - Built-in pod logs are not configured with persistent storage. To back up the logging data and make it highly available, you must integrate with a cloud database and manage the logs yourself.
    - To set up an OpenShift Container Platform Elasticsearch, Fluentd, and Kibana (EFK) stack, see installing the cluster logging operator. Keep in mind that your worker nodes must have at least 4 cores and 32 GB memory to run the cluster logging stack.
- Built-in Red Hat OpenShift audit logging tools
    - API audit logging to monitor user-initiated activities is currently not supported.
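To provision the IBM Cloud Logs instance from the CLI instead of the console, you can use the resource controller command. This is a sketch only: the service name (`logs`), plan (`standard`), region, and instance name shown here are assumptions, so verify the available plans and regions in the IBM Cloud catalog for your account.

```shell
# Create an IBM Cloud Logs service instance.
# The instance name, plan, and region are example values; replace them with your own.
ibmcloud resource service-instance-create my-cloud-logs logs standard us-south
```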
Forwarding cluster and app logs to IBM Cloud Logs
The following steps are deprecated. The observability CLI plug-in (`ibmcloud ob`) and the `v2/observe` endpoints are deprecated, and support ends on 28 March 2025. You can now manage your logging and monitoring integrations from the console or through the Helm charts. For the latest steps, see Managing the Logging agent for Red Hat OpenShift on IBM Cloud clusters or Managing the Logging agent for IBM Cloud Kubernetes Service clusters.
Using the cluster logging operator
To deploy the OpenShift Container Platform cluster logging operator and stack on your Red Hat OpenShift on IBM Cloud cluster, see the Red Hat OpenShift documentation. Additionally, you must update the cluster logging instance to use an IBM Cloud Block Storage storage class.
- Prepare your worker pool to run the operator.
    - Create a VPC or classic worker pool with a flavor of at least 4 cores and 32 GB memory and at least 3 worker nodes.
    - Label the worker pool.
    - Taint the worker pool so that other workloads can't run on it.
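    The label and taint steps can be sketched with the IBM Cloud CLI. The cluster and worker pool names below are placeholders, and the worker pool is assumed to already exist with the required flavor and size; adjust the values for your environment.

    ```shell
    # Placeholder names; replace with your own cluster and worker pool.
    CLUSTER=mycluster
    POOL=clo-efk-pool

    # Label the worker pool so that the logging pods can target it with a node selector.
    ibmcloud oc worker-pool label set --cluster "$CLUSTER" --worker-pool "$POOL" --label logging=clo-efk

    # Taint the worker pool so that workloads without a matching toleration are not scheduled on it.
    ibmcloud oc worker-pool taint set --cluster "$CLUSTER" --worker-pool "$POOL" --taint app=clo-efk:NoExecute
    ```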
- From the Red Hat OpenShift web console Administrator perspective, click Operators > Installed Operators.
- Click Cluster Logging.
- In the Provided APIs section, on the Cluster Logging tile, click Create Instance.
- Modify the configuration YAML to change the storage class for the Elasticsearch log storage from `gp2` to the storage class that matches your cluster's infrastructure provider.
    - Classic clusters: `ibmc-block-gold`
    - VPC clusters: `ibmc-vpc-block-10iops-tier`

    ```yaml
    ...
      elasticsearch:
        nodeCount: 3
        redundancyPolicy: SingleRedundancy
        storage:
          storageClassName: ibmc-block-gold # or ibmc-vpc-block-10iops-tier for VPC clusters
          size: 200G
    ...
    ```
- Modify the configuration YAML to include the node selector and toleration for the worker pool label and taint that you previously created. For more information and examples, see the following Red Hat OpenShift documents. The examples use a label and toleration of `logging: clo-efk`.
    - Node selector. Add the node selector to the Elasticsearch (`logStore`), Kibana (`visualization`), and Fluentd (`collection.logs`) pods.

      ```yaml
      spec:
        logStore:
          elasticsearch:
            nodeSelector:
              logging: clo-efk
        ...
        visualization:
          kibana:
            nodeSelector:
              logging: clo-efk
        ...
        collection:
          logs:
            fluentd:
              nodeSelector:
                logging: clo-efk
      ```

    - Toleration. Add the toleration to the Elasticsearch (`logStore`), Kibana (`visualization`), and Fluentd (`collection.logs`) pods. Note that a toleration that specifies a value must use the `Equal` operator.

      ```yaml
      spec:
        logStore:
          elasticsearch:
            tolerations:
            - key: app
              value: clo-efk
              operator: "Equal"
              effect: "NoExecute"
        ...
        visualization:
          kibana:
            tolerations:
            - key: app
              value: clo-efk
              operator: "Equal"
              effect: "NoExecute"
        ...
        collection:
          logs:
            fluentd:
              tolerations:
              - key: app
                value: clo-efk
                operator: "Equal"
                effect: "NoExecute"
      ```
- Click Create.
- Verify that the operator, Elasticsearch, Fluentd, and Kibana pods are all in a Running state.
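    One way to check from the CLI, assuming the default `openshift-logging` project and a ClusterLogging resource named `instance` (the names that the Red Hat documentation uses):

    ```shell
    # List the cluster logging pods and their states; all pods should
    # eventually report Running.
    oc get pods -n openshift-logging

    # Inspect the status that the cluster logging operator reports for the
    # ClusterLogging resource, including Elasticsearch cluster health.
    oc get clusterlogging instance -n openshift-logging -o yaml
    ```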