Logging for clusters
For cluster and app logs, Red Hat® OpenShift® on IBM Cloud® clusters include built-in tools to help you manage the health of a single cluster instance. You can also set up IBM Cloud tools for multi-cluster analysis and other use cases, such as the IBM Cloud Kubernetes Service cluster add-ons IBM Log Analysis and IBM Cloud Monitoring.
Understanding options for logging
To help understand when to use the built-in Red Hat OpenShift tools or IBM Cloud integrations, review the following information.
- IBM Log Analysis
- Customizable user interface for live streaming of log tailing, real-time troubleshooting issue alerts, and log archiving.
- Quick integration with the cluster via a script.
- Aggregated logs across clusters and cloud providers.
- Historical access to logs that is based on the plan you choose.
- Highly available, scalable, and compliant with industry security standards.
- Integrated with IBM Cloud IAM for user access management.
- Flexible plans, including a free Lite option.
To get started, see Forwarding cluster and app logs to IBM Log Analysis.
- Built-in Red Hat OpenShift logging tools
- Built-in view of pod logs in the Red Hat OpenShift web console.
- Built-in pod logs are not configured with persistent storage. You must integrate with a cloud database to back up the logging data and make it highly available, and you must manage the logs yourself.
To set up an OpenShift Container Platform Elasticsearch, Fluentd, and Kibana (EFK) stack, see installing the cluster logging operator. Keep in mind that your worker nodes must have at least 4 cores and 32 GB memory to run the cluster logging stack.
- Service logs: IBM Cloud® Activity Tracker
- Use IBM Cloud Activity Tracker to view cluster management events that are generated by the Red Hat OpenShift on IBM Cloud API. To access these logs, provision an instance of IBM Cloud Activity Tracker. For more information about the types of IBM Cloud Kubernetes Service events that you can track, see Activity Tracker events.
- API server logs: IBM Log Analysis
- Use IBM Log Analysis to collect and search your cluster's API server logs, with the same capabilities that are listed for cluster and app logs.
- Built-in Red Hat OpenShift audit logging tools
- API audit logging to monitor user-initiated activities is currently not supported.
Forwarding cluster and app logs to IBM Log Analysis
Use the Red Hat OpenShift on IBM Cloud observability plug-in to create a logging configuration for IBM Log Analysis in your cluster, and use this logging configuration to automatically collect and forward pod logs to IBM Log Analysis.
Considerations for using the Red Hat OpenShift on IBM Cloud observability plug-in:
- You can have only one logging configuration for IBM Log Analysis in your cluster at a time. If you want to send logs to a different IBM Log Analysis service instance, use the `ibmcloud ob logging config replace` command.
- Red Hat OpenShift clusters in Satellite can't currently use the Red Hat OpenShift on IBM Cloud console or the observability plug-in CLI to enable logging. You must manually deploy logging agents to your cluster to forward logs to Log Analysis.
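For Satellite clusters, manual deployment means creating the agent resources in the cluster yourself. The following daemon set is a minimal, illustrative sketch only; the `logdna/logdna-agent` image, the secret name `logdna-agent-key`, and the `us-south` ingestion endpoint are assumptions for the example, not values from this document, and a real deployment also needs a service account and RBAC.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logdna-agent
  namespace: ibm-observe
spec:
  selector:
    matchLabels:
      app: logdna-agent
  template:
    metadata:
      labels:
        app: logdna-agent
    spec:
      containers:
      - name: logdna-agent
        image: logdna/logdna-agent:latest     # assumed public agent image
        env:
        - name: LOGDNA_AGENT_KEY              # ingestion key read from a secret
          valueFrom:
            secretKeyRef:
              name: logdna-agent-key          # assumed secret name
              key: logdna-agent-key
        - name: LDLOGHOST                     # assumed regional ingestion endpoint
          value: logs.us-south.logging.cloud.ibm.com
        volumeMounts:
        - name: varlog
          mountPath: /var/log                 # the directory the agent collects logs from
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```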
- If you created a Log Analysis configuration in your cluster without using the Red Hat OpenShift on IBM Cloud observability plug-in, you can use the `ibmcloud ob logging agent discover` command to make the configuration visible to the plug-in. Then, you can use the observability plug-in commands and functionality in the IBM Cloud console to manage the configuration.
Before you begin
- Verify that you are assigned the Editor platform access role and the Manager service access role for IBM Log Analysis.
- Verify that you are assigned the Administrator platform access role and the Manager service access role for all Kubernetes namespaces in IBM Cloud Kubernetes Service to create the logging configuration. To view a logging configuration or launch the Log Analysis dashboard after the logging configuration is created, users must be assigned the Administrator platform access role and the Manager service access role for the `ibm-observe` Kubernetes namespace in IBM Cloud Kubernetes Service.
- If you want to use the CLI to set up the logging configuration, install the IBM Cloud CLI and the observability plug-in (`ibmcloud plugin install observe-service`).
To set up a logging configuration for your cluster:
- Create an IBM Log Analysis service instance and note the name of the instance. The service instance must belong to the same IBM Cloud account where you created your cluster, but it can be in a different resource group and IBM Cloud region than your cluster.
- Set up a logging configuration for your cluster. When you create the logging configuration, a Red Hat OpenShift project `ibm-observe` is created, and a Log Analysis agent is deployed as a daemon set to all worker nodes in your cluster. This agent collects logs with the extension `*.log` and extensionless files that are stored in the `/var/log` directory of your pod from all projects, including `kube-system`. The agent then forwards the logs to the IBM Log Analysis service.
- From the console
  - From the Red Hat OpenShift clusters console, select the cluster for which you want to create a Log Analysis configuration.
  - On the cluster Overview page, click Connect.
  - Select the region and the IBM Log Analysis service instance that you created earlier, and click Connect.
- From the CLI
  - Create the Log Analysis configuration. When you create the Log Analysis configuration, the ingestion key that was last added is retrieved automatically. If you want to use a different ingestion key, add the `--logdna-ingestion-key <ingestion_key>` option to the command. To use a different ingestion key after you created your logging configuration, use the `ibmcloud ob logging config replace` command.
    ibmcloud ob logging config create --cluster <cluster_name_or_ID> --instance <Log_Analysis_instance_name_or_ID>
Example output
Creating configuration... OK
  - Verify that the logging configuration was added to your cluster.
    ibmcloud ob logging config list --cluster <cluster_name_or_ID>
    Example output
    Listing configurations...
    OK
    Instance Name                IBM Cloud Log Analysis-opm
    Instance ID                  1a111a1a-1111-11a1-a1aa-aaa11111a11a
    CRN                          crn:v1:prod:public:logdna:us-south:a/a11111a1aaaaa11a111aa11a1aa1111a:1a111a1a-1111-11a1-a1aa-aaa11111a11a::
- Optional: Verify that the Log Analysis agent was set up successfully.
  - If you used the console to create the Log Analysis configuration, log in to your cluster. For more information, see Access your Red Hat OpenShift cluster.
  - Verify that the daemon set for the Log Analysis agent was created and all instances are listed as `AVAILABLE`.
    oc get daemonsets -n ibm-observe
    Example output
    NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    logdna-agent   9         9         9       9            9           <none>          14m
The number of daemon set instances that are deployed equals the number of worker nodes in your cluster.
  - Review the ConfigMap that was created for your Log Analysis agent.
    oc describe configmap -n ibm-observe
- Access the logs for your pods from the Log Analysis dashboard.
- From the Red Hat OpenShift clusters console, select the cluster that you configured.
- On the cluster Overview page, click Launch. The Log Analysis dashboard opens.
- Review the pod logs that the Log Analysis agent collected from your cluster. It might take a few minutes for your first logs to show.
- Review how you can search and filter logs in the Log Analysis dashboard.
Using the cluster logging operator
To deploy the OpenShift Container Platform cluster logging operator and stack on your Red Hat OpenShift on IBM Cloud cluster, see the Red Hat OpenShift documentation. Additionally, you must update the cluster logging instance to use an IBM Cloud Block Storage storage class.
- Prepare your worker pool to run the operator.
  - Create a VPC or classic worker pool with at least 3 worker nodes, where each worker node has a flavor of at least 4 cores and 32 GB memory.
- Label the worker pool.
- Taint the worker pool so that other workloads can't run on the worker pool.
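As a sketch of what the prepared worker pool looks like, each node's metadata after labeling and tainting might resemble the following. The node name is hypothetical; the label `logging: clo-efk` and the taint with key `app` and value `clo-efk` mirror the values used in the configuration examples later in this topic.

```yaml
# Illustrative worker-node state after labeling and tainting the worker pool.
apiVersion: v1
kind: Node
metadata:
  name: 10.240.0.4            # hypothetical worker node name
  labels:
    logging: clo-efk          # label that the nodeSelector examples target
spec:
  taints:
  - key: app                  # taint that the toleration examples tolerate
    value: clo-efk
    effect: NoExecute         # evicts workloads without a matching toleration
```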
- From the Red Hat OpenShift web console Administrator perspective, click Operators > Installed Operators.
- Click Cluster Logging.
- In the Provided APIs section, on the Cluster Logging tile, click Create Instance.
- Modify the configuration YAML to change the storage class for the Elasticsearch log storage from `gp2` to one of the following storage classes, depending on your cluster infrastructure provider.
  - Classic clusters: `ibmc-block-gold`
  - VPC clusters: `ibmc-vpc-block-10iops-tier`
    ...
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
      storage:
        storageClassName: ibmc-block-gold # or ibmc-vpc-block-10iops-tier for VPC clusters
        size: 200G
    ...
- Modify the configuration YAML to include the node selector and toleration for the worker pool label and taint that you previously created. For more information and examples, see the following Red Hat OpenShift documents. The examples use the label `logging: clo-efk` and a matching taint with key `app` and value `clo-efk`.
  - Node selector. Add the node selector to the Elasticsearch (`logStore`), Kibana (`visualization`), and Fluentd (`collection.logs`) pods.
    spec:
      logStore:
        elasticsearch:
          nodeSelector:
            logging: clo-efk
      ...
      visualization:
        kibana:
          nodeSelector:
            logging: clo-efk
      ...
      collection:
        logs:
          fluentd:
            nodeSelector:
              logging: clo-efk
  - Toleration. Add the toleration to the Elasticsearch (`logStore`), Kibana (`visualization`), and Fluentd (`collection.logs`) pods. Note that a toleration that specifies a `value` must use the `Equal` operator; the `Exists` operator requires an empty value.
    spec:
      logStore:
        elasticsearch:
          tolerations:
          - key: app
            value: clo-efk
            operator: "Equal"
            effect: "NoExecute"
      ...
      visualization:
        kibana:
          tolerations:
          - key: app
            value: clo-efk
            operator: "Equal"
            effect: "NoExecute"
      ...
      collection:
        logs:
          fluentd:
            tolerations:
            - key: app
              value: clo-efk
              operator: "Equal"
              effect: "NoExecute"
- Click Create.
- Verify that the operator, Elasticsearch, Fluentd, and Kibana pods are all Running.
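Putting the storage, node selector, and toleration changes together, the edited ClusterLogging instance might look like the following sketch. This is illustrative only, assuming a classic cluster with the `ibmc-block-gold` storage class and the `logging: clo-efk` label and `app=clo-efk` taint from the earlier steps; adjust the values for your own worker pool and provider.

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
      storage:
        storageClassName: ibmc-block-gold   # ibmc-vpc-block-10iops-tier for VPC clusters
        size: 200G
      nodeSelector:
        logging: clo-efk
      tolerations:
      - key: app
        value: clo-efk
        operator: "Equal"
        effect: "NoExecute"
  visualization:
    type: kibana
    kibana:
      replicas: 1
      nodeSelector:
        logging: clo-efk
      tolerations:
      - key: app
        value: clo-efk
        operator: "Equal"
        effect: "NoExecute"
  collection:
    logs:
      type: fluentd
      fluentd:
        nodeSelector:
          logging: clo-efk
        tolerations:
        - key: app
          value: clo-efk
          operator: "Equal"
          effect: "NoExecute"
```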