Send IBM Cloud Kubernetes Service log data to IBM Cloud Logs
In this tutorial, you set up IBM Cloud® Kubernetes Service to send logs directly to IBM Cloud Logs. These logs help you troubleshoot issues and improve the health and performance of your Kubernetes clusters and apps.
Goals
In this tutorial, you will:
- Deploy the Logging agent in an existing IBM Cloud Kubernetes Service cluster.
- Verify that log data is flowing to your IBM Cloud Logs instance.
Before you begin
Before you begin, make sure you have the prerequisites.
- Create an IBM Cloud® Kubernetes Service cluster, and provision an instance of IBM Cloud Logs.
- Install the IBM Cloud CLI.
- Install the Kubernetes command-line tool, kubectl.
- Install the JSON CLI processor (jq).
- Install the YAML CLI processor (yq).
An alternative to installing the IBM Cloud CLI, kubectl, jq, and yq is to use the IBM Cloud Shell.
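If you install the tools locally, you can quickly confirm that they are available before you continue. These version checks are an optional sanity check, not part of the tutorial steps:
ibmcloud --version
kubectl version --client
jq --version
yq --version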
Connect to your cluster
Connect to your IBM Cloud Kubernetes Service cluster. Connecting to the cluster enables kubectl commands to be run on the cluster.
- Log in to your IBM Cloud account. Include the --sso option if you are using a federated ID.
  ibmcloud login
- Install the IBM Cloud Kubernetes Service CLI plug-in.
  ibmcloud plugin install ks
- List the available clusters and make note of the cluster that you want to connect to.
  ibmcloud ks clusters
- Connect to your cluster. Replace <cluster_name> with the name of your cluster.
  ibmcloud ks cluster config --cluster <cluster_name>
- Verify that you are connected to your cluster and can run kubectl commands by listing all pods that are running in all namespaces.
  kubectl get pods --all-namespaces
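As an additional optional check, you can confirm which cluster context kubectl is currently targeting. This step is not part of the official tutorial flow:
kubectl config current-context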
Creating your API key
Before you provision the Logging agent as a daemonset, you need an IAM API key and a logging ingestion endpoint. The IBM Cloud CLI is used to obtain this information.
First, create a service ID and obtain an API key.
- Create a service ID by running the following command.
  ibmcloud iam service-id-create kubernetes-logs-agent --description "Service ID for sending logs from IKS"
- Grant the Sender role for IBM Cloud Logs to the created service ID by running the following command.
  ibmcloud iam service-policy-create kubernetes-logs-agent --service-name logs --roles Sender
- Create an IAM API key by running the following command. You can customize the key name (kubernetes-logs-agent-apikey) and description (--description) if needed.
  ibmcloud iam service-api-key-create kubernetes-logs-agent-apikey kubernetes-logs-agent --description "API key for sending logs to the IBM Cloud Logs service"
  The API key is returned in the output following API Key:
  ID            ApiKey-xxxxxxxx-b815-46c7-bc9f-516115bc31c6
  Name          kubernetes-logs-agent-apikey
  Description   API key for sending logs to the IBM Cloud Logs service
  Created At    2024-09-18T17:17+0000
  API Key       <apikey is displayed here>
  Locked        false
Note:
- Each time that you create an API key, a new IAM secret is generated.
- Make sure that you securely store the API key, since it contains sensitive information.
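One optional way to keep the key out of your shell history is to read it into an environment variable for the current session and reference the variable in later commands. The variable name LOGS_AGENT_APIKEY is only an example:
read -s LOGS_AGENT_APIKEY   # paste the API key at the hidden prompt, then press Enter
export LOGS_AGENT_APIKEY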
Determining your logging endpoint
The second piece of information that is needed is the ingestion endpoint for your IBM Cloud Logs instance. Use this command to retrieve the URL:
ibmcloud resource service-instances --service-name logs --long --output JSON | jq '[.[] | {name: .name, id: .id, region: .region_id, ingestion_endpoint: .extensions.external_ingress}]'
The endpoint is similar to:
ingestion_endpoint: 3a622101-7521-4002-bf91-8c26e17eedcf.ingress.eu-de.logs.cloud.ibm.com
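If your account has only one IBM Cloud Logs instance, you can also capture the ingestion hostname directly into a shell variable for later use. The variable name and the single-instance assumption are illustrative:
INGESTION_HOST=$(ibmcloud resource service-instances --service-name logs --long --output JSON | jq -r '.[0].extensions.external_ingress')
echo $INGESTION_HOST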
Creating the agent YAML file
In this step, create the YAML file that is used to configure the agent daemonset.
- Create a file named logs-values.yaml with the following content. This file contains the configurations that are specific to your deployment.
  metadata:
    name: "logs-agent"
  image:
    version: "1.4.0" # required
  clusterName: "" # Enter the name of your cluster. This information is used to improve the metadata and help with your filtering.
  env:
    # ingestionHost is a required field. For example:
    # ingestionHost: "<logs instance>.ingress.us-east.logs.cloud.ibm.com"
    ingestionHost: "" # required
    # If you are using private CSE proxy, then use port number "3443"
    # If you are using private VPE Gateway, then use port number "443"
    # If you are using the public endpoint, then use port number "443"
    ingestionPort: "" # required
    iamMode: "TrustedProfile"
    # trustedProfileID - trusted profile id - required for iam trusted profile mode
    trustedProfileID: "Profile-yyyyyyyy-xxxx-xxxx-yyyy-zzzzzzzzzzzz" # required if iamMode is set to TrustedProfile
- Update the fields in the YAML file with values specific to your environment (an illustrative, filled-in sketch is shown after this list).
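As an illustration only, a filled-in logs-values.yaml for the service ID API key path used later in this tutorial might look like the following. The cluster name, version, and ingestion host are placeholder values, and iamMode is set to IAMAPIKey to match the helm install command in the next section (the API key itself is passed at install time, not stored in this file):
metadata:
  name: "logs-agent"
image:
  version: "1.4.0"
clusterName: "my-iks-cluster"   # example cluster name
env:
  ingestionHost: "3a622101-7521-4002-bf91-8c26e17eedcf.ingress.eu-de.logs.cloud.ibm.com"   # endpoint from the previous section
  ingestionPort: "443"          # public endpoint or private VPE Gateway
  iamMode: "IAMAPIKey"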
Deploying the daemonset
Using the API key, endpoint URL, and YAML file, deploy the agent to your cluster.
- Log in to the Helm registry by running the helm registry login command:
  helm registry login -u iambearer -p $(ibmcloud iam oauth-tokens --output json | jq -r .iam_token | cut -d " " -f2) icr.io
  For more information, see Using Helm charts in Container Registry: Pulling charts from another registry or Helm repository.
- Deploy the agent. If you are using a service ID API key (iamMode = IAMAPIKey), run the following command:
  helm install <install-name> oci://icr.io/ibm/observe/logs-agent-helm --version <chart-version> --values <PATH>/logs-values.yaml -n ibm-observe --create-namespace --set secret.iamAPIKey=<APIKey-value>
  where:
  - <install-name> is the name of the Helm installation (for example, logging-agent).
  - <chart-version> is the version of the Helm chart. The Helm chart version should match the agent image version. For more information, see Helm chart versions.
  - <PATH> is the directory path where the logs-values.yaml file is located.
  - <APIKey-value> is the IAM API key associated with the service ID that you created in the Creating your API key step. A filled-in sketch is shown after this list.
  You can also deploy the agent by using a trusted profile. For more information, see Deploying the Logging agent for Kubernetes clusters using a Helm chart.
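For illustration, a fully substituted command might look like the following. The installation name, chart version, values file path, and the $LOGS_AGENT_APIKEY variable (set earlier, if you chose that approach) are assumptions rather than required values:
helm install logging-agent oci://icr.io/ibm/observe/logs-agent-helm --version 1.4.0 --values ./logs-values.yaml -n ibm-observe --create-namespace --set secret.iamAPIKey=$LOGS_AGENT_APIKEY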
Verify that logs are being sent
To help ensure that your IBM Cloud® Kubernetes Service logs are successfully flowing to IBM Cloud Logs, do the following steps:
- Access the IBM Cloud Logs UI and click Livetail.
- Click Start to watch the logs as they arrive in real time.
You should see a continuous flow of log data from the kube-system application. These logs are coming from your cluster.
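If no logs appear, one quick check (outside the official tutorial steps) is to confirm that the agent daemonset and its pods were created in the namespace used by the helm install command:
kubectl get daemonsets -n ibm-observe
kubectl get pods -n ibm-observe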