Debugging Calico components
In version 1.29 and later, the Calico operator determines the number of calico-typha pods based on the number of cluster workers and does not consider tainted nodes. If your cluster has fewer than 3 untainted nodes, or is very large with only a small number of untainted nodes, one or more calico-typha pods might be stuck in the Pending state because the pod can't find an untainted node to run on. Usually this doesn't cause an issue as long as at least one calico-typha pod is in the Running state. However, for high availability, it is recommended to have at least two calico-typha pods running at all times. As a best practice, make sure that there are enough untainted nodes to run all the calico-typha pods that the Calico operator creates.
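The untainted-node check described above can be sketched with `kubectl` and `grep`. The sample output below is illustrative (hypothetical node names and taints); in a live cluster you would pipe the `kubectl` command directly instead of using the sample.

```shell
# Count untainted nodes. In a live cluster, replace the sample with:
#   kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
# Sample output (hypothetical node names and taints):
sample='NAME        TAINTS
10.240.0.4  <none>
10.240.0.5  dedicated
10.240.0.6  <none>'
# Nodes with no taints show "<none>" in the TAINTS column.
untainted=$(printf '%s\n' "$sample" | grep -c '<none>')
echo "untainted nodes: $untainted"
```

Compare the count against the number of calico-typha pods that the operator created; if there are fewer untainted nodes than pods, some pods stay Pending.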
You experience issues with Calico components, such as pods that don't deploy, or intermittent networking problems.
Increase the logging level of Calico components to gather more information about the issue.
Increasing the log level for the calico-typha components
Complete the following steps to increase the log level for the calico-typha component.
1. Edit the `calico-typha` deployment.

    For 1.29 and later:

    ```sh
    kubectl edit deploy calico-typha -n calico-system
    ```

    For 1.28 and earlier:

    ```sh
    kubectl edit deploy calico-typha -n kube-system
    ```

2. Change the `TYPHA_LOGSEVERITYSCREEN` environment variable from `info` to `debug`.

    ```yaml
    containers:
      - env:
          - name: TYPHA_LOGSEVERITYSCREEN
            value: debug
    ```

3. Save and close the file to apply the changes and restart the `calico-typha` deployment.
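After the deployment restarts at debug severity, the typha logs include `[DEBUG]` entries. A minimal sketch for spotting them follows; the log lines are illustrative, not captured from a real pod, and the namespace assumes version 1.29 or later.

```shell
# In a live cluster (1.29+; use kube-system for 1.28 and earlier):
#   kubectl logs deploy/calico-typha -n calico-system | grep '\[DEBUG\]'
# Illustrative typha-style log lines (hypothetical):
logs='2024-05-01 12:00:01.000 [INFO][7] startup.go 123: Typha starting
2024-05-01 12:00:02.000 [DEBUG][7] snapshot.go 45: Sending snapshot to client
2024-05-01 12:00:03.000 [DEBUG][7] cache.go 67: Cache updated'
debug_count=$(printf '%s\n' "$logs" | grep -c '\[DEBUG\]')
echo "debug lines: $debug_count"
```

If no `[DEBUG]` lines appear after the rollout completes, re-check that the environment variable change was saved.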
Increasing the log level for the calico-cni components
Complete the following steps to increase the log level for the calico-cni component.
1. Edit the `calico-config` ConfigMap.

    ```sh
    kubectl edit cm -n kube-system calico-config
    ```

2. In `cni_network_config` > `plugins`, change the `log_level` setting to `debug`.

    ```yaml
    cni_network_config: |-
      {
        "name": "k8s-pod-network",
        "cniVersion": "0.3.1",
        "plugins": [
          {
            "type": "calico",
            "log_level": "debug",
    ```

3. Save and close the file. The change does not take effect until the `calico-node` pods are restarted.

4. Restart the `calico-node` pods to apply the changes.

    ```sh
    kubectl rollout restart daemonset/calico-node -n kube-system
    ```

    Example output:

    ```
    daemonset.apps/calico-node restarted
    ```
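To confirm the ConfigMap change without reopening the editor, you can extract the `log_level` value from the CNI network config. This is a sketch: the sample JSON stands in for the ConfigMap data (abbreviated) that the commented `kubectl` command would return.

```shell
# In a live cluster:
#   kubectl get cm calico-config -n kube-system -o jsonpath='{.data.cni_network_config}'
# Sample stands in for the ConfigMap data (abbreviated):
cni_conf='{"name": "k8s-pod-network", "cniVersion": "0.3.1", "plugins": [{"type": "calico", "log_level": "debug"}]}'
# Pull out the log_level key-value pair.
level=$(printf '%s' "$cni_conf" | grep -o '"log_level": *"[a-z]*"')
echo "$level"
```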
Increasing the log level for the calico-node components
Complete the following steps to increase the log level for the calico-node component.
1. Edit the `calico-node` daemonset.

    For 1.29 and later:

    ```sh
    kubectl edit ds calico-node -n calico-system
    ```

    For 1.28 and earlier:

    ```sh
    kubectl edit ds calico-node -n kube-system
    ```

2. Under the `FELIX_USAGEREPORTINGENABLED` name and value pair (or after any of the `FELIX_*` environment variable name-value pairs), add the following entry.

    ```yaml
    - name: FELIX_LOGSEVERITYSCREEN
      value: Debug
    ```

3. Save the change. After you save your changes, all the pods in the `calico-node` daemonset complete a rolling update that applies the changes. The restart also applies any logging-level changes that you made to the `kube-system/calico-config` ConfigMap for the `calico-cni` component.
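To verify that the Felix environment variable is set on the daemonset, you can list the container's environment. This is a sketch: the sample stands in for the (abbreviated) output of the commented `kubectl` command, and the namespace assumes version 1.29 or later.

```shell
# In a live cluster (1.29+; use kube-system for 1.28 and earlier):
#   kubectl get ds calico-node -n calico-system \
#     -o jsonpath='{range .spec.template.spec.containers[0].env[*]}{.name}={.value}{"\n"}{end}'
# Sample stands in for that output (abbreviated):
env_list='FELIX_USAGEREPORTINGENABLED=false
FELIX_LOGSEVERITYSCREEN=Debug'
# Confirm the log-severity variable is present with the expected value.
felix_level=$(printf '%s\n' "$env_list" | grep '^FELIX_LOGSEVERITYSCREEN=')
echo "$felix_level"
```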
Increasing the log level for the calico-kube-controllers components
Complete the following steps to increase the log level for the calico-kube-controllers component.
1. Edit the `calico-kube-controllers` deployment by running the following command.

    For 1.29 and later:

    ```sh
    kubectl edit deploy calico-kube-controllers -n calico-system
    ```

    For 1.28 and earlier:

    ```sh
    kubectl edit deploy calico-kube-controllers -n kube-system
    ```

2. Under the `DATASTORE_TYPE` name and value pair, add the following entry.

    ```yaml
    - name: LOG_LEVEL
      value: debug
    ```

3. Save the change. The `calico-kube-controllers` pod restarts and applies the changes.
Gathering Calico logs
1. List the pods and nodes in your cluster, and make a note of the name, pod IP address, and worker node of the pod that has the issue.

2. Get the logs for the `calico-node` pod on the worker node where the problem occurred.

    For 1.29 and later:

    ```sh
    kubectl logs calico-node-aaaaa -n calico-system
    ```

    For 1.28 and earlier:

    ```sh
    kubectl logs calico-node-aaaaa -n kube-system
    ```

3. Get the logs for the `calico-kube-controllers` pod.

    For 1.29 and later:

    ```sh
    kubectl logs calico-kube-controllers-11aaa11aa1-a1a1a -n calico-system
    ```

    For 1.28 and earlier:

    ```sh
    kubectl logs calico-kube-controllers-11aaa11aa1-a1a1a -n kube-system
    ```

4. Follow the instructions for Debugging by using kubectl exec to get `/var/log/syslog`, `containerd.log`, `kubelet.log`, and `kern.log` from the worker node.
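Gathered logs can be long, so it often helps to narrow them to warnings and errors before reading. A minimal sketch follows; the pod name is a placeholder (as elsewhere in this topic), and the log lines are illustrative, not real output.

```shell
# In a live cluster (placeholder pod name; 1.29+ namespace shown):
#   kubectl logs calico-node-aaaaa -n calico-system | grep -E '\[(WARNING|ERROR)\]'
# Illustrative calico-node-style log lines (hypothetical):
logs='2024-05-01 12:00:01.000 [INFO][55] felix/summary.go 100: Summarising routes
2024-05-01 12:00:02.000 [WARNING][55] felix/route_table.go 80: Failed to sync routes
2024-05-01 12:00:03.000 [ERROR][55] felix/daemon.go 90: Exiting'
# Count the lines that indicate problems.
problem_count=$(printf '%s\n' "$logs" | grep -cE '\[(WARNING|ERROR)\]')
echo "problem lines: $problem_count"
```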