IBM Cloud Docs
Why does the Ingress status show an ERRADRUH error?

Virtual Private Cloud | Classic infrastructure

You can use the ibmcloud ks ingress status-report ignored-errors add command to add an error to the ignored-errors list. Ignored errors still appear in the output of the ibmcloud ks ingress status-report get command, but they are not counted when the overall Ingress status is calculated.
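
For example, if you already reviewed the ERRADRUH error and want to exclude it from the overall status calculation, you can add it to the ignored-errors list. The following is a minimal sketch; replace <cluster_name_or_ID> with your own value, and run the command with --help to confirm the flags that your CLI version supports.

    ibmcloud ks ingress status-report ignored-errors add --cluster <cluster_name_or_ID> --code ERRADRUH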

When you check the status of your cluster's Ingress components by running the ibmcloud ks ingress status-report get command, you see an error similar to the following example.

One or more ALB pod is not in running state (ERRADRUH).

One or more ALBs have replicas that are not running.
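
To see the full status report for your cluster, you can run the following command. This is a minimal example; replace <cluster_name_or_ID> with your cluster name or ID.

    ibmcloud ks ingress status-report get --cluster <cluster_name_or_ID>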

Complete the following steps to verify your cluster setup.

  1. List your ALB pods.

    kubectl get pods -n kube-system | grep -E "public-cr|private-cr"
    
  2. Describe the ALB pods that are not in the Running state and review the Events section. Replace POD with the name of an ALB pod from the previous step.

    kubectl describe pod POD -n kube-system
    
  3. If you notice scheduling problems, complete the following steps; a command sketch for these checks follows the substeps.

    1. List your ALBs using the ibmcloud ks ingress alb ls command.
    2. List your workers using the ibmcloud ks worker ls command.
    3. Classic clusters: Ensure you have at least two worker nodes in the VLANs where your ALBs are deployed. See Adding worker nodes and zones to clusters.
    4. VPC clusters: Ensure you have at least two worker nodes in the zones where your ALBs are deployed. See Adding worker nodes and zones to clusters.
    5. Ensure that your workers are healthy. For more information, see Worker node states.
    6. Ensure your nodes are not tainted or cordoned. For more information, see Taints and Tolerations and Safely Drain a Node.
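
     The following is a minimal command sketch for these checks. Replace <cluster_name_or_ID> and NODE with your own values.

       # List your ALBs and note the VLAN or zone of each
       ibmcloud ks ingress alb ls --cluster <cluster_name_or_ID>
       # List your worker nodes and check their state and VLAN or zone
       ibmcloud ks worker ls --cluster <cluster_name_or_ID>
       # Cordoned nodes show SchedulingDisabled in the STATUS column
       kubectl get nodes
       # Check whether a node is tainted
       kubectl describe node NODE | grep Taints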
  4. If you notice pod restarts, complete the following steps; a sketch for locating the Ingress ConfigMap follows the substeps.

    1. Get the logs for the failing pod.
      kubectl logs --previous -n kube-system POD
      
    2. Review the logs and adjust the Ingress resource configurations or the Ingress ConfigMap in the kube-system namespace. For more information, see the NGINX Ingress Annotations and ConfigMap.
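
     The following is a minimal sketch for locating and editing the Ingress ConfigMap. The exact ConfigMap name depends on your ALB version, so confirm the name from the first command before you edit it.

       # Find the Ingress ConfigMap in the kube-system namespace
       kubectl get configmaps -n kube-system | grep -i ingress
       # Edit the ConfigMap; replace CONFIGMAP with the name from the previous command
       kubectl edit configmap CONFIGMAP -n kube-system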
  5. Wait a few minutes, then verify that the failing pods are now in the Running state.
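
     For example, you can repeat the pod check from step 1.

       kubectl get pods -n kube-system | grep -E "public-cr|private-cr"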

  6. If the issue persists, contact support by opening a support case. In the case details, include any relevant log files, error messages, or command outputs.