Why does the Network status show an NHC004 error?

copyright: years: 2025, 2025
lastupdated: "2025-09-04"

keywords: nhc004, vpe gateway hostname resolution

subcollection: openshift

content-type: troubleshoot



Virtual Private Cloud

When you check the status of your cluster's health by running the ibmcloud oc cluster health issues --cluster <CLUSTER_ID> command, you see an error similar to the following example.

ID       Component   Severity   Description
NHC004   Network     Warning    Some worker nodes in the cluster cannot resolve VPE gateway hostnames.

This warning indicates that DNS resolution is failing for hostnames that are associated with virtual private endpoint (VPE) gateways. This failure can affect services that rely on private connectivity to IBM Cloud services.

Make sure that your worker nodes have the correct DNS resolvers configured and can resolve the VPE gateway hostnames. The following steps help you determine whether the failure is caused by unreachable VPE endpoints or by an incorrect DNS configuration.
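As a quick sanity check, you can confirm that a worker node's resolver configuration points at the IBM Cloud VPC DNS resolvers (161.26.0.7 and 161.26.0.8) before you debug further. The following helper is a minimal sketch, not part of any IBM CLI; the function name check_vpc_resolvers is hypothetical.

```shell
# Hypothetical helper: report whether a resolv.conf-style file lists
# either of the IBM Cloud VPC DNS resolvers (161.26.0.7 / 161.26.0.8).
check_vpc_resolvers() {
  # $1: path to a resolv.conf-style file
  if grep -Eq '^nameserver[[:space:]]+161\.26\.0\.(7|8)([[:space:]]|$)' "$1"; then
    echo "VPC resolvers present"
  else
    echo "VPC resolvers missing"
  fi
}
```

From a debug pod on a worker node, you would run it as check_vpc_resolvers /etc/resolv.conf. A "missing" result suggests a resolver configuration problem rather than an unreachable VPE endpoint.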

How to find the VPE gateways for a VPC cluster

To identify the VPE gateways used by your IBM Cloud Kubernetes Service cluster, follow these steps:

1. Use the IBM Cloud CLI to list endpoint gateways

  1. Run the following command to list all VPE (endpoint) gateways in your VPC.

    ibmcloud is endpoint-gateways
    

    Example command to list VPEs for a specific VPC.

    ibmcloud is endpoint-gateways --vpc <VPC-ID>
    
  2. Filter the output for iks-<cluster_ID> and look for Service Endpoints. This output shows the associated services, IP addresses, and endpoint names.

2. Use the IBM Cloud console to list endpoint gateways

  1. Navigate to VPC Infrastructure > Endpoint Gateways.
  2. Select your VPC.
  3. Review the configured gateways for private service access and view the DNS and IP information.

3. After you find the VPE gateway hostname, complete the following steps.

  1. Launch a debug pod on a worker node and open a shell:

    kubectl run -i --tty debug --image=us.icr.io/armada-master/network-alpine:latest --restart=Never -- sh
    
  2. Inside the pod, try resolving a VPE gateway hostname.

    nslookup <vpe-hostname>
    
    dig <vpe-hostname>
    
  3. Test DNS resolution directly against the VPC DNS resolvers.

    dig <vpe-hostname> @161.26.0.7
    
    dig <vpe-hostname> @161.26.0.8
    
  4. If your cluster uses a custom DNS configuration, such as modified CoreDNS settings, inspect the config.

    kubectl get configmap coredns -n kube-system -o yaml
    
  5. If required, update the CoreDNS configuration to include DNS servers that can resolve the VPE hostnames. Then, restart CoreDNS to apply the changes.

    kubectl rollout restart deployment coredns -n kube-system
    
  6. Ensure that your VPC DNS settings allow access to the VPE hostnames. In the IBM Cloud console, navigate to VPC > DNS services and validate the DNS rules.
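The nslookup and dig checks above can be scripted so that you test every hostname against every resolver in one pass. The following sketch is an assumption-laden example: the helper name classify_dig_output is hypothetical, and the classification keys off standard dig output strings (status: NOERROR, status: NXDOMAIN, connection timed out).

```shell
# Hypothetical helper: classify captured dig output by its status line.
classify_dig_output() {
  # Reads dig output on stdin and prints a one-word classification.
  out="$(cat)"
  case "$out" in
    *"connection timed out"*) echo "unreachable" ;;  # resolver did not answer
    *"status: NXDOMAIN"*)     echo "nxdomain"    ;;  # resolver answered: name unknown
    *"status: NOERROR"*)      echo "resolved"    ;;  # resolver answered with a record
    *)                        echo "unknown"     ;;
  esac
}

# Example loop to run inside the debug pod; replace <vpe-hostname> with
# the hostnames that you collected earlier.
# for host in <vpe-hostname>; do
#   for ns in 161.26.0.7 161.26.0.8; do
#     echo "$host @$ns: $(dig "$host" @"$ns" +time=2 +tries=1 | classify_dig_output)"
#   done
# done
```

A result of "unreachable" points at network access to the resolver, while "nxdomain" points at the DNS configuration or the VPE gateway itself.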

  • For more information about virtual private endpoint (VPE) gateways in IBM Cloud Kubernetes Service, see Virtual private endpoint (VPE) gateways.

  • After corrections, wait a few minutes and recheck the cluster health status.

  • If the issue persists, open a support case for further assistance. In the case details, include any relevant log files, error messages, and command outputs.
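To make the support case easier to act on, you can gather the outputs referenced in this topic into one file before you open the case. This is a minimal sketch under stated assumptions: the function name and default file name are hypothetical, and each command is skipped gracefully if its CLI is not available or not configured.

```shell
# Hypothetical diagnostics collector for an NHC004 support case.
collect_nhc004_diagnostics() {
  # $1: output file (defaults to nhc004-diagnostics.txt)
  out="${1:-nhc004-diagnostics.txt}"
  {
    echo "=== resolv.conf ==="
    cat /etc/resolv.conf 2>/dev/null || echo "(not readable)"
    echo "=== coredns configmap ==="
    kubectl get configmap coredns -n kube-system -o yaml 2>/dev/null \
      || echo "(kubectl not available or not configured)"
    echo "=== endpoint gateways ==="
    ibmcloud is endpoint-gateways 2>/dev/null \
      || echo "(ibmcloud CLI not available or not logged in)"
  } > "$out"
  echo "$out"
}
```

Attach the resulting file to the support case along with the NHC004 warning text from the cluster health output.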