---
copyright:
  years: 2025, 2025
lastupdated: "2025-09-04"
keywords: nhc003, container registry unreachable
subcollection: openshift
content-type: troubleshoot
---
Why does the Network status show an NHC003 error?
Virtual Private Cloud | Classic infrastructure
When you check the status of your cluster's health by running the ibmcloud oc cluster health issues --cluster <CLUSTER_ID> command, you see an error similar to the following example.
ID       Component   Severity   Description
NHC003   Network     Warning    Some worker nodes in the cluster cannot reach container image registries to pull images.
If you check the details of the issue, you can see which registry cannot be reached from which worker node.
ibmcloud oc cluster health issue get --cluster <CLUSTER_ID> --issue NHC003
This warning means that some of the worker nodes cannot access external container registries, such as Docker Hub, Quay, or IBM Cloud Container Registry, which prevents them from pulling images required by your workloads.
Ensure that the worker nodes have access to the internet and can reach external container registries. Also check network policies, security groups, and firewall settings. Follow these steps to troubleshoot the issue.
- From a pod running on the affected node, check if the node can access the registry. Start a debug pod.
kubectl run -i --tty debug \
  --image=us.icr.io/armada-master/network-alpine:latest \
  --restart=Never \
  --overrides='{ "apiVersion": "v1", "spec": { "nodeName": "<node-name>" } }' \
  -- sh
Then inside the pod, try accessing a container registry.
wget <registry_address>
Or use curl if available:
curl -I <registry_address>
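For example, if curl is available in the debug pod, you might test the specific registries that your workloads pull from. The following hostnames are only illustrative; test whichever registry the NHC003 issue details report as unreachable. A response with any HTTP status code (even 401 or 403) means that the registry host is reachable.
curl -I https://quay.io
curl -I https://registry-1.docker.io
curl -I https://us.icr.io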
- Check if the worker nodes have outbound internet access by running a traceroute or ping from the debug pod.
traceroute <registry_address>
ping <registry_address>
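Keep in mind that some registries and intermediate firewalls drop ICMP traffic, so ping can fail even when image pulls would succeed. As a rough alternative, assuming netcat (nc) is available in the debug image, you can test the HTTPS port directly.
nc -zv <registry_address> 443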
- Check if there are any restrictive network policies or global network policies in place.
kubectl get networkpolicies --all-namespaces
kubectl get globalnetworkpolicies.crd.projectcalico.org
Look for policies that block egress traffic from worker nodes to the internet or to the specific registry domains.
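To review a specific policy, you can print its YAML and look for egress rules that restrict traffic to the registry domains or to the public internet. The policy and namespace names in the following commands are placeholders.
kubectl get networkpolicy <policy_name> -n <namespace> -o yaml
kubectl get globalnetworkpolicies.crd.projectcalico.org <policy_name> -o yaml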
- Verify your cluster's security groups and make sure outbound traffic is allowed. For each worker node, check the security group. Make sure there are no rules blocking HTTPS (TCP port 443) or DNS (UDP port 53).
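For VPC clusters, you can list your security groups and review their rules with the VPC CLI. This is a minimal sketch; the security group ID is a placeholder, and VPC clusters typically attach a default group named kube-<cluster_ID> to worker nodes.
ibmcloud is security-groups
ibmcloud is security-group-rules <security_group_ID>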
- Review your infrastructure (network appliances, security groups, ACLs, etc.) and enable outbound access if necessary.
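If your worker nodes are attached to VPC subnets, you can also review the network ACLs on those subnets with the VPC CLI. The ACL ID is a placeholder.
ibmcloud is network-acls
ibmcloud is network-acl-rules <network_ACL_ID>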
- If you're using private container registries, verify that DNS resolution and authentication are working.
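For example, you can check DNS resolution from the debug pod and confirm that the image pull secret for the registry exists in the workload's namespace. The secret and namespace names are placeholders.
nslookup <registry_address>
kubectl get secret <pull_secret_name> -n <namespace>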
- After applying the fixes, wait a few minutes and recheck the cluster health.
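To recheck, run the same command that reported the issue.
ibmcloud oc cluster health issues --cluster <CLUSTER_ID>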
- If the issue persists, open a support case for further assistance. In the case details, include any relevant log files, error messages, or command outputs.