Why does block storage change to read-only?
Applies to: Virtual Private Cloud and Classic infrastructure clusters.
You might see the following symptoms:
- When you run `oc get pods -o wide`, you see that multiple pods on the same worker node are stuck in the `ContainerCreating` or `CrashLoopBackOff` state. All these pods use the same block storage instance.
- When you run an `oc describe pod` command, you see the following error in the Events section: `MountVolume.SetUp failed for volume ... read-only`.
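Both symptoms can be checked from the CLI. A minimal sketch, where `<pod_name>` is a placeholder for one of the affected pods:
```
# List pods with the worker node they run on; look for several pods on
# the same node stuck in ContainerCreating or CrashLoopBackOff.
oc get pods -o wide

# Inspect one affected pod; the Events section shows the
# "MountVolume.SetUp failed for volume ... read-only" error.
oc describe pod <pod_name>
```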
If a network error occurs while a pod writes to a volume, IBM Cloud infrastructure protects the data on the volume from getting corrupted by changing the volume to a read-only mode. Pods that use this volume can't continue to write to the volume and fail.
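To confirm that the volume is now mounted read-only, you can inspect the mount flags from inside a pod whose container did start. A sketch, assuming the container image provides the `mount` utility and that `<mount_path>` is the volume's mount path from your pod spec:
```
# A mount entry that contains the "ro" flag confirms that the volume
# was remounted read-only.
oc exec <pod_name> -- mount | grep <mount_path>
```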
Verify the plug-in version, re-create your app, and safely reload your worker node.
- Check the version of the IBM Cloud Block Storage plug-in that is installed in your cluster by running `helm list --all-namespaces`.
- Verify that you use the latest version of the IBM Cloud Block Storage plug-in. If not, update your plug-in.
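  For example, to show only the block storage release and its chart version (a sketch; the release and chart names can differ between clusters):
  ```
  # Filter the Helm releases for the block storage plug-in; the CHART
  # column shows the installed version.
  helm list --all-namespaces | grep -i block-storage
  ```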
- If you used a Kubernetes deployment for your pod, restart the failing pod by deleting it with `oc delete pod <pod_name>` and letting Kubernetes re-create it. If you did not use a deployment, retrieve the YAML file that was used to create your pod by running `oc get pod <pod_name> -o yaml > pod.yaml`. Then, delete the pod with `oc delete pod <pod_name>` and manually re-create it from the saved file, as sketched below.
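  A minimal sketch of the standalone-pod flow, where `<pod_name>` is a placeholder for your failing pod:
  ```
  # Save the pod definition, then delete and re-create the pod.
  oc get pod <pod_name> -o yaml > pod.yaml
  oc delete pod <pod_name>
  # Note: you might need to remove server-generated fields (status,
  # metadata.uid, metadata.resourceVersion) from pod.yaml before it can
  # be re-applied.
  oc apply -f pod.yaml
  ```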
- Check whether re-creating your pod resolved the issue. If not, reload the worker node.
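  For example, to check the pod after it is re-created:
  ```
  # The pod should reach the Running state; if it stays stuck, continue
  # with the worker node reload steps that follow.
  oc get pod <pod_name> -o wide
  ```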
- Find the worker node where your pod runs and note the private IP address that is assigned to your worker node by running `oc describe pod <pod_name> | grep Node`.
  Example output:
  `Node: 10.75.XX.XXX/10.75.XX.XXX`
  `Node-Selectors: <none>`
- Retrieve the ID of your worker node by running `ibmcloud oc worker ls --cluster <cluster_name_or_ID>` and finding the entry whose private IP matches the address from the previous step.
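  For example, to pick out the matching worker node directly (a sketch; `<private_IP>` stands for the address that you noted in the previous step):
  ```
  # The ID column of the matching row is the worker node ID.
  ibmcloud oc worker ls --cluster <cluster_name_or_ID> | grep <private_IP>
  ```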
- Safely reload the worker node.
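  A sketch of the reload commands, assuming the IBM Cloud Kubernetes Service CLI plug-in; classic worker nodes support a reload, while VPC worker nodes are replaced instead:
  ```
  # Classic infrastructure: reload the worker node's operating system.
  ibmcloud oc worker reload --cluster <cluster_name_or_ID> --worker <worker_ID>

  # VPC: worker nodes can't be reloaded; replace the worker node instead.
  ibmcloud oc worker replace --cluster <cluster_name_or_ID> --worker <worker_ID>
  ```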