Removing OpenShift Data Foundation
Review the following topics to remove your OpenShift Data Foundation deployment.
Removing ODF from your apps
To remove ODF from your apps, you can delete your app or deployment and the corresponding PVCs.
If you want to fully remove ODF and all your data, you can remove your storage cluster.
- List your PVCs and note the name of the PVC and the corresponding PV that you want to remove.
oc get pvc
- Remove any pods that mount the PVC.
- List all the pods that currently mount the PVC that you want to delete. If no pods are returned, you don't have any pods that currently use your PVC.
oc get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.volumes[*]}{.persistentVolumeClaim.claimName}{" "}{end}{end}' | grep "<pvc_name>"
Example output:
app ocs-storagecluster-cephfs
- Remove the pod that uses the PVC. If the pod is part of a deployment, remove the deployment.
oc delete pod <pod_name>
oc delete deployment <deployment_name>
- Verify that the pod or the deployment is removed.
oc get pods
oc get deployments
- Optional: Delete the PVC. Deleting the PVC deletes your app data from the storage volume.
oc delete pvc <pvc_name>
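Optional: To confirm that the claim and its volume are removed, you can list your PVCs and PVs again and check that the names that you noted earlier no longer appear. This check is not part of the original steps.
oc get pvc
oc get pv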
Removing your ODF custom resource
Complete the following steps to remove the ODF resources from your cluster.
The following steps result in data loss. Back up the data on your local volumes before you remove your ODF deployment.
When you delete the OcsCluster custom resource from your cluster, the following resources are deleted, and any apps that require access to the data on your local volumes might experience downtime.
- The ODF driver pods.
- The MON and OSD PVCs.
- All data from your volumes. However, if you created PVCs by using the NooBaa storage class and your apps wrote data to your backing store, the data in your backing store is not deleted.
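If you are not sure whether any of your apps use a NooBaa storage class, you can list PVCs together with their storage classes and filter for NooBaa. This check is not part of the official steps, and the exact storage class name varies by deployment, so treat it as a quick sketch and review the output yourself.
oc get pvc --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,STORAGECLASS:.spec.storageClassName | grep -i noobaa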
- Get the name of your OcsCluster custom resource.
oc get ocscluster
Example output for a custom resource called ocscluster-vpc:
NAME             AGE
ocscluster-vpc   4s
- Delete your OcsCluster custom resource. Replace <ocscluster_name> with the name of your custom resource.
oc delete ocscluster <ocscluster_name>
Example command for an OcsCluster custom resource called ocscluster-vpc:
oc delete ocscluster ocscluster-vpc
- List any PVCs that you created, then delete them.
oc get pvc
oc delete pvc <pvc_name>
- Optional: If you don't want to reinstall ODF, you can remove the ODF add-on from your cluster.
Cleaning up ODF
- Copy one of the following cleanup scripts based on your ODF deployment.
- VPC or Satellite with dynamically provisioned disks: Clean up the remaining Kubernetes resources from your cluster. Save the following script in a file called cleanup.sh on your local machine.
#!/bin/bash
ocscluster_name=`oc get ocscluster | awk 'NR==2 {print $1}'`
oc delete ocscluster --all --wait=false
kubectl patch ocscluster/$ocscluster_name -p '{"metadata":{"finalizers":[]}}' --type=merge
oc delete ns openshift-storage --wait=false
sleep 20
kubectl -n openshift-storage patch persistentvolumeclaim/db-noobaa-db-0 -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch cephblockpool.ceph.rook.io/ocs-storagecluster-cephblockpool -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch cephcluster.ceph.rook.io/ocs-storagecluster-cephcluster -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch cephfilesystem.ceph.rook.io/ocs-storagecluster-cephfilesystem -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch cephobjectstore.ceph.rook.io/ocs-storagecluster-cephobjectstore -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch cephobjectstoreuser.ceph.rook.io/noobaa-ceph-objectstore-user -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch cephobjectstoreuser.ceph.rook.io/ocs-storagecluster-cephobjectstoreuser -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch noobaa/noobaa -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch backingstores.noobaa.io/noobaa-default-backing-store -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch bucketclasses.noobaa.io/noobaa-default-bucket-class -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch storagecluster.ocs.openshift.io/ocs-storagecluster -p '{"metadata":{"finalizers":[]}}' --type=merge
sleep 20
oc delete pods -n openshift-storage --all --force --grace-period=0
sleep 20
for item in `oc get csiaddonsnodes.csiaddons.openshift.io -n openshift-storage | awk 'NR>1 {print $1}'`; do
  kubectl -n openshift-storage patch csiaddonsnodes.csiaddons.openshift.io/$item -p '{"metadata":{"finalizers":[]}}' --type=merge
done
kubectl -n openshift-storage patch configmap/rook-ceph-mon-endpoints -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch secret/rook-ceph-mon -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch storagesystems.odf.openshift.io/ocs-storagecluster-storagesystem -p '{"metadata":{"finalizers":[]}}' --type=merge
- Classic clusters or Satellite clusters with local disks: Clean up the remaining Kubernetes resources from your cluster. Save the following script in a file called cleanup.sh on your local machine.
#!/bin/bash
ocscluster_name=`oc get ocscluster | awk 'NR==2 {print $1}'`
oc delete ocscluster --all --wait=false
kubectl patch ocscluster/$ocscluster_name -p '{"metadata":{"finalizers":[]}}' --type=merge
oc delete ns openshift-storage --wait=false
sleep 20
kubectl -n openshift-storage patch persistentvolumeclaim/db-noobaa-db-0 -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch cephblockpool.ceph.rook.io/ocs-storagecluster-cephblockpool -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch cephcluster.ceph.rook.io/ocs-storagecluster-cephcluster -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch cephfilesystem.ceph.rook.io/ocs-storagecluster-cephfilesystem -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch cephobjectstore.ceph.rook.io/ocs-storagecluster-cephobjectstore -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch cephobjectstoreuser.ceph.rook.io/noobaa-ceph-objectstore-user -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch cephobjectstoreuser.ceph.rook.io/ocs-storagecluster-cephobjectstoreuser -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch noobaa/noobaa -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch backingstores.noobaa.io/noobaa-default-backing-store -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch bucketclasses.noobaa.io/noobaa-default-bucket-class -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch storagecluster.ocs.openshift.io/ocs-storagecluster -p '{"metadata":{"finalizers":[]}}' --type=merge
sleep 20
oc delete pods -n openshift-storage --all --force --grace-period=0
if oc get po -n openshift-local-storage | grep operator; then
  LOCAL_NS=openshift-local-storage
else
  LOCAL_NS=local-storage
fi
oc delete ns $LOCAL_NS --wait=false
sleep 20
kubectl -n $LOCAL_NS patch localvolume.local.storage.openshift.io/local-block -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n $LOCAL_NS patch localvolume.local.storage.openshift.io/local-file -p '{"metadata":{"finalizers":[]}}' --type=merge
sleep 20
oc delete pods -n $LOCAL_NS --all --force --grace-period=0
for item in `oc get csiaddonsnodes.csiaddons.openshift.io -n openshift-storage | awk 'NR>1 {print $1}'`; do
  kubectl -n openshift-storage patch csiaddonsnodes.csiaddons.openshift.io/$item -p '{"metadata":{"finalizers":[]}}' --type=merge
done
kubectl -n openshift-storage patch configmap/rook-ceph-mon-endpoints -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch secret/rook-ceph-mon -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl -n openshift-storage patch storagesystems.odf.openshift.io/ocs-storagecluster-storagesystem -p '{"metadata":{"finalizers":[]}}' --type=merge
- Run the cleanup.sh script.
sh ./cleanup.sh
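Optional: To confirm that the cleanup finished, you can check that the OcsCluster resources and the openshift-storage namespace are gone. This check is not part of the original steps; when the cleanup succeeded, the commands might return no resources, a NotFound error, or an unknown resource type error.
oc get ocscluster
oc get namespace openshift-storage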
- Classic clusters or Satellite clusters with local disks: After you run the cleanup script, log in to each worker node and run the following commands.
- Deploy a debug pod and run chroot /host.
oc debug node/<node_name> -- chroot /host
- Run the following command to remove any files or directories on the specified paths. Repeat this step for each worker node that you used in your OCS configuration.
rm -rvf /var/lib/rook /mnt/local-storage
Example output:
removed '/var/lib/rook/openshift-storage/log/ocs-deviceset-0-data-0-6fgp6/ceph-volume.log'
removed directory: '/var/lib/rook/openshift-storage/log/ocs-deviceset-0-data-0-6fgp6'
removed directory: '/var/lib/rook/openshift-storage/log'
removed directory: '/var/lib/rook/openshift-storage/crash/posted'
removed directory: '/var/lib/rook/openshift-storage/crash'
removed '/var/lib/rook/openshift-storage/client.admin.keyring'
removed '/var/lib/rook/openshift-storage/openshift-storage.config'
removed directory: '/var/lib/rook/openshift-storage'
removed directory: '/var/lib/rook'
removed '/mnt/local-storage/localblock/nvme3n1'
removed directory: '/mnt/local-storage/localblock'
removed '/mnt/local-storage/localfile/nvme2n1'
removed directory: '/mnt/local-storage/localfile'
removed directory: '/mnt/local-storage'
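Optional: Before you exit the debug shell on each worker node, you can confirm that the paths no longer exist. This check is not part of the original steps.
ls /var/lib/rook /mnt/local-storage 2>/dev/null || echo "paths removed"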
- Optional: Classic clusters or Satellite clusters with local disks: If you no longer want to use the local volumes that you used in your configuration, you can delete them from the cluster. List the local PVs.
oc get pv
Example output:
local-pv-180cfc58   139Gi   RWO   Delete   Available   localfile    11m
local-pv-67f21982   139Gi   RWO   Delete   Available   localfile    12m
local-pv-80c5166    100Gi   RWO   Delete   Available   localblock   12m
local-pv-9b049705   139Gi   RWO   Delete   Available   localfile    12m
local-pv-b09e0279   100Gi   RWO   Delete   Available   localblock   12m
local-pv-f798e570   100Gi   RWO   Delete   Available   localblock   12m
- Delete the local PVs.
oc delete pv <pv_name> <pv_name> <pv_name>
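If you have many local PVs, you can delete all PVs that reference the localblock and localfile storage classes in one pass. This one-liner is a sketch that assumes those storage class names match your oc get pv output; review the list before you run it, because it deletes every PV whose line matches.
oc get pv --no-headers | grep -E 'localblock|localfile' | awk '{print $1}' | xargs oc delete pv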
- After deleting your PVCs and PVs, you also need to delete the storage volumes from your account. To locate and remove unused storage volumes in your account, see Why am I still seeing charges for block storage devices after deleting my cluster?.
Uninstalling the OpenShift Data Foundation add-on
Uninstalling the OpenShift Data Foundation add-on from the console
To remove the OpenShift Data Foundation add-on from your cluster, complete the following steps.
If you want to remove all ODF resources and data from your cluster, remove the CRDs before uninstalling the add-on.
- From the Red Hat OpenShift clusters console, select the cluster for which you want to remove the OpenShift Data Foundation add-on.
- On the cluster Overview page, scroll to your installed add-ons.
- On the OpenShift Data Foundation card, click the Actions icon and then Uninstall.
Uninstalling the OpenShift Data Foundation add-on from the CLI
To uninstall the OpenShift Data Foundation add-on from your cluster by using the CLI, complete the following steps.
If you want to remove all ODF resources and data from your cluster, remove the CRDs before uninstalling the add-on.
- Uninstall the add-on.
ibmcloud oc cluster addon disable openshift-container-storage -c <cluster_name>
- Verify that the add-on is removed.
ibmcloud oc cluster addon ls -c <cluster_name>
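Optional: From the cluster side, you can confirm that ODF resources are gone after the add-on is disabled. This check is not part of the original steps; the openshift-storage namespace is removed only if you also deleted the ODF custom resources and ran the cleanup steps, and either command might return a NotFound or unknown resource type error when everything is removed.
oc get namespace openshift-storage
oc get ocscluster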
Troubleshooting ODF
To gather the information that you need to troubleshoot ODF, you can use the oc adm must-gather command and specify the ODF image. For more information, see Gathering cluster data.
Example command:
oc adm must-gather --image=registry.redhat.io/ocs4/ocs-must-gather-rhel8:latest --dest-dir=ocs_mustgather
You can use the Rook community toolbox to debug issues with your Ceph cluster. For more information, see the Rook documentation.
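For example, after the toolbox pod is running, you can open a shell in it and check the overall Ceph health. This is a sketch that assumes the toolbox pod carries its usual app=rook-ceph-tools label and runs in the openshift-storage namespace; see the Rook documentation for how to enable the toolbox in your deployment.
TOOLS_POD=$(oc -n openshift-storage get pod -l app=rook-ceph-tools -o name)
oc -n openshift-storage rsh $TOOLS_POD ceph status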
For more information, review the common troubleshooting topics.