Managing IBM Cloud File Storage for VPC
When you set up persistent storage in your cluster, you have three main components: the Kubernetes persistent volume claim (PVC) that requests storage, the Kubernetes persistent volume (PV) that is mounted to a pod and described in the PVC, and the file share. Depending on how you created your storage, you might need to delete all three components separately.
The File Storage for VPC cluster add-on is available in beta.
The following limitations apply to the beta add-on.
- It is recommended that your cluster and VPC are part of the same resource group. If your cluster and VPC are in separate resource groups, then before you can provision file shares, you must create your own storage class and provide your VPC resource group ID. For more information, see Creating your own storage class.
- New security group rules were introduced in cluster versions 4.11 and later. These rule changes mean that you must sync your security groups before you can use File Storage for VPC. For more information, see Adding File Storage for VPC to apps.
- New storage classes were added with version 2.0 of the add-on. You can no longer provision new file shares that use the older storage classes. Existing volumes that use the older storage classes continue to function; however, you cannot expand the volumes that were created using the older classes. For more information, see Migrating to a new storage class.
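For the separate-resource-group case in the first limitation, a custom storage class might look like the following sketch. The provisioner name is the one used by the add-on's CSI driver, but the class name and the parameter keys shown are illustrative assumptions; confirm the exact keys in Creating your own storage class.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-vpc-file-class               # hypothetical name
provisioner: vpc.file.csi.ibm.io        # CSI driver for File Storage for VPC
reclaimPolicy: Delete
parameters:
  profile: "dp2"                              # VPC file share profile
  resourceGroup: "<vpc_resource_group_id>"    # assumption: the exact key may differ
```

A safer starting point is to export one of the installed classes with `oc get sc <name> -o yaml` and adjust it, which guarantees the parameter keys match your add-on version.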
Updating the File Storage for VPC cluster add-on
Access your Red Hat OpenShift cluster.
- Get your cluster ID.
ibmcloud oc cluster ls
- Review the available add-on versions.
ibmcloud oc cluster addon versions
- Disable the add-on.
ibmcloud oc cluster addon disable vpc-file-csi-driver --cluster CLUSTER
- Enable the newer version of the add-on.
ibmcloud oc cluster addon enable vpc-file-csi-driver --cluster CLUSTER --version VERSION
- Verify that the add-on is enabled by running the following commands.
oc get deploy -n kube-system | grep file
Example output
ibm-vpc-file-csi-controller   2/2   2   2   13m
oc get ds -n kube-system | grep file
Example output
ibm-vpc-file-csi-node   2   2   2   2   2   <none>   14m
oc get pods -n kube-system | grep file
Example output
ibm-vpc-file-csi-controller-7899db784-kc29g   5/5   Running   0   14m
ibm-vpc-file-csi-controller-7899db784-mp5jt   5/5   Running   0   14m
ibm-vpc-file-csi-node-bfqdz                   4/4   Running   0   14m
ibm-vpc-file-csi-node-n7jbx                   4/4   Running   0   14m
Updating encryption in-transit (EIT) packages
The PACKAGE_DEPLOYER_VERSION value in the addon-vpc-file-csi-driver-configmap config map indicates the image version of the EIT packages. When a new image is available, edit the add-on config map and specify the new image version to update the packages on your worker nodes.
- Edit the addon-vpc-file-csi-driver-configmap config map and specify the new image version.
oc edit cm addon-vpc-file-csi-driver-configmap -n kube-system
Example output
PACKAGE_DEPLOYER_VERSION: v1.0.0
- Follow the status of the update by reviewing the events in the file-csi-driver-status config map.
oc get cm file-csi-driver-status -n kube-system -o yaml
Example output
events: |
  - event: EnableVPCFileCSIDriver
    description: 'VPC File CSI Driver enable successful, DriverVersion: v2.0.3'
    timestamp: "2024-06-13 09:17:07"
  - event: EnableEITRequest
    description: 'Request received to enableEIT, workerPools: , check the file-csi-driver-status configmap for eit installation status on each node of each workerpool.'
    timestamp: "2024-06-13 09:17:31"
  - event: 'Enabling EIT on host: 10.240.0.10'
    description: 'Package installation successful on host: 10.240.0.10, workerpool: default'
    timestamp: "2024-06-13 09:17:48"
  - event: 'Enabling EIT on host: 10.240.0.8'
    description: 'Package installation successful on host: 10.240.0.8, workerpool: default'
    timestamp: "2024-06-13 09:17:48"
  - event: 'Enabling EIT on host: 10.240.0.8'
    description: 'Package update successful on host: 10.240.0.8, workerpool: default'
    timestamp: "2024-06-13 09:20:21"
  - event: 'Enabling EIT on host: 10.240.0.10'
    description: 'Package update successful on host: 10.240.0.10, workerpool: default'
    timestamp: "2024-06-13 09:20:21"
Disabling the add-on
Disabling the vpc-file-csi-driver add-on removes the encryption in-transit packages from your worker nodes.
- Run the following command to disable the add-on.
ibmcloud oc cluster addon disable --addon vpc-file-csi-driver --cluster CLUSTER
- Verify that the pods have been removed.
oc get pods -n kube-system | grep file
Understanding your storage removal options
Tagging was not supported in version 1.2. This impacts the removal of file shares when a cluster is deleted with the --force-delete-storage option. Make sure you clean up all PVCs that were created with version 1.2 of the add-on before deleting your cluster.
Removing persistent storage from your IBM Cloud account varies depending on how you provisioned the storage and what components you already removed.
- Is my persistent storage deleted when I delete my cluster?
- During cluster deletion, you have the option to remove your persistent storage. However, depending on how your storage was provisioned, the removal of your storage might not include all storage components. If you dynamically provisioned storage with a storage class that sets reclaimPolicy: Delete, your PVC, PV, and the storage instance are automatically deleted when you delete the cluster. For storage that was statically provisioned, or storage that you provisioned with a storage class that sets reclaimPolicy: Retain, the PVC and the PV are removed when you delete the cluster, but your storage instance and your data remain. You are still charged for your storage instance. Also, if you deleted your cluster in an unhealthy state, the storage might still exist even if you chose to remove it.
- How do I delete the storage when I want to keep my cluster?
- When you dynamically provisioned the storage with a storage class that sets reclaimPolicy: Delete, you can remove the PVC to start the deletion process of your persistent storage. Your PVC, PV, and storage instance are automatically removed. For storage that was statically provisioned, or storage that you provisioned with a storage class that sets reclaimPolicy: Retain, you must manually remove the PVC, PV, and the storage instance to avoid further charges.
- How does the billing stop after I delete my storage?
- Depending on what storage components you delete and when, the billing cycle might not stop immediately. If you delete the PVC and PV, but not the storage instance in your IBM Cloud account, that instance still exists and you are charged for it. If you delete the PVC, PV, and the storage instance, when the billing cycle stops depends on the billingType that you chose when you provisioned your storage and how you chose to delete the storage.
- When you manually cancel the persistent storage instance from the IBM Cloud console or the CLI, billing stops as follows:
  - Hourly storage: Billing stops immediately. After your storage is canceled, you might still see your storage instance in the console for up to 72 hours.
  - Monthly storage: You can choose between immediate cancellation or cancellation on the anniversary date. In both cases, you are billed until the end of the current billing cycle, and billing stops for the next billing cycle. After your storage is canceled, you might still see your storage instance in the console or the CLI for up to 72 hours.
    - Immediate cancellation: Choose this option to immediately remove your storage. Neither you nor your users can use the storage anymore or recover the data.
    - Anniversary date: Choose this option to cancel your storage on the next anniversary date. Your storage instances remain active until the next anniversary date, and you can continue to use them until this date, for example, to give your team time to make backups of your data.
- When you dynamically provisioned the storage with a storage class that sets reclaimPolicy: Delete and you choose to remove the PVC, the PV and the storage instance are immediately removed. For hourly billed storage, billing stops immediately. For monthly billed storage, you are still charged for the remainder of the month. After your storage is removed and billing stops, you might still see your storage instance in the console or the CLI for up to 72 hours.
- What do I need to be aware of before I delete persistent storage?
- When you clean up persistent storage, you delete all the data that is stored in it. If you need a copy of the data, make a backup.
- I deleted my storage instance. Why can I still see my instance?
- After you remove persistent storage, it can take up to 72 hours for the removal to be fully processed and for the storage to disappear from your IBM Cloud console or CLI.
Cleaning up persistent storage
Remove the PVC, PV, and the storage instance from your IBM Cloud account to avoid further charges for your persistent storage.
Before you begin:
- Make sure that you backed up any data that you want to keep.
- Access your Red Hat OpenShift cluster.
To clean up persistent data:
- List the PVCs in your cluster and note the NAME of the PVC, the STORAGECLASS, and the name of the PV that is bound to the PVC and shown as VOLUME.
oc get pvc
Example output
NAME     STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
claim1   Bound    pvc-06886b77-102b-11e8-968a-f6612bb731fb   20Gi       RWO           class          78d
claim2   Bound    pvc-457a2b96-fafc-11e7-8ff9-b6c8f770356c   4Gi        RWX           class          105d
claim3   Bound    pvc-1efef0ba-0c48-11e8-968a-f6612bb731fb   24Gi       RWX           class          83d
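If a later step needs just the PV name for one claim, you can pull the VOLUME column out of this output. A small sketch using the first example row above; in practice you would pipe the output of `oc get pvc` in instead of the sample string:

```shell
# One row from the example `oc get pvc` output; in practice, pipe the real
# command output in instead of this sample string.
pvc_row='claim1   Bound    pvc-06886b77-102b-11e8-968a-f6612bb731fb   20Gi   RWO   class   78d'
# Print the VOLUME column (field 3) for the claim that you want to clean up.
pv_name=$(echo "$pvc_row" | awk '$1 == "claim1" {print $3}')
echo "$pv_name"   # pvc-06886b77-102b-11e8-968a-f6612bb731fb
```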
- Review the ReclaimPolicy and billingType for the storage class.
oc describe storageclass <storageclass_name>
If the reclaim policy says Delete, your PV and the physical storage are removed when you remove the PVC. If the reclaim policy says Retain, or if you provisioned your storage without a storage class, then your PV and physical storage are not removed when you remove the PVC. You must remove the PVC, PV, and the physical storage separately.
If your storage is charged monthly, you still get charged for the entire month, even if you remove the storage before the end of the billing cycle.
- Remove any pods that mount the PVC. List the pods that mount the PVC. If no pod is returned in your CLI output, you don't have a pod that uses the PVC.
oc get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.volumes[*]}{.persistentVolumeClaim.claimName}{" "}{end}{end}' | grep "<pvc_name>"
Example output
depl-12345-prz7b: claim1
- Remove the pod that uses the PVC. If the pod is part of a deployment, remove the deployment.
oc delete pod <pod_name>
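Deleting a Deployment-managed pod only causes the Deployment to replace it, which is why this step says to remove the deployment itself. A sketch of deriving the owning Deployment's name from a pod name, assuming the standard `<deployment>-<replicaset-hash>-<pod-hash>` naming; the pod name here is hypothetical:

```shell
# Pods managed by a Deployment are named <deployment>-<replicaset-hash>-<pod-hash>.
# Derive the owning Deployment's name by stripping both hash suffixes.
pod_name="ibm-app-7899db784-kc29g"   # hypothetical pod from `oc get pods`
deploy_name="${pod_name%-*-*}"       # strips "-7899db784-kc29g" -> ibm-app
echo "$deploy_name"
# Then delete the Deployment instead of the pod:
#   oc delete deployment "$deploy_name"
```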
- Verify that the pod is removed.
oc get pods
- Remove the PVC.
oc delete pvc <pvc_name>
- Review the status of your PV. Use the name of the PV that you retrieved earlier as VOLUME. When you remove the PVC, the PV that is bound to the PVC is released. Depending on how you provisioned your storage, your PV goes into a Deleting state if the PV is deleted automatically, or into a Released state if you must manually delete the PV. Note: For PVs that are automatically deleted, the status might briefly say Released before the PV is deleted. Rerun the command after a few minutes to see whether the PV is removed.
oc get pv <pv_name>
- If your PV is not deleted, manually remove the PV.
oc delete pv <pv_name>
- Verify that the PV is removed.
oc get pv
- List your shares.
ibmcloud is shares
- List each file share and find the associated cluster ID.
ibmcloud is share SHARE | grep CLUSTER-ID
- Delete the shares.
ibmcloud is share-delete (SHARE1 SHARE2 ...)
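The last three steps can be chained together. The sketch below works on sample rows shaped like `ibmcloud is shares` output; the IDs and share names are hypothetical, and in practice you would capture the real command output instead:

```shell
# Sample rows in the shape of `ibmcloud is shares` output (hypothetical IDs
# and names); in practice, capture the real command output instead.
shares_output='r006-0a1b2c3d  my-share-1  available  dp2
r006-4e5f6a7b  my-share-2  available  dp2'
# Collect the share names (second column) so you can review them before deletion.
share_names=$(echo "$shares_output" | awk '{print $2}')
echo $share_names
# Confirm ownership of each share: ibmcloud is share NAME | grep CLUSTER-ID
# Then delete the confirmed shares:  ibmcloud is share-delete $share_names
```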