Setting up Block Storage for Classic
IBM Cloud Block Storage for Classic is persistent, high-performance iSCSI storage that you can add to your apps by using Kubernetes persistent volumes (PVs). You can choose between predefined storage tiers with GB sizes and IOPS that meet the requirements of your workloads. To find out whether IBM Cloud Block Storage for Classic is the correct storage option for you, see Choosing a storage solution.
Keep in mind the following requirements when you use the IBM Cloud Block Storage for Classic plug-in.
IBM Cloud Block Storage for Classic plug-in is available only for standard Red Hat OpenShift on IBM Cloud clusters that are provisioned on classic infrastructure.
Block Storage for Classic instances are specific to a single-campus multizone region. If you have a multizone cluster, consider multizone persistent storage options.
Classic infrastructure
The steps on this page apply to classic clusters only. On VPC clusters, the Block Storage for VPC cluster add-on is installed by default. For more information, see Setting up Block Storage for VPC.
Quick start for IBM Cloud Block Storage for Classic
In this quick start guide, you create a 45Gi silver tier Block Storage for Classic volume in your cluster by creating a PVC to dynamically provision the volume. Then, you create an app deployment that mounts your PVC.
First time using Block Storage for Classic in your cluster? Come back here after you have reviewed the storage configurations.
-
Save the following persistent volume claim (PVC) configuration to a file called `pvc.yaml`.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-storage-pvc
  labels:
    billingType: "hourly"
    region: us-east
    zone: wdc07
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 45Gi
  storageClassName: ibmc-block-silver
```
-
Apply the configuration to your cluster to create the PVC.

```
oc apply -f pvc.yaml
```
-
Wait until your PVC is in the `Bound` status. You can check the status by running the following command.

```
oc get pvc
```
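Example output, from the 45Gi silver PVC in this quick start. The volume name and age vary by cluster.

```
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
block-storage-pvc   Bound    pvc-1aa1aaaa-11a1-48d1-ab11-11b11111f3bc   45Gi       RWO            ibmc-block-silver   2m
```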
-
After your PVC is `Bound`, create an app deployment that uses your PVC. Save the following deployment configuration to a file called `deployment.yaml`.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - image: nginx # Use the nginx image, or your own containerized app image.
        name: my-container
        command: ["/bin/sh"]
        args: ["-c", "while true; do date \"+%Y-%m-%d %H:%M:%S\"; sleep 3600; done"] # This app prints the timestamp, then sleeps.
        workingDir: /home
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        volumeMounts:
        - name: my-volume
          mountPath: /mount-path
      volumes:
      - name: my-volume
        persistentVolumeClaim:
          claimName: block-storage-pvc
```
-
Create the deployment in your cluster.

```
oc apply -f deployment.yaml
```
-
Wait until the deployment is `Ready`. Check the status of the deployment by running the following command.

```
oc get deployments
```

Example output

```
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
my-deployment   1/1     1            1           3m19s
```
-
List your pods and verify that the `my-deployment` pod is running.

```
oc get pods
```

Example output

```
NAME                            READY   STATUS    RESTARTS   AGE
my-deployment-ccdf87dfb-vzn95   1/1     Running   0          5m27s
```
-
Get the pod logs to verify that the timestamp is written. Replace `<pod_name>` with the name of your pod from the previous step.

```
oc logs <pod_name>
```

Example output

```
2022-01-21 14:18:59
```
You've successfully created a deployment that uses Block Storage for Classic! For more information, see the following links.
Deciding on the block storage configuration
Red Hat OpenShift on IBM Cloud provides pre-defined storage classes for block storage that you can use to provision block storage with a specific configuration.
Every storage class specifies the type of block storage that you provision, including available size, IOPS, file system, and the retention policy.
Make sure to choose your storage configuration carefully to have enough capacity to store your data. After you provision a specific type of storage by using a storage class, you can't change the type or retention policy for the storage device. However, you can change the size and the IOPS if you want to increase your storage capacity and performance. To change the type and retention policy for your storage, you must create a new storage instance and copy the data from the old storage instance to your new one.
-
List available storage classes in IBM Cloud® Kubernetes Service.

```
oc get sc | grep block
```

Example output

```
ibmc-block-bronze          ibm.io/ibmc-block   Delete   Immediate   true   148m
ibmc-block-custom          ibm.io/ibmc-block   Delete   Immediate   true   148m
ibmc-block-gold            ibm.io/ibmc-block   Delete   Immediate   true   148m
ibmc-block-retain-bronze   ibm.io/ibmc-block   Retain   Immediate   true   148m
ibmc-block-retain-custom   ibm.io/ibmc-block   Retain   Immediate   true   148m
ibmc-block-retain-gold     ibm.io/ibmc-block   Retain   Immediate   true   148m
ibmc-block-retain-silver   ibm.io/ibmc-block   Retain   Immediate   true   148m
ibmc-block-silver          ibm.io/ibmc-block   Delete   Immediate   true   148m
```
-
Review the configuration of a storage class.

```
oc describe storageclass <storageclass_name>
```
For more information about each storage class, see the storage class reference. If you don't find what you are looking for, consider creating your own customized storage class. To get started, check out the customized storage class samples.
-
Choose the type of block storage that you want to provision.

- Bronze, silver, and gold storage classes: These storage classes provision Endurance storage. With Endurance storage, you can choose the size of the storage in gigabytes at predefined IOPS tiers.
- Custom storage class: This storage class provisions Performance storage. With Performance storage, you have more control over the size of the storage and the IOPS.
-
Choose the size and IOPS for your block storage. The size and the number of IOPS define the total number of IOPS (input/output operations per second), which indicates how fast your storage is. The more total IOPS your storage has, the faster it processes read and write operations.
- Bronze, silver, and gold storage classes: These storage classes come with a fixed number of IOPS per gigabyte and are provisioned on SSD hard disks. The total number of IOPS depends on the size of the storage that you choose. You can select any whole number of gigabytes within the allowed size range, such as 20 Gi, 256 Gi, or 11854 Gi. To determine the total number of IOPS, multiply the IOPS per gigabyte by the selected size. For example, if you select a 1000Gi block storage size in the silver storage class that comes with 4 IOPS per GB, your storage has a total of 4000 IOPS.

  | Storage class | IOPS per gigabyte | Size range in gigabytes |
  | --- | --- | --- |
  | Bronze | 2 IOPS/GB | 20-12000 Gi |
  | Silver | 4 IOPS/GB | 20-12000 Gi |
  | Gold | 10 IOPS/GB | 20-4000 Gi |

- Custom storage class: When you choose this storage class, you have more control over the size and IOPS. For the size, you can select any whole number of gigabytes within the allowed size range. The size that you choose determines the IOPS range that is available to you. You can choose an IOPS that is a multiple of 100 within the specified range. The IOPS that you choose is static and does not scale with the size of the storage. For example, if you choose 40Gi with 100 IOPS, your total IOPS remains 100. The IOPS to gigabyte ratio also determines the type of hard disk that is provisioned for you. For example, if you are using 500Gi at 100 IOPS, your IOPS to gigabyte ratio is 0.2. Storage with a ratio of less than or equal to 0.3 is provisioned on SATA hard disks. If your ratio is greater than 0.3, your storage is provisioned on SSD hard disks.

  | Size range in gigabytes | IOPS range in multiples of 100 |
  | --- | --- |
  | 20-39 Gi | 100-1000 IOPS |
  | 40-79 Gi | 100-2000 IOPS |
  | 80-99 Gi | 100-4000 IOPS |
  | 100-499 Gi | 100-6000 IOPS |
  | 500-999 Gi | 100-10000 IOPS |
  | 1000-1999 Gi | 100-20000 IOPS |
  | 2000-2999 Gi | 200-40000 IOPS |
  | 3000-3999 Gi | 200-48000 IOPS |
  | 4000-7999 Gi | 300-48000 IOPS |
  | 8000-9999 Gi | 500-48000 IOPS |
  | 10000-12000 Gi | 1000-48000 IOPS |
-
Choose whether you want to keep your data after the cluster or the persistent volume claim (PVC) is deleted.

- If you want to keep your data, choose a `retain` storage class. When you delete the PVC, only the PVC is deleted. The PV, the physical storage device in your IBM Cloud infrastructure account, and your data still exist. To reclaim the storage and use it in your cluster again, you must remove the PV and follow the steps for using existing block storage.
- If you want the PV, the data, and your physical block storage device to be deleted when you delete the PVC, choose a storage class without `retain` in its name.
-
Choose whether you want to be billed hourly or monthly. The default setting is hourly billing.
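These choices come together in the PVC that you create later. The following minimal sketch (the PVC name and the monthly billing choice are illustrative, not requirements) requests retained, monthly-billed silver storage.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-storage-pvc
  labels:
    billingType: "monthly" # Billing defaults to "hourly" if this label is omitted.
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi # 100Gi at 4 IOPS/GB in the silver tier = 400 total IOPS.
  storageClassName: ibmc-block-retain-silver # A retain class keeps the PV and data when the PVC is deleted.
```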
Setting up encryption for Block Storage for Classic
You can set up encryption for Block Storage for Classic by using IBM Key Protect.
The following example explains how to create a service ID with the required access roles for Key Protect and your cluster. The credentials of this service ID are used to enable encryption for your Block Storage for Classic volumes.
Alternatively, if you have the Reader service access role for your Key Protect instance, and both the Viewer platform access role and the Writer service access role for your cluster, you can enable encryption by creating a Kubernetes secret that uses your personal API key.
-
Make sure that you are assigned the Editor platform access role and the Writer service access role for Key Protect so that you can create your own root key that you use to encrypt your Block Storage for Classic instance. You can review your IAM access roles in the IAM console. For more information about IAM roles, see IAM access.
-
If you don't have a Key Protect instance, provision one.
-
Create a root key. By default, the root key is created without an expiration date.
-
Create an IAM service ID. Replace `<service_ID_name>` with the name that you want to assign to your service ID. This service ID is used to access your Key Protect instance from your Block Storage for Classic volume.

```
ibmcloud iam service-id-create <service_ID_name>
```

Example output

```
OK
Service ID test-id is created successfully

ID            ServiceId-a1a11111-bb11-1111-a11b-1111111a11ba
Name          test-id
Description
CRN           crn:v1:bluemix:public:iam-identity::a/1a1111aa2b11111aaa1a1111aa2aa111::serviceid:ServiceId-a1a11111-bb11-1111-a11b-1111111a11bb
Version       1-bb11aa11a0aa1a11a011a1aaaa11a1bb
Locked        false
```
-
Create an API key for your service ID. Replace `<api_key_name>` with a name for your API key and replace `<service_ID_name>` with the name of the service ID that you created. Be sure to save your API key because it can't be retrieved later. This API key is stored in a Kubernetes secret in your cluster in a later step.

```
ibmcloud iam service-api-key-create <api_key_name> <service_ID_name>
```
-
Retrieve a list of IAM-enabled services in your account and note the name of the Key Protect instance that you created.
ibmcloud resource service-instances
-
Retrieve the GUID of your Key Protect instance. The ID is used to create an IAM service policy for your service ID.
ibmcloud resource service-instance "<instance_name>" | grep GUID
-
Create an IAM service policy to grant your service ID access to your Key Protect instance. The following command grants your service ID `Reader` access to your Key Protect instance. The Reader access role is the minimum service access role that your service ID must have to retrieve Key Protect keys. For more information, see Managing user access for Key Protect.

```
ibmcloud iam service-policy-create <service_ID_name> --roles Reader --service-name kms --service-instance <service_instance_GUID>
```
-
Create another IAM service access policy to give your service ID access to your cluster. The following command grants the Viewer platform access role and the Writer service access role to your service ID for your cluster. You can retrieve your cluster ID by running `ibmcloud oc cluster get <cluster_name>`.

```
ibmcloud iam service-policy-create <service_ID_name> --roles Writer,Viewer --service-name containers-kubernetes --service-instance <cluster_ID>
```
-
If you already have the `ibmcloud-block-storage-plugin` Helm chart installed, you must remove the Helm chart and install a new version. If you installed the plug-in without using Helm, you must manually remove the block storage plug-in deployment and all associated resources before you install a new version.

```
helm uninstall <release_name> -n <namespace>
```
-
Install the `ibmcloud-block-storage-plugin` Helm chart.

```
helm install <name> iks-charts/ibmcloud-block-storage-plugin
```
-
Create an `ibm-block-secrets` namespace.

```
oc create ns ibm-block-secrets
```
-
Create a role binding in the `ibm-block-secrets` namespace for the block storage plug-in.

```
oc create rolebinding ibmcloud-block-storage-plugin-byok --clusterrole=ibmcloud-block-storage-plugin-byok --serviceaccount=kube-system:ibmcloud-block-storage-plugin --group system:nodes --namespace=ibm-block-secrets
```
-
Create a Kubernetes secret that includes the credentials to access your root key in your Key Protect service instance.
-
Create a configuration file for the secret and save it as a file called `secret.yaml`.

```yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    kmsConfig: kpc-secretLabel
  name: <secret_name> # Enter a name for your secret. Example: my-secret
  namespace: <namespace> # Enter the namespace where you want to create the secret. The secret must be in the same namespace where your app is deployed. Example: default
stringData:
  config: |-
    {
      "api_key":"<service_id_api_key>", # Enter the API key for the service ID that you created. Example: "AA1aAAaA1a21AAaA1aAAaAa-AA-1AAaaA1aA1aAaaaAA"
      "iam_endpoint":"https://iam.cloud.ibm.com",
      "key_protect_endpoint":"https://<region>.kms.cloud.ibm.com", # Example: "https://us-east.kms.cloud.ibm.com"
      "root_key_crn":"<root_key_crn>", # Example: "crn:v1:bluemix:public:kms:<region>:a/1ab011ab2b11111aaa1a1111aa1aa111:11aa111a-1111-11a1-a111-a11a111aa111:key:11a11111-1a1a-111a-111a-11111a1a1aa1"
      "version":""
    }
type: ibm.io/kms-config
```
stringData.config.key_protect_endpoint
- Enter the regional endpoint of your Key Protect instance. For a list of Key Protect endpoints, see Regions and endpoints.
stringData.config.root_key_crn
- Enter the CRN of the root key that you created. To retrieve your root key CRN, complete the following steps.
- Navigate to the resource list in the IBM Cloud console.
- Click Services, then click your Key Protect instance.
- Find your root key, open its Actions menu, then click View CRN.
- Click the Copy button to copy the CRN.
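If you prefer the CLI and have the IBM Cloud Key Protect plug-in installed (an assumption; these steps otherwise use the console), you can list the IDs and names of your keys with the following sketch. Replace `<service_instance_GUID>` with the GUID that you retrieved earlier; the key ID is the last segment of the root key CRN.

```
ibmcloud kp keys --instance-id <service_instance_GUID>
```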
-
Create the secret in your cluster.
oc apply -f secret.yaml
-
Verify that your secret was created.
oc get secrets
-
-
Choose between the following options to create a Block Storage for Classic instance that encrypts data with your root key.
Encrypting volume data by using your own storage class
You can deploy apps that use encrypted volumes by first creating your own storage class.
The following steps explain how to create a custom, encrypted storage class that you can use to create multiple encrypted block storage instances with the same configuration. If you want to create an encrypted PVC by using one of the IBM-provided storage classes, you can do this by referencing the Key Protect credentials directly in your PVC.
-
Create your own storage class that provisions an encrypted block storage instance by using one of the IBM-provided storage classes as the basis. You can retrieve the details of a storage class by running `oc get sc <storageclass_name> -o yaml`. The following example is based on the `ibmc-block-bronze` storage class. Save the configuration as a file called `storageclass.yaml`.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <name> # Enter a name for the storage class. Example: my-custom-storageclass
parameters:
  billingType: hourly
  classVersion: "2"
  fsType: ext4
  iopsPerGB: "2"
  sizeRange: '[20-12000]Gi'
  type: Endurance
  encrypted: "true" # Enter "true" to enable encryption.
  encryptionKeySecret: <secret_name> # Enter the name of the secret that you created earlier. Example: my-secret
  encryptionKeyNamespace: <namespace> # Enter the namespace where you created your secret. Example: default
provisioner: ibm.io/ibmc-block
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
-
Create the storage class in your cluster.
oc apply -f storageclass.yaml
-
Add Block Storage for Classic to your app by using your own storage class to create a PVC.
-
Verify the encryption of your Block Storage for Classic volumes.
Create a PVC that references your Block Storage for Classic secret
You can provision encrypted Block Storage for Classic by creating a PVC that specifies the Kubernetes secret that holds your Key Protect credentials.
The following steps show how you can reference your Key Protect credentials in your PVC to create an encrypted Block Storage for Classic instance. To create multiple encrypted volumes without specifying the Key Protect credentials in each PVC, you can create a custom, encrypted storage class.
-
Review the provided Block Storage for Classic storage classes to determine which storage class best meets your app requirements. If the provided storage classes don't meet your app requirements, you can create your own customized storage class.
-
Create a PVC configuration file that is named `pvc.yaml` and that references the Kubernetes secret where you stored the Key Protect service credentials. To create this secret, see Setting up encryption for Block Storage for Classic.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: <pvc_name> # Enter a name for your PVC.
  annotations:
    volume.beta.kubernetes.io/storage-class: "<storage_class>" # Enter a storage class. To see a list of storage classes, run `oc get storageclasses`.
  labels:
    encrypted: "true"
    encryptionKeyNamespace: <namespace> # Enter the namespace where your secret was created.
    encryptionKeySecret: <secret_name> # Enter the name of the secret that you created.
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```
-
Create the PVC in your cluster.
oc apply -f pvc.yaml
-
Check the status of your PVC.
oc get pvc
-
Wait for your PVC to bind, then create a deployment that uses your PVC.
-
Verify the encryption of your Block Storage for Classic volumes.
Verifying the encryption of your Block Storage for Classic volumes
You can verify the encryption of your volumes by checking the volume mount path.
-
Log in to your app pod. Replace `<pod_name>` with the name of the pod that mounts your encrypted Block Storage for Classic volume.

```
oc exec -it <pod_name> -- bash
```
-
List the file system of your pod.
df -h
-
Review the file system path for your encrypted Block Storage for Classic volume.
-
Encrypted volumes have a path structure of `/dev/mapper/<pvc-ID_encrypted>`. In this example, the encrypted volume is mounted at the `/test` file path in the pod.

```
Filesystem                                                       Size  Used Avail Use% Mounted on
overlay                                                           98G  8.2G   85G   9% /
tmpfs                                                             64M     0   64M   0% /dev
tmpfs                                                            2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/pvc-a011a111-1111-1111-111a-aaa1a1111a11_encrypted    20G   45M   20G   1% /test
```
-
Unencrypted volumes have a path structure of `/dev/mapper/<random_string>`.

```
Filesystem                                      Size  Used Avail Use% Mounted on
overlay                                          98G   16G   78G  17% /
tmpfs                                            64M     0   64M   0% /dev
tmpfs                                           7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/mapper/3600a09803830476e733f4e477370716e    24G   45M   24G   1% /test
```
-
Removing your Kubernetes secret doesn't revoke access to the volume data. If you created a pod-only deployment, you must delete the pod. If you created a deployment, you must delete the deployment.
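For example, depending on how you deployed the app (the names are placeholders):

```
oc delete pod <pod_name>
oc delete deployment <deployment_name>
```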
Adding block storage to apps
Create a persistent volume claim (PVC) to dynamically provision block storage for your cluster. Dynamic provisioning automatically creates the matching persistent volume (PV) and orders the actual storage device in your IBM Cloud infrastructure account.
Block storage comes with a `ReadWriteOnce` access mode. You can mount it to only one pod on one worker node in the cluster at a time.
Before you begin:
- If you have a firewall, allow egress access for the IBM Cloud infrastructure IP ranges of the zones that your clusters are in so that you can create PVCs.
- Decide on a pre-defined storage class or create a customized storage class.
Looking to deploy block storage in a stateful set? For more information, see Using block storage in a stateful set.
To add block storage:
-
Create a configuration file to define your persistent volume claim (PVC) and save the configuration as a `.yaml` file.

- Example for bronze, silver, and gold storage classes: The following `.yaml` file creates a claim that is named `block-storage-pvc` of the `ibmc-block-silver` storage class, billed hourly, with a gigabyte size of `24Gi`.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-storage-pvc
  labels:
    billingType: "hourly"
    region: us-south
    zone: dal13
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 24Gi
  storageClassName: ibmc-block-silver
```
- Example for using your own storage class: The following `.yaml` file creates a claim that is named `block-storage-pvc` of the storage class `ibmc-block-retain-custom`, billed hourly, with a gigabyte size of `45Gi` and IOPS of `"300"`.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-storage-pvc
  labels:
    billingType: "hourly"
    region: us-south
    zone: dal13
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 45Gi
      iops: "300"
  storageClassName: ibmc-block-retain-custom
```
`name`
- Enter the name of the PVC.

`billingType`
- In the metadata labels section, specify the frequency for which your storage bill is calculated, "monthly" or "hourly". The default is "hourly".

`region`
- In the metadata labels section, specify the region where you want to provision your block storage. If you specify the region, you must also specify a zone. If you don't specify a region, or the specified region is not found, the storage is created in the same region as your cluster. This option is supported only with the IBM Cloud Block Storage plug-in version 1.0.1 or later. For older plug-in versions, if you have a multizone cluster, the zone in which your storage is provisioned is selected on a round-robin basis to balance volume requests evenly across all zones. To specify the zone for your storage, you can create a customized storage class first. Then, create a PVC with your customized storage class.

`zone`
- In the metadata labels section, specify the zone where you want to provision your block storage. If you specify the zone, you must also specify a region. If you don't specify a zone, or the specified zone is not found in a multizone cluster, the zone is selected on a round-robin basis. This option is supported only with the IBM Cloud Block Storage plug-in version 1.0.1 or later. For older plug-in versions, if you have a multizone cluster, the zone in which your storage is provisioned is selected on a round-robin basis to balance volume requests evenly across all zones. To specify the zone for your storage, you can create a customized storage class first. Then, create a PVC with your customized storage class.

`storage`
- In the spec resources requests section, enter the size of the block storage, in gigabytes (Gi). Make sure to specify a size that matches the amount of data that you want to store. To increase the size or IOPS after your storage is provisioned, see Changing the size and IOPS of your existing storage device.

`iops`
- This option is available for the custom storage classes only (`ibmc-block-custom` and `ibmc-block-retain-custom`). In the spec resources requests section, specify the total IOPS for the storage by selecting a multiple of 100 within the allowable range. If you choose an IOPS other than one that is listed, the IOPS is rounded up.

`storageClassName`
- In the spec section, enter the name of the storage class that you want to use to provision block storage. You can use one of the IBM-provided storage classes or create your own storage class. If you don't specify a storage class, the PV is created with the default storage class `ibmc-file-bronze`.
If you want to use a customized storage class, create your PVC with the corresponding storage class name, a valid IOPS, and size.
-
-
Create the PVC.
oc apply -f block-storage.yaml
-
Verify that your PVC is created and bound to the PV. This process can take a few minutes.
oc get pvc
Example output

```
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
block-storage-pvc   Bound    pvc-1aa1aaaa-11a1-48d1-ab11-11b11111f3bc   45Gi       RWO            ibmc-block-silver   150m
```
-
To mount the PV to your deployment, create a configuration `.yaml` file and specify the PVC that binds the PV.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment_name>
  labels:
    app: <deployment_label>
spec:
  selector:
    matchLabels:
      app: <app_name>
  template:
    metadata:
      labels:
        app: <app_name>
    spec:
      containers:
      - image: <image_name>
        name: <container_name>
        volumeMounts:
        - name: <volume_name>
          mountPath: /<file_path>
      volumes:
      - name: <volume_name>
        persistentVolumeClaim:
          claimName: <pvc_name>
```
`app`
- In the metadata, enter a label for the deployment.

`matchLabels.app` and `labels.app`
- In the spec selector and in the template metadata, enter a label for your app.

`image`
- The name of the container image that you want to use. To list available images in your IBM Cloud Container Registry account, run `ibmcloud cr image-list`.

`name`
- The name of the container that you want to deploy to your cluster.

`mountPath`
- In the container volume mounts section, enter the absolute path of the directory where the volume is mounted inside the container. Data that is written to the mount path is stored under the root directory in your physical block storage instance. If you want to share a volume between different apps, you can specify volume sub paths for each of your apps, as shown in the sketch after this list.

`name`
- In the container volume mounts section, enter the name of the volume to mount to your pod.

`name`
- In the volumes section, enter the name of the volume to mount to your pod. Typically, this name is the same as `volumeMounts/name`.

`claimName`
- In the volumes persistent volume claim section, enter the name of the PVC that binds the PV that you want to use.
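A minimal sketch of sharing one volume between two containers by using sub paths. The container names, sub path names, and PVC name are illustrative; this fragment slots into the pod spec of a deployment like the one above. Each container writes to its own subdirectory of the same block storage volume.

```yaml
      containers:
      - name: app-one # Hypothetical container name.
        image: nginx
        volumeMounts:
        - name: shared-volume
          mountPath: /data
          subPath: app-one # Data lands in the app-one/ directory on the volume.
      - name: app-two
        image: nginx
        volumeMounts:
        - name: shared-volume
          mountPath: /data
          subPath: app-two
      volumes:
      - name: shared-volume
        persistentVolumeClaim:
          claimName: block-storage-pvc
```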
-
Create the deployment.
oc apply -f <local_yaml_path>
-
Verify that the PV is successfully mounted.
oc describe deployment <deployment_name>
The mount point is in the Volume Mounts field and the volume is in the Volumes field.

```
Volume Mounts:
  /var/run/secrets/kubernetes.io/serviceaccount from default-token-tqp61 (ro)
  /volumemount from myvol (rw)
...
Volumes:
  myvol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  block-storage-pvc
    ReadOnly:   false
```
Using existing block storage in your cluster
If you have an existing physical storage device that you want to use in your cluster, you can manually create the PV and PVC to statically provision the storage.
Before you can start to mount your existing storage to an app, you must retrieve all necessary information for your PV.
Retrieving the information of your existing block storage
-
Retrieve or generate an API key for your IBM Cloud infrastructure account.
- Log in to the IBM Cloud infrastructure portal.
- Select Account, then Users, and then User List.
- Find your user ID.
- In the API KEY column, click Generate to generate an API key or View to view your existing API key.
-
Retrieve the API username for your IBM Cloud infrastructure account.
- From the User List menu, select your user ID.
- In the API Access Information section, find your API Username.
-
Log in to the IBM Cloud infrastructure CLI plug-in.
ibmcloud sl init
-
Choose to authenticate by using the username and API key for your IBM Cloud infrastructure account.
-
Enter the username and API key that you retrieved in the previous steps.
-
List available block storage devices.
ibmcloud sl block volume-list
Example output

```
id         username            datacenter   storage_type              capacity_gb   bytes_used   lunId
11111111   IBM01AAA1111111-1   wdc07        endurance_block_storage   45            -            2
```
-
Retrieve the volume details. Replace `<volume_ID>` with the ID of the block storage volume that you retrieved in the previous step.

```
ibmcloud sl block volume-detail <volume_ID>
```

Example output

```
ID                         11111111
User name                  IBM01AAA1111111-1
Type                       endurance_block_storage
Capacity (GB)              45
LUN Id                     2
IOPs                       100
Datacenter                 wdc07
Target IP                  10.XXX.XX.XXX
# of Active Transactions   0
Replicant Count            0
```
-
Make a note of the `ID`, `Capacity`, `LUN Id`, `Datacenter`, and `Target IP` of the volume that you want to mount to your cluster. Note: To mount existing storage to a cluster, you must have a worker node in the same zone as your storage. To verify the zone of your worker node, run `ibmcloud oc worker ls --cluster <cluster_name_or_ID>`.
Creating a persistent volume (PV) and a matching persistent volume claim (PVC)
-
Optional: If you have storage that you provisioned with a `retain` storage class, when you remove the PVC, the PV and the physical storage device are not removed. To reuse the storage in your cluster, you must remove the PV first. List existing PVs and look for the PV that belongs to your persistent storage. The PV is in a `Released` state.

```
oc get pv
```
-
Remove the PV.
oc delete pv <pv_name>
-
Verify the PV is removed.
oc get pv
-
Create a configuration file for your PV and save it as a file called `pv.yaml`. Include the parameters that you retrieved earlier.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "block-storage-pv" # Enter a name for your PV. For example, my-static-pv.
  labels:
    failure-domain.beta.kubernetes.io/region: "<region>" # Example: us-east.
    failure-domain.beta.kubernetes.io/zone: "<zone>" # Example: wdc04.
spec:
  capacity:
    storage: "<storage>"
  accessModes:
    - ReadWriteOnce
  flexVolume:
    driver: "ibm/ibmc-block"
    fsType: "<fs_type>" # Enter ext4 or xfs.
    options:
      "Lun": "<Lun_ID>"
      "TargetPortal": "<TargetPortal>"
      "VolumeID": "<VolumeID>"
      "volumeName": "block-storage-pv" # Enter the same value as your PV name from metadata.name.
```
`name`
- Give your PV a name. For example, `block-storage-pv`. Note that you must also enter this value in `spec.flexVolume.options` as the `volumeName`.

`labels`
- Enter the region and the zone that you retrieved earlier. You must have at least one worker node in the same region and zone as your persistent storage to mount the storage in your cluster.

`region`
- Enter the region where your block storage is located. Note that your cluster and block storage must be in the same region. To find your cluster location, run `ibmcloud oc cluster ls`. For more information about the available regions and zones, see regions and zones. For example, `us-east`.

`zone`
- Enter the zone where your storage volume is located. Note that to attach block storage to your cluster, you must have a worker node available in the same zone as the volume that you want to attach. To find the zones of your worker nodes, run `ibmcloud oc worker ls -c <cluster>`. For example, `wdc04`.

`storage`
- Enter the storage size of the existing block storage volume that you want to attach to your cluster. The storage size must be written in gigabytes, for example, 20Gi (20 GB) or 1000Gi (1 TB).

`fsType`
- Enter the file system type that is configured for your existing block storage, either `ext4` or `xfs`. If you don't specify this option, the PV defaults to `ext4`. If the wrong `fsType` is defined, the PV creation succeeds, but mounting the PV to a pod fails.

`Lun`
- Enter the LUN ID of your block storage volume.

`TargetPortal`
- Enter the IP address of your block storage, which is shown as the `Target IP` in the volume details.

`VolumeID`
- Enter the ID of your block storage volume.

`volumeName`
- Enter the same value as your PV name. For example, `block-storage-pv`.

To retrieve these values, run `ibmcloud sl block volume-list` to get the volume ID, then run `ibmcloud sl block volume-detail <volume_ID>` to get the details of your volume.
-
Create the PV in your cluster.
oc apply -f pv.yaml
-
Verify that the PV is created.
oc get pv
-
Create another configuration file for your PVC and save it as a file called `static-pvc.yaml`. In order for the PVC to match the PV that you created earlier, you must choose the same value for `storage` and `accessModes`. The `storageClassName` field must be an empty string. If any of these fields don't match the PV, a new PV is created automatically instead.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: block-storage-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "20Gi"
  storageClassName: ""
```
-
Create your PVC.
oc apply -f static-pvc.yaml
-
Verify that your PVC is created and bound to the PV that you created earlier. This process can take a few minutes.

```
oc describe pvc block-storage-pvc
```

Example output

```
Name:          block-storage-pvc
Namespace:     default
StorageClass:
Status:        Bound
```
-
Optional: Save the following example pod configuration as a file called `pod.yaml`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: block-storage
  labels:
    app: block-storage
spec:
  containers:
  - name: block-storage
    image: nginx
    command: ["/bin/sh"]
    args: ["-c", "while true; do date \"+%Y-%m-%d %H:%M:%S\"; sleep 3600; done"]
    workingDir: /home
    imagePullPolicy: Always
    ports:
    - containerPort: 80
    volumeMounts:
    - name: block-storage-pv
      mountPath: /home
  volumes:
  - name: block-storage-pv
    persistentVolumeClaim:
      claimName: block-storage-pvc
```
-
Create the pod in your cluster.
oc create -f pod.yaml
-
After the pod is in `Running` status, get the logs.

```
oc logs block-storage
```

Example output

```
2022-01-21 16:11:00
```
You successfully created a PV and bound it to a PVC. Then, you deployed an app that uses block storage. Cluster users can now mount the PVC to their deployments and start reading from and writing to the persistent volume.
Using block storage in a stateful set
If you have a stateful app such as a database, you can create stateful sets that use block storage to store your app's data. Alternatively, you can use an IBM Cloud database-as-a-service and store your data in the cloud.
- What do I need to be aware of when adding block storage to a stateful set?
- To add storage to a stateful set, you specify your storage configuration in the `volumeClaimTemplates` section of your stateful set YAML. The `volumeClaimTemplates` section is the basis for your PVC and can include the storage class and the size or IOPS of the block storage that you want to provision. However, if you include labels in your `volumeClaimTemplates`, Kubernetes does not include these labels when it creates the PVC. Instead, you must add the labels directly to your stateful set.
You can't deploy two stateful sets at the same time. If you try to create a stateful set before a different one is fully deployed, then the deployment of your stateful set might lead to unexpected results.
- How can I create my stateful set in a specific zone?
- In a multizone cluster, you can specify the zone and region where you want to create your stateful set in the `spec.selector.matchLabels` and `spec.template.metadata.labels` sections of your stateful set YAML. Alternatively, you can add those labels to a customized storage class and use this storage class in the `volumeClaimTemplates` section of your stateful set.

- Can I delay binding of a PV to my stateful pod until the pod is ready?
- Yes, you can create your own storage class for your PVC that includes the `volumeBindingMode: WaitForFirstConsumer` field.

- What options do I have to add block storage to a stateful set?
- If you want to automatically create your PVC when you create the stateful set, use dynamic provisioning. You can also choose to pre-provision your PVCs or use existing PVCs with your stateful set.
Creating the PVC by using dynamic provisioning when you create a stateful set
Use this option if you want to automatically create the PVC when you create the stateful set.
Before you begin: Access your Red Hat OpenShift cluster.
Complete the following steps to verify that all existing stateful sets in your cluster are fully deployed. If a stateful set is still being deployed, you can't start creating your stateful set. You must wait until all stateful sets in your cluster are fully deployed to avoid unexpected results.
-
List existing stateful sets in your cluster.
oc get statefulset --all-namespaces
Example output

```
NAME            DESIRED   CURRENT   AGE
mystatefulset   3         3         6s
```
-
View the Pods Status of each stateful set to ensure that the deployment of the stateful set is finished.
oc describe statefulset <statefulset_name>
Example output

```
Name:               nginx
Namespace:          default
CreationTimestamp:  Fri, 05 Oct 2022 13:22:41 -0400
Selector:           app=nginx,billingType=hourly,region=us-south,zone=dal10
Labels:             app=nginx
                    billingType=hourly
                    region=us-south
                    zone=dal10
Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"podManagementPolicy":"Par..."
Replicas:           3 desired | 3 total
Pods Status:        0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
           billingType=hourly
           region=us-south
           zone=dal10
...
```
A stateful set is fully deployed when the number of replicas that you find in the Replicas section of your CLI output equals the number of Running pods in the Pods Status section. If a stateful set is not fully deployed yet, wait until the deployment is finished before you proceed.
-
Create a configuration file for your stateful set and the service that you use to expose the stateful set. The following example shows how to deploy NGINX as a stateful set with three replicas. For each replica, a 20 gigabyte block storage device is provisioned based on the specifications that are defined in the `ibmc-block-retain-bronze` storage class. All storage devices are provisioned in the `dal10` zone. Because block storage can't be accessed from other zones, all replicas of the stateful set are also deployed onto worker nodes that are located in `dal10`.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  serviceName: "nginx"
  replicas: 3
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app: nginx
      billingType: "hourly"
      region: "us-south" # Enter the region where your cluster is located.
      zone: "dal10"
  template:
    metadata:
      labels:
        app: nginx
        billingType: "hourly"
        region: "us-south"
        zone: "dal10"
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myvol
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myvol
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
          iops: "300" # Required only for performance storage.
      storageClassName: ibmc-block-retain-bronze
```
The following example shows how to deploy NGINX as a stateful set with three replicas. The stateful set does not specify the region and zone where the block storage is created. Instead, the stateful set uses an anti-affinity rule to ensure that the pods are spread across worker nodes and zones. By defining `topologyKey: failure-domain.beta.kubernetes.io/zone`, the Kubernetes scheduler can't schedule a pod on a worker node if the worker node is in the same zone as a pod that has the `app: nginx` label. For each stateful set pod, two PVCs are created as defined in the `volumeClaimTemplates` section, but the creation of the block storage instances is delayed until a stateful set pod that uses the storage is scheduled. This setup is referred to as topology-aware volume scheduling.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibmc-block-bronze-delayed
parameters:
  billingType: hourly
  classVersion: "2"
  fsType: ext4
  iopsPerGB: "2"
  sizeRange: '[20-12000]Gi'
  type: Endurance
provisioner: ibm.io/ibmc-block
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 3
  podManagementPolicy: "Parallel"
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx
              topologyKey: failure-domain.beta.kubernetes.io/zone
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myvol1
          mountPath: /usr/share/nginx/html
        - name: myvol2
          mountPath: /tmp1
  volumeClaimTemplates:
  - metadata:
      name: myvol1
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      storageClassName: ibmc-block-bronze-delayed
  - metadata:
      name: myvol2
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      storageClassName: ibmc-block-bronze-delayed
```
`name`
- Enter a name for your stateful set. The name that you enter is used to create the name for your PVC in the format `<volume_name>-<statefulset_name>-<replica_number>`.

`serviceName`
- Enter the name of the service that you want to use to expose your stateful set.

`replicas`
- Enter the number of replicas for your stateful set.

`podManagementPolicy`
- Enter the pod management policy that you want to use for your stateful set.
  - OrderedReady: With this option, stateful set replicas are deployed one after another. For example, if you specified three replicas, then Kubernetes creates the PVC for your first replica, waits until the PVC is bound, deploys the stateful set replica, and mounts the PVC to the replica. After the deployment is finished, the second replica is deployed. For more information about this option, see OrderedReady Pod Management.
  - Parallel: With this option, the deployment of all stateful set replicas is started at the same time. If your app supports parallel deployment of replicas, then use this option to save deployment time for your PVCs and stateful set replicas.

`matchLabels`
- In the spec selector section, enter all labels that you want to include in your stateful set and your PVC. Labels that you include in the `volumeClaimTemplates` of your stateful set are not recognized by Kubernetes. Sample labels that you might want to include are:
  - region and zone: If you want all your stateful set replicas and PVCs to be created in one specific zone, add both labels. You can also specify the zone and region in the storage class that you use. If you don't specify a zone and region and you have a multizone cluster, the zone in which your storage is provisioned is selected on a round-robin basis to balance volume requests evenly across all zones.
  - `billingType`: Enter the billing type that you want to use for your PVCs. Choose between `hourly` and `monthly`. If you don't specify this label, all PVCs are created with an hourly billing type.

`labels`
- In the spec template metadata section, enter the same labels that you added to the `spec.selector.matchLabels` section.

`affinity`
- In the spec template spec section, specify your anti-affinity rule to ensure that your stateful set pods are distributed across worker nodes and zones. The example shows an anti-affinity rule where the stateful set pod prefers not to be scheduled on a worker node where a pod with the `app: nginx` label runs. The `topologyKey: failure-domain.beta.kubernetes.io/zone` setting restricts this anti-affinity rule even more and prevents the pod from being scheduled on a worker node that is in the same zone as a pod with the `app: nginx` label. By using this anti-affinity rule, you can achieve anti-affinity across worker nodes and zones.

`name`
- In the spec volume claim templates metadata section, enter a name for your volume. Use the same name that you defined in the `spec.containers.volumeMount.name` section. The name that you enter here is used to create the name for your PVC in the format `<volume_name>-<statefulset_name>-<replica_number>`.

`storage`
- In the spec volume claim templates spec resources requests section, enter the size of the block storage in gigabytes (Gi).

`iops`
- In the spec volume claim templates spec resources requests section, if you want to provision performance storage, enter the number of IOPS. If you use an endurance storage class and specify a number of IOPS, the number of IOPS is ignored. Instead, the IOPS that is specified in your storage class is used.

`storageClassName`
- In the spec volume claim templates spec section, enter the storage class that you want to use. To list existing storage classes, run `oc get sc | grep block`. If you don't specify a storage class, the PVC is created with the default storage class that is set in your cluster. Make sure that the default storage class uses the `ibm.io/ibmc-block` provisioner so that your stateful set is provisioned with block storage.
-
Create your stateful set.
oc apply -f statefulset.yaml
-
Wait for your stateful set to be deployed.

```
oc describe statefulset <statefulset_name>
```

To see the current status of your PVCs, run `oc get pvc`. The name of your PVC is formatted as `<volume_name>-<statefulset_name>-<replica_number>`.
Static provisioning by using existing PVCs with a stateful set
You can pre-provision your PVCs before creating your stateful set or use existing PVCs with your stateful set.
When you dynamically provision your PVCs when creating the stateful set, the name of the PVC is assigned based on the values that you used in the stateful set YAML file. In order for the stateful set to use existing PVCs, the name of your PVCs must match the name that would automatically be created when using dynamic provisioning.
Before you begin: Access your Red Hat OpenShift cluster.
-
If you want to pre-provision the PVC for your stateful set before you create the stateful set, follow steps 1-3 in Adding block storage to apps to create a PVC for each stateful set replica. Make sure that you create your PVC with a name that follows the format `<volume_name>-<statefulset_name>-<replica_number>`.

`volume_name`
- Use the name that you want to specify in the `spec.volumeClaimTemplates.metadata.name` section of your stateful set, such as `nginxvol`.

`statefulset_name`
- Use the name that you want to specify in the `metadata.name` section of your stateful set, such as `nginxweb`. Note that Kubernetes resource names can't contain underscores.

`replica_number`
- Enter the number of your replica, starting with 0.

For example, if you must create three stateful set replicas, create three PVCs with the following names: `nginxvol-nginxweb-0`, `nginxvol-nginxweb-1`, and `nginxvol-nginxweb-2`.

Looking to create a PVC and PV for an existing storage device? Create your PVC and PV by using static provisioning.
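As a sketch, assuming the `nginxvol` and `nginxweb` names from above and an illustrative 20Gi bronze configuration, you can generate the three PVCs in a loop.

```
# Create one PVC per stateful set replica, named so that the stateful set
# adopts them: <volume_name>-<statefulset_name>-<replica_number>.
for i in 0 1 2; do
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginxvol-nginxweb-${i}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: ibmc-block-bronze
EOF
done
```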
-
Follow the steps in Creating the PVC by using dynamic provisioning when you create a stateful set to create your stateful set. The name of your PVC follows the format `<volume_name>-<statefulset_name>-<replica_number>`. Make sure to use the following values from your PVC name in the stateful set specification:

`spec.volumeClaimTemplates.metadata.name`
- Enter the `<volume_name>` of your PVC name.

`metadata.name`
- Enter the `<statefulset_name>` of your PVC name.

`spec.replicas`
- Enter the number of replicas that you want to create for your stateful set. The number of replicas must equal the number of PVCs that you created earlier.

If your PVCs are in different zones, don't include a region or zone label in your stateful set.
-
Verify that the PVCs are used in your stateful set replica pods by listing the pods in your cluster. Identify the pods that belong to your stateful set.
oc get pods
-
Verify that your existing PVC is mounted to your stateful set replica. Review the `ClaimName` in the `Volumes` section of your CLI output.

```
oc describe pod <pod_name>
```

Example output

```
Name:         nginx-0
Namespace:    default
Node:         10.xxx.xx.xxx/10.xxx.xx.xxx
Start Time:   Fri, 05 Oct 2022 13:24:59 -0400
...
Volumes:
  myvol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myvol-nginx-0
...
```
Changing the size and IOPS of your existing storage device
If you want to increase storage capacity or performance, you can modify your existing volume.
For questions about billing and to find the steps for how to use the IBM Cloud console to modify your storage, see Expanding Block Storage capacity and Adjusting IOPS. Updates that you make from the console are not reflected in the persistent volume (PV). To add this information to the PV, run `oc patch pv <pv_name>` and manually update the size and IOPS in the Labels and Annotations section of your PV.
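A minimal sketch of such a patch, assuming the `CapacityGb` and `IOPS` label names that are shown in the `oc describe pv` output later in this section, with illustrative values of 50 GB and 500 IOPS:

```
oc patch pv <pv_name> --type merge -p '{"metadata":{"labels":{"CapacityGb":"50","IOPS":"500"}}}'
```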
-
List the PVCs in your cluster and note the name of the associated PV from the VOLUME column.
oc get pvc
Example output

```
NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
myvol   Bound    pvc-01ac123a-123b-12c3-abcd-0a1234cb12d3   20Gi       RWO            ibmc-block-bronze   147d
```
-
If you want to change the IOPS and the size for your block storage, edit the IOPS in the `metadata.labels.IOPS` section of your PV first. You can increase or decrease the IOPS value. Make sure that you enter an IOPS that is supported for the storage type that you have. For example, if you have endurance block storage with 4 IOPS per GB, you can change the IOPS to either 2 or 10. For more supported IOPS values, see Deciding on the block storage configuration.

```
oc edit pv <pv_name>
```

To change the IOPS from the CLI, you must also change the size of your block storage. If you want to change only the IOPS, but not the size, you must request the IOPS change from the console.
-
Edit the PVC and add the new size in the `spec.resources.requests.storage` section of your PVC. You can increase the size only up to the maximum capacity that is set by your storage class. You can't downsize your existing storage. To see available sizes for your storage class, see Deciding on the block storage configuration.

```
oc edit pvc <pvc_name>
```
-
Verify that the volume expansion is requested. The volume expansion is successfully requested when you see a `FileSystemResizePending` message in the Conditions section of your CLI output.

```
oc describe pvc <pvc_name>
```

Example output

```
...
Conditions:
  Type                      Status  LastProbeTime                    LastTransitionTime               Reason  Message
  ----                      ------  -----------------                ------------------               ------  -------
  FileSystemResizePending   True    Mon, 01 Jan 0001 00:00:00 +0000  Thu, 25 Apr 2022 15:52:49 -0400          Waiting for user to (re-)start a pod to finish file system resize of volume on node.
```
-
List all the pods that mount the PVC. If your PVC is mounted by a pod, the volume expansion is automatically processed. If your PVC is not mounted by a pod, you must mount the PVC to a pod so that the volume expansion can be processed.
oc get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.volumes[*]}{.persistentVolumeClaim.claimName}{" "}{end}{end}' | grep "<pvc_name>"
Mounted pods are returned in the format `<pod_name>: <pvc_name>`.
-
If your PVC is not mounted by a pod, create a pod or deployment and mount the PVC. If your PVC is mounted by a pod, continue with the next step.
-
Verify that the size and IOPS are changed in the Labels section of your CLI output. This process might take a few minutes to complete.

```
oc describe pv <pv_name>
```

Example output

```
...
Labels:  CapacityGb=50
         Datacenter=dal10
         IOPS=500
```
-
Log in to the pod that mounts the PVC.
oc exec -it <pod_name> -- bash
-
Run the following command to use host binaries.
chroot /host
-
Resize the file system.
sudo resize2fs <filesystem-path>
Example command
sudo resize2fs /dev/vdg
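The `resize2fs` command applies to `ext4` file systems. If your volume was created with `xfs`, use `xfs_growfs` with the mount path instead (a sketch; `/mount-path` is a placeholder).

```
xfs_growfs /mount-path
```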
-
Verify the file system is resized.
df -h
Backing up and restoring data
Block storage is provisioned into the same location as the worker nodes in your cluster. The storage is hosted on clustered servers by IBM to provide availability in case a server goes down. However, block storage is not backed up automatically and might be inaccessible if the entire location fails. To protect your data from being lost or damaged, you can set up periodic backups that you can use to restore your data when needed.
Review the following backup and restore options for your block storage:
Setting up periodic snapshots
You can set up periodic snapshots for your block storage, which is a read-only image that captures the state of the instance at a point in time.
To store the snapshot, you must request snapshot space on your block storage. Snapshots are stored on the existing storage instance within the same zone. You can restore data from a snapshot if a user accidentally removes important data from the volume. To create a snapshot for your volume, complete the following steps.
-
Log in to the `ibmcloud sl` CLI.

```
ibmcloud sl init
```
-
List existing PVs in your cluster.
oc get pv
-
Get the details for the PV for which you want to create snapshot space and note the volume ID, the size, and the IOPS. The size and IOPS are shown in the Labels section of your CLI output.
oc describe pv <pv_name>
-
To find the volume ID, review the `ibm.io/network-storage-id` annotation of your CLI output.
-
Order snapshot space for your existing volume by using the parameters that you retrieved in the previous step.
ibmcloud sl block snapshot-order <volume_ID> --size <size> --tier <iops>
-
Wait for the snapshot space to be provisioned. The snapshot space is successfully provisioned when the Snapshot Size (GB) value in your CLI output changes from 0 to the size that you ordered.
ibmcloud sl block volume-detail <volume_ID>
-
Create the snapshot for your volume and note the ID of the snapshot that is created for you.
ibmcloud sl block snapshot-create <volume_ID>
-
Verify that the snapshot is created successfully.
ibmcloud sl block snapshot-list <volume_ID>
-
Set the snapshot schedule. For more information about the options that are available for your snapshot schedule, refer to the CLI documentation.

```
ibmcloud sl block snapshot-enable <volume_ID> [options]
```
-
To restore data from a snapshot to an existing volume, run the following command.
ibmcloud sl block snapshot-restore <volume_ID> <snapshot_ID>
Replicating snapshots to another zone
To protect your data from a zone failure, you can replicate snapshots to a block storage instance that is set up in another zone.
Data can be replicated from the primary storage to the backup storage only. You can't mount a replicated block storage instance to a cluster. When your primary storage fails, you can manually set your replicated backup storage to be the primary one. Then, you can mount it to your cluster. After your primary storage is restored, you can restore the data from the backup storage.
Duplicating storage
You can duplicate your block storage instance in the same zone as the original storage instance.
A duplicate has the same data as the original storage instance at the point in time that you create the duplicate. Unlike a replica, the duplicate is an independent storage instance that you can use separately from the original. To duplicate your storage, you must first set up snapshots for the volume.
Backing up data to IBM Cloud® Object Storage
You can use the ibm-backup-restore Helm chart to spin up a backup and restore pod in your cluster.
This pod contains a script to run a one-time or periodic backup for any persistent volume claim (PVC) in your cluster. Data is stored in your IBM Cloud® Object Storage instance that you set up in a zone.
Block storage is mounted with an RWO access mode. This access mode allows only one pod to be mounted to the block storage at a time. To back up your data, you must unmount the storage from the app pod, mount it to your backup pod, back up the data, and then remount the storage to your app pod.
To make your data even more highly available and protect your app from a zone failure, set up a second Object Storage instance and replicate data across zones. If you need to restore data from your Object Storage instance, use the restore pod that is provided with the Helm chart.
Copying data to and from pods and containers
You can use the oc cp command to copy files and directories to and from pods or specific containers in your cluster.
Access your Red Hat OpenShift cluster.
When you run the oc cp command, if you don't specify a container with -c, the command uses the first available container in the pod.
Copy data from your local machine to a pod in your cluster.
oc cp <local_filepath>/<filename> <namespace>/<pod>:<pod_filepath>
Copy data from a pod in your cluster to your local machine.
oc cp <namespace>/<pod>:<pod_filepath>/<filename> <local_filepath>/<filename>
Copy data from your local machine to a specific container that runs in a pod in your cluster.
oc cp <local_filepath>/<filename> <namespace>/<pod>:<pod_filepath> -c <container>
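For example, the following commands copy a local file into the my-deployment pod from the quick start and copy it back out. The pod name, namespace, and paths are illustrative; list your pods with oc get pods to get the actual pod name.
oc cp ./config.json default/my-deployment-ccdf87dfb-vzn95:/mount-path/config.json
oc cp default/my-deployment-ccdf87dfb-vzn95:/mount-path/config.json ./config-copy.json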
Storage class reference
Bronze
- Name
ibmc-block-bronze
ibmc-block-retain-bronze
- Type
- Endurance storage
- File system
ext4
- IOPS per gigabyte
- 2
- Size range in gigabytes
- 20-12000 Gi
- Hard disk
- SSD
- Reclaim policy
ibmc-block-bronze: Delete
ibmc-block-retain-bronze: Retain
Silver
- Name
ibmc-block-silver
ibmc-block-retain-silver
- Type
- Endurance storage
- File system
ext4
- IOPS per gigabyte
- 4
- Size range in gigabytes
- 20-12000 Gi
- Hard disk
- SSD
- Reclaim policy
ibmc-block-silver: Delete
ibmc-block-retain-silver: Retain
Gold
- Name
ibmc-block-gold
ibmc-block-retain-gold
- Type
- Endurance storage
- File system
ext4
- IOPS per gigabyte
- 10
- Size range in gigabytes
- 20-4000 Gi
- Hard disk
- SSD
- Reclaim policy
ibmc-block-gold: Delete
ibmc-block-retain-gold: Retain
Custom
- Name
ibmc-block-custom
ibmc-block-retain-custom
- Type
- Performance storage
- File system
ext4
- IOPS and size
- Size range in gigabytes / IOPS range in multiples of 100
- 20-39 Gi / 100-1000 IOPS
- 40-79 Gi / 100-2000 IOPS
- 80-99 Gi / 100-4000 IOPS
- 100-499 Gi / 100-6000 IOPS
- 500-999 Gi / 100-10000 IOPS
- 1000-1999 Gi / 100-20000 IOPS
- 2000-2999 Gi / 200-40000 IOPS
- 3000-3999 Gi / 200-48000 IOPS
- 4000-7999 Gi / 300-48000 IOPS
- 8000-9999 Gi / 500-48000 IOPS
- 10000-12000 Gi / 1000-48000 IOPS
- Hard disk
- The IOPS to gigabyte ratio determines the type of hard disk that is provisioned. To determine your IOPS to gigabyte ratio, you divide the IOPS by the size of your storage.
- Example: You chose 500Gi of storage with 100 IOPS. Your ratio is 0.2 (100 IOPS/500Gi).
- Overview of hard disk types per ratio:
- Less than or equal to 0.3: SATA
- Greater than 0.3: SSD
- Reclaim policy
ibmc-block-custom: Delete
ibmc-block-retain-custom: Retain
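To review which of these storage classes are available in your cluster, you can list them with the oc CLI.
oc get storageclass | grep block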
Sample customized storage classes
You can create a customized storage class and use the storage class in your PVC.
Red Hat OpenShift on IBM Cloud provides pre-defined storage classes to provision block storage with a particular tier and configuration. Sometimes, you might want to provision storage with a different configuration that is not covered in the pre-defined storage classes. You can use the examples in this topic to find sample customized storage classes.
To create your customized storage class, see Customizing a storage class. Then, use your customized storage class in your PVC.
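For example, the following PVC requests storage from the customized ibmc-block-silver-mycustom-storageclass class that is defined later in this topic. The PVC name and size are illustrative, and the storageClassName must match the class that you created.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-storage-pvc-custom
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 45Gi
  storageClassName: ibmc-block-silver-mycustom-storageclass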
Creating topology-aware storage
To use block storage in a multizone cluster, your pod must be scheduled in the same zone as your block storage instance so that you can read and write to the volume. Before topology-aware volume scheduling was introduced by Kubernetes, the dynamic provisioning of your storage automatically created the block storage instance when a PVC was created. Then, when you created your pod, the Kubernetes scheduler tried to deploy the pod to the same data center as your block storage instance.
Creating the block storage instance without knowing the constraints of the pod can lead to unwanted results. For example, your pod might not be able to be scheduled to the same worker node as your storage because the worker node has insufficient resources or the worker node is tainted and does not allow the pod to be scheduled. With topology-aware volume scheduling, the block storage instance is delayed until the first pod that uses the storage is created.
To use topology-aware volume scheduling, make sure that you installed the IBM Cloud Block Storage plug-in version 1.2.0 or later.
The following examples show how to create storage classes that delay the creation of the block storage instance until the first pod that uses this storage is ready to be scheduled. To delay the creation, you must include the volumeBindingMode: WaitForFirstConsumer
option. If you don't include this option, the volumeBindingMode
is automatically set to Immediate
and the block storage instance is created when you create the PVC.
Example for Endurance block storage.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibmc-block-bronze-delayed
parameters:
  billingType: hourly
  classVersion: "2"
  fsType: ext4
  iopsPerGB: "2"
  sizeRange: '[20-12000]Gi'
  type: Endurance
provisioner: ibm.io/ibmc-block
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
Example for Performance block storage.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibmc-block-performance-storageclass
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: ibm.io/ibmc-block
parameters:
  billingType: "hourly"
  classVersion: "2"
  sizeIOPSRange: |-
    "[20-39]Gi:[100-1000]"
    "[40-79]Gi:[100-2000]"
    "[80-99]Gi:[100-4000]"
    "[100-499]Gi:[100-6000]"
    "[500-999]Gi:[100-10000]"
    "[1000-1999]Gi:[100-20000]"
    "[2000-2999]Gi:[200-40000]"
    "[3000-3999]Gi:[200-48000]"
    "[4000-7999]Gi:[300-48000]"
    "[8000-9999]Gi:[500-48000]"
    "[10000-12000]Gi:[1000-48000]"
  type: "Performance"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
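For example, the following PVC uses the ibmc-block-bronze-delayed class from the first example. Because of volumeBindingMode: WaitForFirstConsumer, the PVC stays in the Pending status, and no block storage instance is created, until a pod that mounts the PVC is scheduled. The PVC name and size are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-storage-pvc-delayed
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: ibmc-block-bronze-delayed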
Specifying the zone and region
If you want to create your block storage in a specific zone, you can specify the zone and region in a customized storage class.
Use the customized storage class if you use the IBM Cloud Block Storage plug-in version 1.0.0 or if you want to statically provision block storage in a specific zone. In all other cases, specify the zone directly in your PVC.
The following .yaml file customizes a storage class that is based on the ibmc-block-silver non-retaining storage class: the type is "Endurance", the iopsPerGB is 4, the sizeRange is "[20-12000]Gi", and the reclaimPolicy is set to "Delete". The zone is specified as dal12. To use a different storage class as your base, see the storage class reference.
Create the storage class in the same region and zone as your cluster and worker nodes. To get the region of your cluster, run ibmcloud oc cluster get --cluster <cluster_name_or_ID> and look for the region prefix in the Master URL, such as eu-de in https://c2.eu-de.containers.cloud.ibm.com:11111. To get the zone of your worker node, run ibmcloud oc worker ls --cluster <cluster_name_or_ID>.
Example for Endurance block storage.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibmc-block-silver-mycustom-storageclass
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: ibm.io/ibmc-block
parameters:
  zone: "dal12"
  region: "us-south"
  type: "Endurance"
  iopsPerGB: "4"
  sizeRange: "[20-12000]Gi"
reclaimPolicy: "Delete"
Example for Performance block storage.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibmc-block-performance-storageclass
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: ibm.io/ibmc-block
parameters:
  zone: "dal12"
  region: "us-south"
  type: "Performance"
  sizeIOPSRange: |-
    "[20-39]Gi:[100-1000]"
    "[40-79]Gi:[100-2000]"
    "[80-99]Gi:[100-4000]"
    "[100-499]Gi:[100-6000]"
    "[500-999]Gi:[100-10000]"
    "[1000-1999]Gi:[100-20000]"
    "[2000-2999]Gi:[200-40000]"
    "[3000-3999]Gi:[200-48000]"
    "[4000-7999]Gi:[300-48000]"
    "[8000-9999]Gi:[500-48000]"
    "[10000-12000]Gi:[1000-48000]"
reclaimPolicy: "Delete"
Mounting block storage with an XFS file system
The following examples create storage classes that provision block storage with an XFS file system.
Example for Endurance block storage.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibmc-block-custom-xfs
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
provisioner: ibm.io/ibmc-block
parameters:
  type: "Endurance"
  iopsPerGB: "4"
  sizeRange: "[20-12000]Gi"
  fsType: "xfs"
reclaimPolicy: "Delete"
Example for Performance block storage.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibmc-block-custom-xfs
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
provisioner: ibm.io/ibmc-block
parameters:
  classVersion: "2"
  type: "Performance"
  sizeIOPSRange: |-
    [20-39]Gi:[100-1000]
    [40-79]Gi:[100-2000]
    [80-99]Gi:[100-4000]
    [100-499]Gi:[100-6000]
    [500-999]Gi:[100-10000]
    [1000-1999]Gi:[100-20000]
    [2000-2999]Gi:[200-40000]
    [3000-3999]Gi:[200-48000]
    [4000-7999]Gi:[300-48000]
    [8000-9999]Gi:[500-48000]
    [10000-12000]Gi:[1000-48000]
  fsType: "xfs"
reclaimPolicy: "Delete"
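After a pod mounts a volume that was provisioned with one of these classes, you can confirm the file system from inside the pod. The pod name and mount path are illustrative; the Type column of the df output shows xfs for the mounted volume.
oc exec <pod_name> -- df -hT /mount-path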
Removing persistent storage from a cluster
When you set up persistent storage in your cluster, you have three main components: the Kubernetes persistent volume claim (PVC) that requests storage, the Kubernetes persistent volume (PV) that is mounted to a pod and described in the PVC, and the IBM Cloud infrastructure instance, such as classic file or block storage. Depending on how you created your storage, you might need to delete all three components separately.
Understanding your storage removal options
Removing persistent storage from your IBM Cloud account varies depending on how you provisioned the storage and what components you already removed.
- Is my persistent storage deleted when I delete my cluster?
- During cluster deletion, you have the option to remove your persistent storage. However, depending on how your storage was provisioned, the removal of your storage might not include all storage components. If you dynamically provisioned
storage with a storage class that sets
reclaimPolicy: Delete
, your PVC, PV, and the storage instance are automatically deleted when you delete the cluster. For storage that was statically provisioned or storage that you provisioned with a storage class that sets reclaimPolicy: Retain, the PVC and the PV are removed when you delete the cluster, but your storage instance and your data remain. You are still charged for your storage instance. Also, if you deleted your cluster in an unhealthy state, the storage might still exist even if you chose to remove it.
- How do I delete the storage when I want to keep my cluster?
- When you dynamically provisioned the storage with a storage class that sets
reclaimPolicy: Delete
, you can remove the PVC to start the deletion process of your persistent storage. Your PVC, PV, and storage instance are automatically removed. For storage that was statically provisioned or storage that you provisioned with a storage class that sets reclaimPolicy: Retain, you must manually remove the PVC, PV, and the storage instance to avoid further charges.
- How does the billing stop after I delete my storage?
- Depending on what storage components you delete and when, the billing cycle might not stop immediately. If you delete the PVC and PV, but not the storage instance in your IBM Cloud account, that instance still exists and you are charged for it.
If you delete the PVC, PV, and the storage instance, the billing cycle stops depending on the billingType
that you chose when you provisioned your storage and how you chose to delete the storage.
-
When you manually cancel the persistent storage instance from the IBM Cloud console or the CLI, billing stops as follows:
- Hourly storage: Billing stops immediately. After your storage is canceled, you might still see your storage instance in the console for up to 72 hours.
- Monthly storage: You can choose between immediate cancellation or cancellation on the anniversary date. In both cases, you are billed until the end of the current billing cycle, and billing stops for the next billing cycle. After your storage is canceled, you might still see your storage instance in the console or the CLI for up to 72 hours.
- Immediate cancellation: Choose this option to immediately remove your storage. Neither you nor your users can use the storage anymore or recover the data.
- Anniversary date: Choose this option to cancel your storage on the next anniversary date. Your storage instances remain active until the next anniversary date and you can continue to use them until this date, such as to give your team time to make backups of your data.
-
When you dynamically provisioned the storage with a storage class that sets
reclaimPolicy: Delete
and you choose to remove the PVC, the PV and the storage instance are immediately removed. For hourly billed storage, billing stops immediately. For monthly billed storage, you are still charged for the remainder of the month. After your storage is removed and billing stops, you might still see your storage instance in the console or the CLI for up to 72 hours.
- What do I need to be aware of before I delete persistent storage?
- When you clean up persistent storage, you delete all the data that is stored in it. If you need a copy of the data, make a backup.
- I deleted my storage instance. Why can I still see my instance?
- After you remove persistent storage, it can take up to 72 hours for the removal to be fully processed and for the storage to disappear from your IBM Cloud console or CLI.
Cleaning up persistent storage
Remove the PVC, PV, and the storage instance from your IBM Cloud account to avoid further charges for your persistent storage.
Before you begin:
- Make sure that you backed up any data that you want to keep.
- Access your Red Hat OpenShift cluster.
To clean up persistent data:
-
List the PVCs in your cluster and note the NAME of the PVC, the STORAGECLASS, and the name of the PV that is bound to the PVC and shown as VOLUME.
oc get pvc
Example output
NAME     STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
claim1   Bound    pvc-06886b77-102b-11e8-968a-f6612bb731fb   20Gi       RWO           class          78d
claim2   Bound    pvc-457a2b96-fafc-11e7-8ff9-b6c8f770356c   4Gi        RWX           class          105d
claim3   Bound    pvc-1efef0ba-0c48-11e8-968a-f6612bb731fb   24Gi       RWX           class          83d
-
Review the ReclaimPolicy and billingType for the storage class.
oc describe storageclass <storageclass_name>
If the reclaim policy says Delete, your PV and the physical storage are removed when you remove the PVC. If the reclaim policy says Retain, or if you provisioned your storage without a storage class, then your PV and physical storage are not removed when you remove the PVC. You must remove the PVC, PV, and the physical storage separately.
If your storage is charged monthly, you still get charged for the entire month, even if you remove the storage before the end of the billing cycle.
-
Remove any pods that mount the PVC. List the pods that mount the PVC. If no pod is returned in your CLI output, you don't have a pod that uses the PVC.
oc get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.volumes[*]}{.persistentVolumeClaim.claimName}{" "}{end}{end}' | grep "<pvc_name>"
Example output
depl-12345-prz7b: claim1
-
Remove the pod that uses the PVC. If the pod is part of a deployment, remove the deployment.
oc delete pod <pod_name>
-
Verify that the pod is removed.
oc get pods
-
Remove the PVC.
oc delete pvc <pvc_name>
-
Review the status of your PV. Use the name of the PV that you retrieved earlier as VOLUME. When you remove the PVC, the PV that is bound to the PVC is released. Depending on how you provisioned your storage, your PV goes into a Deleting state if the PV is deleted automatically, or into a Released state if you must manually delete the PV. Note: For PVs that are automatically deleted, the status might briefly say Released before the PV is deleted. Rerun the command after a few minutes to see whether the PV is removed.
oc get pv <pv_name>
-
If your PV is not deleted, manually remove the PV.
oc delete pv <pv_name>
-
Verify that the PV is removed.
oc get pv
-
List the physical storage instance that your PV pointed to and note the id of the physical storage instance.
ibmcloud sl block volume-list --columns id --columns notes | grep <pv_name>
Example output
12345678 {"plugin":"ibmcloud-block-storage-plugin-689df949d6-4n9qg","region":"us-south","cluster":"aa1a11a1a11b2b2bb22b22222c3c3333","type":"Endurance","ns":"default","pvc":"block-storage-pvc","pv":"pvc-d979977d-d79d-77d9-9d7d-d7d97ddd99d7","storageclass":"ibmc-block-silver","reclaim":"Delete"}
Understanding the Notes field information:
"plugin":"ibm-file-plugin-5b55b7b77b-55bb7"
- The storage plug-in that the cluster uses.
"region":"us-south"
- The region that your cluster is in.
"cluster":"aa1a11a1a11b2b2bb22b22222c3c3333"
- The cluster ID that is associated with the storage instance.
"type":"Endurance"
- The type of storage, either Endurance or Performance.
"ns":"default"
- The namespace that the storage instance is deployed to.
"pvc":"block-storage-pvc"
- The name of the PVC that is associated with the storage instance.
"pv":"pvc-d979977d-d79d-77d9-9d7d-d7d97ddd99d7"
- The PV that is associated with the storage instance.
"storageclass":"ibmc-file-gold"
- The type of storage class: bronze, silver, gold, or custom.
-
Remove the physical storage instance.
ibmcloud sl block volume-cancel <classic_block_id>
-
Verify that the physical storage instance is removed.
The deletion process might take up to 72 hours to complete.
ibmcloud sl block volume-list
Setting up monitoring for limited connectivity PVs
When you create a pod and PVC that use Block Storage for Classic, 2 target ports are assigned to the underlying persistent volume (PV) where the storage is mounted. Multiple target ports allow for failover in case one port goes down.
In previous versions of the Block Storage for Classic driver, the inability to find 2 target ports when mounting a PV during rollout caused deployment failure.
However, sometimes, such as during IaaS maintenance windows, you might want your pods to deploy successfully with only 1 target port available on the persistent volume.
Beginning in version 2.4.12 of the Block Storage for Classic driver, pods deploy successfully even if only 1 target port can be assigned to the PV. In addition to this behavior change, PVs now include a label that indicates the network availability: a label value of healthy means 2 target ports were assigned, and limited means only 1 target port could be assigned during mounting.
To monitor for instances where pod connectivity to Block Storage for Classic is limited, you can set up a custom alert that looks for the limited label. Then, configure the alert threshold to >0.
-
From the IBM Cloud Monitoring dashboard, select New alert > Metric.
-
Select Prom query and enter kube_persistentvolume_labels{label_ibm_io_pv_connectivity_status='limited'}.
-
Set the threshold to >0 and set the severity that you want to use for this alert.
-
Select your notification channel and save the alert.
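You can also check for affected PVs directly with the oc CLI. The label key shown here is inferred from the Prometheus metric in the query above, so confirm the exact key on your PVs with oc get pv --show-labels.
oc get pv -l ibm.io/pv-connectivity-status=limited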