Setting up Block Storage for VPC

Block Storage for VPC provides hypervisor-mounted, high-performance data storage for your virtual server instances that you can provision within a VPC.

You can choose between predefined storage tiers with GB sizes and IOPS that meet the requirements of your workloads. To find out if Block Storage for VPC is the right storage option for you, see Choosing a storage solution. For pricing information, see Pricing for Block Storage for VPC.

The Block Storage for VPC add-on is enabled by default on VPC clusters.

Quick start for IBM Cloud Block Storage for VPC

In this quick start guide, you create a 10 Gi, 5 IOPS tier Block Storage for VPC volume in your cluster by creating a PVC to dynamically provision the volume. Then, you create an app deployment that mounts your PVC.

Your Block Storage for VPC volumes can be mounted by multiple pods as long as those pods are scheduled on the same node.

  1. Create a file for your PVC and name it pvc.yaml.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-pvc
    spec:
      storageClassName: ibmc-vpc-block-5iops-tier
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
    
  2. Create the PVC in your cluster.

    kubectl apply -f pvc.yaml
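
    You can check that your claim reaches the Bound state before you continue. This quick check uses the PVC name from the example.

    kubectl get pvc my-pvc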
    
  3. After your PVC is bound, create an app deployment that uses your PVC. Create a file for your deployment and name it deployment.yaml.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
        name: my-deployment
        labels:
          app: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - image: nginx # Your containerized app image.
            name: my-container
            volumeMounts:
            - name: my-volume
              mountPath: /mount-path
          volumes:
          - name: my-volume
            persistentVolumeClaim:
              claimName: my-pvc
    
  4. Create the deployment in your cluster.

    kubectl apply -f deployment.yaml
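
    To confirm that the volume is attached and mounted, you can check the pod that the deployment creates and review the deployment details. This check uses the example labels from the deployment above.

    kubectl get pods -l app=my-app
    kubectl describe deployment my-deployment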
    

For more information, see the following links.

Adding Block Storage for VPC to your apps

Choose your Block Storage for VPC profile and create a persistent volume claim to dynamically provision Block Storage for VPC for your cluster. Dynamic provisioning automatically creates the matching persistent volume and orders the physical storage device in your IBM Cloud account.

  1. Decide on the Block Storage for VPC profile that best meets the capacity and performance requirements that you want.

  2. Select the corresponding storage class for your Block Storage for VPC profile.

    All IBM pre-defined storage classes set up Block Storage for VPC with an ext4 file system by default. If you want to use a different file system, such as xfs or ext3, create a customized storage class.

    • 10 IOPS/GB: ibmc-vpc-block-10iops-tier or ibmc-vpc-block-retain-10iops-tier
    • 5 IOPS/GB: ibmc-vpc-block-5iops-tier or ibmc-vpc-block-retain-5iops-tier
    • 3 IOPS/GB: ibmc-vpc-block-general-purpose or ibmc-vpc-block-retain-general-purpose
    • Custom: ibmc-vpc-block-custom or ibmc-vpc-block-retain-custom
  3. Decide on your Block Storage for VPC configuration.

    1. Choose a size for your storage. Make sure that the size is supported by the Block Storage for VPC profile that you chose.
    2. Choose if you want to keep your data after the cluster or the persistent volume claim (PVC) is deleted.
      • If you want to keep your data, then choose a retain storage class. When you delete the PVC, only the PVC is deleted. The persistent volume (PV), the physical storage device in your IBM Cloud account, and your data still exist. To reclaim the storage and use it in your cluster again, you must remove the PV and follow the steps for using existing Block Storage for VPC.
      • If you want the PV, the data, and your physical Block Storage for VPC device to be deleted when you delete the PVC, choose a storage class without retain.
  4. Create a configuration file to define your persistent volume claim and save the configuration as a YAML file.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <pvc_name> # Enter a name for your PVC.
    spec:
      accessModes:
      - <access-mode> # ReadWriteOnce or ReadWriteOncePod
      resources:
        requests:
          storage: 10Gi # Enter the size. Make sure that the size is supported in the profile that you chose.
      storageClassName: <storage_class> # Enter the storage class name that you selected earlier.
    
  5. Create the PVC in your cluster.

    kubectl apply -f pvc.yaml
    
  6. Verify that your PVC is created and bound to the PV. This process can take a few minutes.

    kubectl describe pvc <pvc_name>
    

    Example output

    Name:          mypvc
    Namespace:     default
    StorageClass:  ibmc-vpc-block-5iops-tier
    Status:        Bound
    Volume:        
    Labels:        <none>
    Annotations:   kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"csi-block-pvc-good","namespace":"default"},"spec":{...
                volume.beta.kubernetes.io/storage-provisioner: vpc.block.csi.ibm.io
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity: 10Gi   
    Access Modes:  
    VolumeMode:    Filesystem
    Events:
        Type       Reason                Age               From                         Message
        ----       ------                ----              ----                         -------
        Normal     ExternalProvisioning  9s (x3 over 18s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "vpc.block.csi.ibm.io" or manually created by system administrator
    Mounted By:  <none>
    
  7. Create a deployment configuration file for your app and mount the PVC to your app.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: <deployment_name>
      labels:
        app: <deployment_label>
    spec:
      selector:
        matchLabels:
          app: <app_name>
      template:
        metadata:
          labels:
            app: <app_name>
        spec:
          containers:
          - image: <image_name>
            name: <container_name>
            volumeMounts:
            - name: <volume_name>
              mountPath: /<file_path>
          volumes:
          - name: <volume_name>
            persistentVolumeClaim:
              claimName: <pvc_name>
    
    labels.app
    In the metadata section, enter a label for the deployment.
    matchLabels.app and labels.app
    In the spec selector and template metadata sections, enter a label for your app.
    image
    Specify the name of the container image that you want to use. To list available images in your IBM Cloud Container Registry account, run ibmcloud cr image-list.
    name
    Specify the name of the container that you want to deploy in your pod.
    mountPath
    In the container volume mounts section, specify the absolute path of the directory to where the PVC is mounted inside the container.
    name
    In the container volume mounts section, enter the name of the volume to mount to your pod. You can enter any name that you want.
    name
    In the volumes section, enter the name of the volume to mount to your pod. Typically this name is the same as volumeMounts.name.
    claimName
    In the volumes persistent volume claim section, enter the name of the PVC that you created earlier.
  8. Create the deployment in your cluster.

    kubectl apply -f deployment.yaml
    
  9. Verify that the PVC is successfully mounted to your app. It might take a few minutes for your pods to get into a Running state.

    During the deployment of your app, you might see intermittent Unable to mount volumes errors in the Events section of your CLI output. The Block Storage for VPC add-on automatically retries mounting the storage to your apps. Wait a few more minutes for the storage to mount to your app.

    kubectl describe deployment <deployment_name>
    

    Example output

    ...
    Volumes:
    myvol:
        Type:    PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:    mypvc
        ReadOnly:    false
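
    You can also check the pods that your deployment created and review the Events section for mount information. The label selector in the following commands assumes the app label that you set in your deployment.

    kubectl get pods -l app=<app_name>
    kubectl describe pod <pod_name>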
    

Using an existing Block Storage for VPC instance

If you have an existing physical Block Storage for VPC device that you want to use in your cluster, you can manually create the PV and PVC to statically provision the storage.

You can attach a volume to one worker node only. Make sure that the volume is in the same zone as the worker node for the attachment to succeed.

  1. Determine the volume that you want to attach to a worker node in your VPC cluster. Note the volume ID.

    ibmcloud is volumes
    
  2. List the details of your volume. Note the Size, Zone, and IOPS. These values are used to create your PV.

    ibmcloud is volume <volume_id>
    
  3. Retrieve a list of worker nodes in your VPC cluster. Note the Zone of the worker node that is in the same zone as your storage volume.

    ibmcloud ks worker ls -c <cluster_name>
    
  4. Optional: If you provisioned your physical Block Storage for VPC instance by using a retain storage class, the PV and the physical storage are not removed when you remove the PVC. To use your physical Block Storage for VPC device in your cluster, you must remove the existing PV first.

    1. List the PVs in your cluster and look for the PV that belongs to your Block Storage for VPC device. The PV is in a released state.

      kubectl get pv
      
    2. Remove the PV.

      kubectl delete pv <pv_name>
      
  5. Create a configuration file for your PV. Include the ID, Size, Zone, and IOPS that you retrieved earlier.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: <pv_name> # Example: my-persistent-volume
    spec:
      accessModes:
      - ReadWriteOnce
      capacity:
        storage: <vpc_block_storage_size> # Example: 20Gi
      csi:
        driver: vpc.block.csi.ibm.io
        fsType: ext4
        volumeAttributes:
          iops: "<vpc_block_storage_iops>" # Example: "3000"
          volumeId: <vpc_block_storage_ID> # Example: a1a11a1a-a111-1111-1a11-1111a11a1a11
          zone: "<vpc_block_zone>" # Example: "eu-de-3"
          region: "<vpc_block_region>"
        volumeHandle: <vpc_block_storage_ID>
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: failure-domain.beta.kubernetes.io/zone
              operator: In
              values:
              - <worker_node_zone> # Example: eu-de-3
            - key: failure-domain.beta.kubernetes.io/region
              operator: In
              values:
              - <worker_node_region> # Example: eu-de
            - key: kubernetes.io/hostname
              operator: In
              values:
              - <worker_node_primary_IP>
      persistentVolumeReclaimPolicy: Retain
      storageClassName: ""
      volumeMode: Filesystem
    
    name
    In the metadata section, enter a name for your PV.
    storage
    In the spec capacity section, enter the size of your Block Storage for VPC volume in gibibytes (Gi) that you retrieved earlier. For example, if the size of your volume is 100 GB, enter 100Gi.
    iops
    In the spec CSI volume attributes section, enter the Max IOPS of the Block Storage for VPC volume that you retrieved earlier.
    zone
    In the spec CSI volume attributes section, enter the VPC zone that matches the location that you retrieved earlier. For example, if your location is Washington DC-1, then use us-east-1 as your zone. To list available zones, run ibmcloud is zones. For an overview of available VPC zones and locations, see Creating a VPC in a different region. You must also specify the region parameter when you specify a zone.
    region
    The region of the worker node where you want to attach storage.
    worker_node_primary_IP
    The primary IP of the worker node where you want to attach storage. You can find the primary IP of your worker node by running ibmcloud ks worker ls.
    volumeId and spec.csi.volumeHandle
    In the spec CSI volume attributes section, enter the ID of the Block Storage for VPC volume that you retrieved earlier.
    storageClassName
    For the spec storage class name, enter an empty string.
    matchExpressions
    In the spec node affinity section, enter the node selector terms to match the zone. For the key, enter failure-domain.beta.kubernetes.io/zone. For the value, enter the zone of your worker node where you want to attach storage.
    matchExpressions
    In the spec node affinity section, enter the node selector terms to match the region. For the key, enter failure-domain.beta.kubernetes.io/region. For the value, enter the region of the worker node where you want to attach storage.
  6. Create the PV in your cluster.

    kubectl apply -f pv.yaml
    
  7. Verify that the PV is created in your cluster.

    kubectl get pv
    
  8. Create another configuration file for your PVC. For the PVC to match the PV that you created earlier, use the same values for the storage size and access mode. In the storage class field, enter an empty string to match your PV. If any of these fields don't match the PV, a new PV and a new Block Storage for VPC instance are created automatically through dynamic provisioning.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <pvc_name>
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: <vpc_block_storage_size>
      storageClassName: ""
    
  9. Create your PVC.

    kubectl apply -f pvc.yaml
    
  10. Verify that your PVC is created and bound to the PV that you created earlier. This process can take a few minutes.

    kubectl describe pvc <pvc_name>
    
  11. Create a deployment or a pod that uses your PVC.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: <deployment_name>
      labels:
        app: <deployment_label>
    spec:
      selector:
        matchLabels:
          app: <app_name>
      template:
        metadata:
          labels:
            app: <app_name>
        spec:
          containers:
          - image: <image_name>
            name: <container_name>
            volumeMounts:
            - name: <volume_name>
              mountPath: /<file_path>
          volumes:
          - name: <volume_name>
            persistentVolumeClaim:
              claimName: <pvc_name>
          nodeSelector:
            kubernetes.io/hostname: "<worker_node_primary_IP>"
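
    Because the volume attaches to a single worker node, you can verify that the pod is scheduled onto the worker node that you selected in the nodeSelector and that the PVC is bound. These are standard checks with kubectl.

    kubectl get pods -o wide
    kubectl get pvc <pvc_name>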
    

Updating the Block Storage for VPC add-on

You can update the Block Storage for VPC add-on by using the addon update command.

Before you update the add-on, review the change log.

  1. Check to see if an update is available. If an update is available, the plug-in version is flagged with an asterisk and the latest version is shown. Note the latest version as this value is used later.

    ibmcloud ks cluster addons --cluster <cluster_name_or_ID>
    

    Example output

    Name                   Version                 Health State   Health Status   
    vpc-block-csi-driver   1.0.0* (2.0.0 latest)   normal         Addon Ready
    
  2. Update the add-on. Note that the update commands are different depending on the version that you have installed.

    Version 5.0 and later: Run the addon update command.

    ibmcloud ks cluster addon update vpc-block-csi-driver --cluster CLUSTER [-f] [-q] [--version VERSION] [-y]
    

    Versions before 5.0: Disable and then enable the add-on.

    ibmcloud ks cluster addon disable vpc-block-csi-driver --cluster CLUSTER [-f] [-q]
    
    ibmcloud ks cluster addon enable vpc-block-csi-driver --cluster CLUSTER [-f] [-q] [--version VERSION] [-y]
    
  3. Verify that the add-on is in the Addon Ready state. The add-on might take a few minutes to become ready.

    ibmcloud ks cluster addon ls --cluster <cluster_name_or_ID>
    

    Example output

    Name                   Version   Health State   Health Status   
    vpc-block-csi-driver   2.0.0     normal         Addon Ready
    

    If you use a default storage class other than the ibmc-vpc-block-10iops-tier storage class, you must change the default storage class settings in the addon-vpc-block-csi-driver-configmap ConfigMap. For more information, see Changing the default storage class.

  4. If you created custom storage classes based on the default Block Storage for VPC storage classes, you must recreate those storage classes to update the parameters. For more information, see Recreating custom storage classes after updating to version 4.2.

Recreating custom storage classes after updating to version 4.2

With version 4.2, the default parameters for storage classes have changed. The sizeRange and iopsRange parameters are no longer used. If you created any custom storage classes that use these parameters, you must edit them to remove the parameters. To change the parameters in a custom storage class, you must delete and re-create the storage class. Previously, sizeRange and iopsRange were provided in each storage class as reference information. With version 4.2, these references are removed. For information about block storage profiles, sizes, and IOPS, see the block storage profiles reference.

  1. To find the details of your custom storage classes, run the following command.

    kubectl describe sc STORAGECLASS
    
  2. If the storage class uses the sizeRange or iopsRange, get the storage class YAML and save it to a file.

    kubectl get sc STORAGECLASS -o yaml
    
  3. In the file that you saved from the output of the previous command, remove the sizeRange or iopsRange parameters. For an example of what the re-created storage class might look like, see the sketch at the end of these steps.

  4. Delete the storage class from your cluster.

    kubectl delete sc STORAGECLASS
    
  5. Recreate the storage class in your cluster by using the file you created earlier.

    kubectl apply -f custom-storage-class.yaml
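
    The following sketch shows what a re-created custom storage class might look like after the deprecated parameters are removed. The class name and parameter values are placeholders for illustration, not values from your cluster.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <custom_storage_class_name>
    provisioner: vpc.block.csi.ibm.io
    parameters:
      profile: "5iops-tier"
      csi.storage.k8s.io/fstype: "ext4"
      billingType: "hourly"
      resourceGroup: ""
      zone: ""
      tags: ""
      generation: "gc"
      classVersion: "1"
      # The sizeRange and iopsRange parameters are no longer included.
    reclaimPolicy: "Delete"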
    

Setting up encryption for Block Storage for VPC

Use a key management service (KMS) provider, such as IBM® Key Protect, to create a private root key that you use in your Block Storage for VPC instance to encrypt data as it is written to the storage. After you create the private root key, create a custom storage class or a Kubernetes secret with your root key and then use this storage class or secret to provision your Block Storage for VPC instance.

Enabling encryption for Block Storage for VPC impacts performance by approximately 20%. However, the exact impact depends on your worker node and storage volume configuration. Account for this performance impact when you enable encryption.

  1. Create a Key Protect service instance.

  2. Create a root key. By default, the root key is created without an expiration date.

  3. Retrieve the service CRN for your root key.

    1. From the Key Protect details page, select the Keys tab to find the list of your keys.

    2. Find the root key that you created and from the actions menu, click View CRN.

    3. Note the CRN of your root key.

  4. Authorize Block Storage for VPC to access IBM® Key Protect.

    1. From the IBM Cloud menu, select Manage > Access (IAM).

    2. From the menu, select Authorizations.

    3. Click Create.

    4. Select Cloud Block Storage as your source service.

    5. Select Key Protect as your target service.

    6. Select the Reader service access role and click Authorize.

  5. Decide if you want to store the Key Protect root key CRN in a customized storage class or in a Kubernetes secret. Then, follow the steps to create a customized storage class or a Kubernetes secret.

    Example customized storage class.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <storage_class_name> # Enter a name for your storage class.
    provisioner: vpc.block.csi.ibm.io
    parameters:
      profile: "5iops-tier"
      csi.storage.k8s.io/fstype: "ext4"
      billingType: "hourly"
      encrypted: "true"
      encryptionKey: "<encryption_key>"
      resourceGroup: ""
      zone: ""
      tags: ""
      generation: "gc"
      classVersion: "1"
    reclaimPolicy: "Delete"
    
    encrypted
    In the parameters, enter true to create a storage class that sets up encryption for your Block Storage for VPC volumes. If you set this option to true, you must provide the root key CRN of your Key Protect service instance that you want to use in parameters.encryptionKey.
    encryptionKey
    In the parameters, enter the root key CRN that you retrieved earlier.

    Example Kubernetes secret.

    apiVersion: v1
    kind: Secret
    type: vpc.block.csi.ibm.io
    metadata:
      name: <secret_name>
      namespace: <namespace_name>
    stringData:
      encrypted: "<true_or_false>"
    data:
      encryptionKey: <encryption_key>
    
    name
    Enter a name for your secret.
    namespace
    Enter the namespace where you want to create your secret.
    encrypted
    In the parameters, enter true to set up encryption for your Block Storage for VPC volumes.
    encryptionKey
    In the parameters, enter the root key CRN of your Key Protect service instance that you want to use to encrypt your Block Storage for VPC volume. To use your root key CRN in a secret, you must first convert it to base64 by running echo -n "<root_key_CRN>" | base64.
  6. Follow steps 4-9 in Adding Block Storage for VPC to your apps to create a PVC with your customized storage class to provision Block Storage for VPC that is configured for encryption with your Key Protect root key. Then, mount this storage to an app pod.

    Your app might take a few minutes to mount the storage and get into a Running state.

  7. Verify that your data is encrypted. List your Block Storage for VPC volumes and note the ID of the instance that you created. The storage instance Name matches the name of the PV that was automatically created when you created the PVC.

    ibmcloud is vols
    

    Example output

    ID                                     Name                                       Status      Capacity   IOPS   Profile           Attachment type   Created                     Zone         Resource group
    a395b603-74bf-4703-8fcb-b68e0b4d6960   pvc-479d590f-ca72-4df2-a30a-0941fceeca42   available   10         3000   5iops-tier        data              2019-08-17T12:29:18-05:00   us-south-1   a8a12accd63b437bbd6d58fb6a462ca7
    
  8. Using the volume ID, list the details for your Block Storage for VPC instance to ensure that your Key Protect root key is stored in the storage instance. You can find the root key in the Encryption key field of your CLI output.

    ibmcloud is vol <volume_ID>
    

    Example output

    ID                                     a395b603-74bf-4703-8fcb-b68e0b4d6960   
    Name                                   pvc-479d590f-ca72-4df2-a30a-0941fceeca42   
    Status                                 available   
    Capacity                               10   
    IOPS                                   3000   
    Profile                                5iops-tier   
    Encryption key                         crn:v1:bluemix:public:kms:us-south:a/6ef045fd2b43266cfe8e6388dd2ec098:53369322-958b-421c-911a-c9ae8d5156d1:key:47a985d1-5f5e-4477-93fc-12ce9bae343f   
    Encryption                             user_managed   
    Resource group                         a8a12accd63b437bbd6d58fb6a462ca7
    Created                                2019-08-17T12:29:18-05:00
    Zone                                   us-south-1   
    Volume Attachment Instance Reference
    

Customizing the default storage settings

You can change some default PVC settings by using a customized storage class or a Kubernetes secret to create Block Storage for VPC with your customized settings.

What is the benefit of using a secret instead of specifying my parameters in a customized storage class?
As a cluster admin, create a customized storage class when you want all the PVCs that your cluster users create to be provisioned with a specific configuration and you don't want to enable your cluster users to override the default configuration.
However, when multiple configurations are required and you don't want to create a customized storage class for every possible PVC configuration, you can create one customized storage class with the default PVC settings and a reference to a generic Kubernetes secret. If your cluster users must override the default settings of your customized storage class, they can do so by creating a Kubernetes secret that holds their custom settings.

When you want to set up encryption for your Block Storage for VPC instance, you can also use a Kubernetes secret if you want to encode the Key Protect root key CRN to base64 instead of providing the key directly in the customized storage class.

Changing the default storage class

With version 4.2, the Block Storage for VPC add-on sets the default storage class to the ibmc-vpc-block-10iops-tier class. If you already have a different default storage class and your PVCs rely on it, you can end up with multiple default storage classes, which can cause PVC creation failures. To use a default storage class other than ibmc-vpc-block-10iops-tier, update the addon-vpc-block-csi-driver-configmap ConfigMap and set IsStorageClassDefault to false.

The default storage class for the Block Storage for VPC add-on is the ibmc-vpc-block-10iops-tier storage class.

  1. Edit the addon-vpc-block-csi-driver-configmap ConfigMap.

    kubectl edit cm addon-vpc-block-csi-driver-configmap -n kube-system
    
  2. Change the IsStorageClassDefault setting to false.

  3. Save and exit.

  4. Wait 15 minutes and verify the change by getting the details of the ibmc-vpc-block-10iops-tier storage class.

    kubectl get sc ibmc-vpc-block-10iops-tier -o yaml
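
    If you prefer a non-interactive change, you can patch the ConfigMap instead of opening an editor. This sketch assumes that the data key is named IsStorageClassDefault, as shown in the edit step.

    kubectl patch configmap addon-vpc-block-csi-driver-configmap \
      -n kube-system \
      --type merge \
      -p '{"data":{"IsStorageClassDefault":"false"}}'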
    

Creating a custom storage class

Create your own customized storage class with the preferred settings for your Block Storage for VPC instance.

You might create a custom storage class if you want to:

  • Set a custom IOPS value.
  • Set up Block Storage for VPC with a file system type other than ext4.
  • Set up encryption.

To create your own storage class:

  1. Review the Storage class reference to determine the profile that you want to use for your storage class. You can also review the custom profiles if you want to specify custom IOPS for your Block Storage for VPC.

    If you want to use a pre-installed storage class as a template, you can get the details of a storage class by using the kubectl get sc <storageclass> -o yaml command.

  2. Create a customized storage class configuration file.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <storage_class_name>
    provisioner: vpc.block.csi.ibm.io
    parameters:
      profile: "<profile>"
      csi.storage.k8s.io/fstype: "<file_system_type>"
      billingType: "hourly"
      encrypted: "<encrypted_true_false>"
      encryptionKey: "<encryption_key>"
      resourceGroup: ""
      zone: "<zone>"
      region: "<region>"
      tags: "<tags>"
      generation: "gc"
      classVersion: "1"
      iops: "<iops>" # Only specify this parameter if you are using a "custom" profile.
    allowVolumeExpansion: (true|false) # Select true or false. Only supported on version 3.0.1 and later
    volumeBindingMode: <volume_binding_mode>
      # csi.storage.k8s.io/provisioner-secret-name: # Uncomment and add secret parameters to enforce encryption. 
      # csi.storage.k8s.io/provisioner-secret-namespace: 
    reclaimPolicy: "<reclaim_policy>"
    
    name
    Enter a name for your storage class.
    profile
    Enter the profile that you selected in the previous step, or enter custom to use a custom IOPS value. To find supported storage sizes for a specific profile, see Tiered IOPS profile. Any PVC that uses this storage class must specify a size value that is within this range.
    csi.storage.k8s.io/fstype
    In the parameters, enter the file system for your Block Storage for VPC instance. Choose xfs, ext3, or ext4. If you want to modify the ownership or permissions of your volume, you must specify csi.storage.k8s.io/fstype in your custom storage class and your PVC must have ReadWriteOnce as the accessMode. The Block Storage for VPC driver uses the ReadWriteOnceWithFSType fsGroupPolicy. For more information, see the CSI driver documentation.
    encrypted
    In the parameters, enter true to create a storage class that sets up encryption for your Block Storage for VPC volume. If you set this option to true, you must provide the root key CRN of your Key Protect service instance that you want to use in parameters.encryptionKey. For more information about encrypting your data, see Setting up encryption for your Block Storage for VPC.
    encryptionKey
    If you entered true for parameters.encrypted, then enter the root key CRN of your Key Protect service instance that you want to use to encrypt your Block Storage for VPC volume. For more information about encrypting your data, see Setting up encryption for your Block Storage for VPC.
    zone
    In the parameters, enter the VPC zone where you want to create the Block Storage for VPC instance. Make sure that you use a zone that your worker nodes are connected to. To list VPC zones that your worker nodes use, run ibmcloud ks cluster get --cluster <cluster_name_or_ID> and look at the Worker Zones field in your CLI output. If you don't specify a zone, one of the worker node zones is automatically selected for your Block Storage for VPC instance.
    region
    The region of the worker node where you want to attach storage.
    tags
    In the parameters, enter a space-separated list of tags to apply to your Block Storage for VPC instance. Tags can help you find instances more easily or group your instances based on common characteristics, such as the app or the environment that it is used for.
    iops
    If you entered custom for the profile, enter a value for the IOPS that you want your Block Storage for VPC to use. Refer to the Block Storage for VPC custom IOPS profile table for a list of supported IOPS ranges by volume size.
    reclaimPolicy
    Enter the reclaim policy for your storage class. If you want to keep the PV, the physical storage device and your data when you remove the PVC, enter Retain. If you want to delete the PV, the physical storage device and your data when you remove the PVC, enter Delete.
    allowVolumeExpansion
    Enter the volume expansion policy for your storage class. If you want to allow volume expansion, enter true. If you don't want to allow volume expansion, enter false.
    volumeBindingMode
    Choose if you want to delay the creation of the Block Storage for VPC instance until the first pod that uses this storage is ready to be scheduled. To delay the creation, enter WaitForFirstConsumer. To create the instance when you create the PVC, enter Immediate.
  3. Create the customized storage class in your cluster.

    kubectl apply -f custom-storageclass.yaml
    
  4. Verify that your storage class is available in the cluster.

    kubectl get storageclasses
    

    Example output

    NAME                                    PROVISIONER            AGE
    ibmc-vpc-block-10iops-tier              vpc.block.csi.ibm.io   4d21h
    ibmc-vpc-block-5iops-tier               vpc.block.csi.ibm.io   4d21h
    ibmc-vpc-block-custom                   vpc.block.csi.ibm.io   4d21h
    ibmc-vpc-block-general-purpose          vpc.block.csi.ibm.io   4d21h
    ibmc-vpc-block-retain-10iops-tier       vpc.block.csi.ibm.io   4d21h
    ibmc-vpc-block-retain-5iops-tier        vpc.block.csi.ibm.io   4d21h
    ibmc-vpc-block-retain-custom            vpc.block.csi.ibm.io   4d21h
    ibmc-vpc-block-retain-general-purpose   vpc.block.csi.ibm.io   4d21h
    <custom-storageclass>                   vpc.block.csi.ibm.io   4m26s
    
  5. Follow the steps in Adding Block Storage for VPC to your apps to create a PVC with your customized storage class to provision Block Storage for VPC. Then, mount this storage to a sample app.

  6. Optional: Verify your Block Storage for VPC file system type.

Verifying your Block Storage for VPC file system

You can create a customized storage class to provision Block Storage for VPC with a different file system, such as xfs or ext3. By default, all Block Storage for VPC instances are provisioned with an ext4 file system.

  1. Follow the steps to create a customized storage class with the file system that you want to use.

  2. Follow steps 4-9 in Adding Block Storage for VPC to your apps to create a PVC with your customized storage class to provision Block Storage for VPC with a different file system. Then, mount this storage to an app pod.

    Your app might take a few minutes to mount the storage and get into a Running state.

  3. Verify that your storage is mounted with the correct file system. List the pods in your cluster and note the Name of the pod that you used to mount your storage.

    kubectl get pods
    
  4. Log in to your pod.

    kubectl exec -it <pod_name> -- bash
    
  5. List the mount paths inside your pod.

    mount | grep /dev/xvdg
    

    Example output for xfs.

    /dev/xvdg on /test type xfs (rw,relatime,attr2,inode64,noquota)
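
    You can also check the file system type directly with df while you are logged in to the pod, assuming the df binary in your container image supports the -T flag and you use the mount path from your app.

    df -Th /mount-path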
    
  6. Exit your pod.

    exit
    

Updating the VolumeAttachLimit

In versions 5.2 and later of the Block Storage for VPC add-on, you can edit the maximum number of volumes that can be attached to each node by editing the configmap. The default value is 12.

Your account must be approved to use this feature.

  1. Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.

  2. Edit the configmap. Replace VALUE with the volume attachment limit that you want to set.

    kubectl patch configmap/addon-vpc-block-csi-driver-configmap \
    -n kube-system \
    --type merge \
    -p '{"data":{"VolumeAttachmentLimit":"VALUE"}}'
    
  3. Wait for the ibm-vpc-block-csi-node pods in the kube-system namespace to restart. Verify that the pods have restarted.

    kubectl get pods -n kube-system -w | grep block-csi
    
  4. You can now attach volumes to your worker nodes by using dynamic provisioning or by manually creating the attachments. For more information, see Adding Block Storage for VPC to your apps or Using an existing Block Storage for VPC instance.
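
  5. Optional: Confirm the volume attachment limit that the CSI driver reports for a worker node by checking its CSINode object. This check assumes the driver name vpc.block.csi.ibm.io that the add-on uses.

    kubectl get csinode <worker_node_name> -o jsonpath='{.spec.drivers[?(@.name=="vpc.block.csi.ibm.io")].allocatable.count}'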

Storing your custom PVC settings in a Kubernetes secret

Specify your PVC settings in a Kubernetes secret and reference this secret in a customized storage class. Then, use the customized storage class to create a PVC with the custom parameters that you set in your secret.

What options do I have to use the Kubernetes secret?
As a cluster admin, you can choose if you want to allow each cluster user to override the default settings of a storage class, or if you want to create one secret that everyone in your cluster must use and that enforces base64 encoding for your Key Protect root key CRN.
Every user can customize the default settings
In this scenario, the cluster admin creates one customized storage class with the default PVC settings and a reference to a generic Kubernetes secret. Cluster users can override the default settings of the storage class by creating a Kubernetes secret with the PVC settings that they want. For the customized settings in the secret to be applied to your Block Storage for VPC instance, you must create a PVC with the same name as your Kubernetes secret.
Enforce base64 encoding for the Key Protect root key
In this scenario, you create one customized storage class with the default PVC settings and a reference to a static Kubernetes secret that overrides or enhances the default settings of the customized storage class. Your cluster users can't override the default settings by creating their own Kubernetes secret. Instead, cluster users must provision Block Storage for VPC with the configuration that you chose in your customized storage class and secret. The benefit of using this method over creating a customized storage class only is that you can enforce base64 encoding for the root key CRN of your Key Protect service instance when you want to encrypt the data in your Block Storage for VPC instance.
What do I need to be aware of before I start using the Kubernetes secret for my PVC settings?
Some PVC settings, such as the reclaimPolicy, fstype, or the volumeBindingMode can't be set in the Kubernetes secret and must be set in the storage class. As the cluster admin, if you want to enable your cluster users to override your default settings, you must ensure that you set up enough customized storage classes that reference a generic Kubernetes secret so that your users can provision Block Storage for VPC with different reclaimPolicy, fstype, and volumeBindingMode settings.

Enabling every user to customize the default PVC settings

  1. As the cluster admin, follow the steps to create a customized storage class. In the customized storage class YAML file, reference the Kubernetes secret in the parameters section as follows. Make sure to add these lines as-is and don't change the variable names.

    csi.storage.k8s.io/provisioner-secret-name: ${pvc.name}
    csi.storage.k8s.io/provisioner-secret-namespace: ${pvc.namespace}
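
    The following sketch shows where these lines fit in a customized storage class. The other parameter values are placeholders; see Creating a custom storage class for the full set of options.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <storage_class_name>
    provisioner: vpc.block.csi.ibm.io
    parameters:
      profile: "<profile>"
      csi.storage.k8s.io/fstype: "ext4"
      billingType: "hourly"
      csi.storage.k8s.io/provisioner-secret-name: ${pvc.name}
      csi.storage.k8s.io/provisioner-secret-namespace: ${pvc.namespace}
    reclaimPolicy: "Delete"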
    
  2. As the cluster user, create a Kubernetes secret that customizes the default settings of the storage class.

    apiVersion: v1
    kind: Secret
    type: vpc.block.csi.ibm.io
    metadata:
      name: <secret_name>
      namespace: <namespace_name>
    stringData:
      iops: "<IOPS_value>"
      zone: "<zone>"
      tags: "<tags>"
      encrypted: "<true_or_false>"
      resourceGroup: "<resource_group>"
    data:
      encryptionKey: <encryption_key>
    
    name
    Enter a name for your Kubernetes secret.
    namespace
    Enter the namespace where you want to create your secret. To reference the secret in your PVC, the PVC must be created in the same namespace.
    iops
    In the string data section, enter the range of IOPS that you want to allow for your Block Storage for VPC instance. The range that you enter must match the Block Storage for VPC tier that you plan to use.
    zone
    In the string data section, enter the VPC zone where you want to create the Block Storage for VPC instance. Make sure that you use a zone that your worker nodes are connected to. To list VPC zones that your worker nodes use, run ibmcloud ks cluster get --cluster <cluster_name_or_ID> and look at the Worker Zones field in your CLI output. If you don't specify a zone, one of the worker node zones is automatically selected for your Block Storage for VPC instance.
    tags
    In the string data section, enter a comma-separated list of tags to use when the PVC is created. Tags can help you find your storage instance after it is created.
    resourceGroup
    In the string data section, enter the resource group that you want your Block Storage for VPC instance to get access to. If you don't enter a resource group, the instance is automatically authorized to access resources of the resource group that your cluster belongs to.
    encrypted
    In the string data section, enter true to create a secret that sets up encryption for Block Storage for VPC volumes. If you set this option to true, you must provide the root key CRN of your Key Protect service instance that you want to use in parameters.encryptionKey. For more information about encrypting your data, see Setting up encryption for your Block Storage for VPC.
    encryptionKey
    In the data section, if you entered true for parameters.encrypted, then enter the root key CRN of your Key Protect service instance that you want to use to encrypt your Block Storage for VPC volumes. To use your root key CRN in a secret, you must first convert it to base64 by running echo -n "<root_key_CRN>" | base64. For more information about encrypting your data, see Setting up encryption for your Block Storage for VPC.
  3. Create your Kubernetes secret.

    kubectl apply -f secret.yaml
    
  4. Follow the steps in Adding Block Storage for VPC to your apps to create a PVC with your custom settings. Make sure to create the PVC with the customized storage class that the cluster admin created and use the same name for your PVC that you used for your secret. Using the same name for the secret and the PVC triggers the storage provider to apply the settings of the secret in your PVC.

Enforcing base64 encoding for the Key Protect root key CRN

  1. As the cluster admin, create a Kubernetes secret that includes the base64 encoded value for your Key Protect root key CRN. To retrieve the root key CRN, see Setting up encryption for your Block Storage for VPC.

    apiVersion: v1
    kind: Secret
    type: vpc.block.csi.ibm.io
    metadata:
      name: <secret_name>
      namespace: <namespace_name>
    stringData:
      encrypted: "<true_or_false>"
      resourceGroup: "<resource_group>"
    data:
      encryptionKey: <encryption_key>
    
    name
    Enter a name for your Kubernetes secret.
    namespace
    Enter the namespace where you want to create your secret. To reference the secret in your PVC, the PVC must be created in the same namespace.
    encrypted
    In the string data section, enter true to create a secret that sets up encryption for Block Storage for VPC volumes. If you set this option to true, you must provide the root key CRN of your Key Protect service instance that you want to use in parameters.encryptionKey. For more information about encrypting your data, see Setting up encryption for your Block Storage for VPC.
    encryptionKey
    In the data section, if you entered true for parameters.encrypted, then enter the root key CRN of your Key Protect service instance that you want to use to encrypt your Block Storage for VPC volume. To use your root key CRN in a secret, you must first convert it to base64 by running echo -n "<root_key_CRN>" | base64. For more information about encrypting your data, see Setting up encryption for your Block Storage for VPC.
  2. Create the Kubernetes secret.

    kubectl apply -f secret.yaml
    
  3. Follow the steps to create a customized storage class. In the customized storage class YAML file, reference the Kubernetes secret in the parameters section as follows. Make sure to enter the name of the Kubernetes secret that you created earlier and the namespace where you created the secret.

    csi.storage.k8s.io/provisioner-secret-name: <secret_name>
    csi.storage.k8s.io/provisioner-secret-namespace: <secret_namespace>
    
  4. As the cluster user, follow the steps in Adding Block Storage for VPC to your apps to create a PVC from your customized storage class.

Setting up volume expansion

To provision volumes that support expansion, you must use a storage class that has allowVolumeExpansion set to true.

You can only expand volumes that are mounted by an app pod.

Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.

  1. If you are not using version 4.2 or later of the add-on, update the Block Storage for VPC add-on in your cluster.

  2. Create a PVC that uses a storage class that supports volume expansion.

  3. Deploy an app that uses your PVC. When you create your app, make a note of the mountPath that you specify.

  4. After your PVC is mounted by an app pod, you can expand your volume. Edit your PVC and increase the value of the spec.resources.requests.storage field.

    kubectl edit pvc <pvc-name>
    

    Example

    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
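
    Alternatively, you can make the same change without opening an editor by patching the PVC. This sketch increases the request to 20Gi; use any size that your storage class profile supports.

    kubectl patch pvc <pvc-name> -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'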
    
  5. Save and close the PVC.

  6. Optional: Verify that your volume is expanded. Get the details of your PVC and make a note of the PV name.

    kubectl get pvc <pvc-name>
    
  7. Describe your PV and make a note of the volume ID.

    kubectl describe pv <pv-name>
    
  8. Get the details of your Block Storage for VPC volume and verify the capacity.

    ibmcloud is vol <volume-ID>
    

Manually expanding volumes before add-on version 4.2

Complete the following steps to manually expand your existing Block Storage for VPC volumes that were created before version 4.2 of the add-on.

You can only expand volumes that are mounted by an app pod.

  1. Get the details of your app and make a note of the PVC name and mountPath.

    kubectl get pod <pod-name> -n <pod-namespace> -o yaml
    
  2. Get the details of your PVC and make a note of the PV name.

    kubectl get pvc
    
  3. Describe your PV and get the volumeId.

    kubectl describe pv <pv-name> | grep volumeId
    

    Example output for a volume ID of r011-a1aaa1f1-3aaa-4a73-84aa-0aa32e11a1a1.

    volumeId=r011-a1aaa1f1-3aaa-4a73-84aa-0aa32e11a1a1
    
  4. Resize the volume by using a PATCH request. The following example resizes a volume to 250 GiB.

    curl -sS -X PATCH -H "Authorization: <iam_token>" "https://<region>.iaas.cloud.ibm.com/v1/volumes/<volumeId>?generation=2&version=2020-06-16" -d '{"capacity":250}'
    
    <iam_token>
    Your IAM token. To retrieve your IAM token, run ibmcloud iam oauth-tokens.
    <region>
    The region your cluster is in, for example us-south.
    <volumeId>
    The volume ID that you retrieved earlier. For example r011-a1aaa1f1-3aaa-4a73-84aa-0aa32e11a1a1.
    <capacity>
    The increased capacity in GiB, for example 250.
  5. Log in to your app pod.

    kubectl exec <pod-name> -it -- bash
    
  6. Run the following command to use host binaries.

    chroot /host
    
  7. Get the file system details and make a note of the Filesystem path that you want to update. You can also grep for the mount path as specified in your application pod. df -h | grep <mount-path>.

    df -h
    

    Example output

    Filesystem      Size  Used Avail Use% Mounted on
    overlay          98G   64G   29G  70% /
    tmpfs            64M     0   64M   0% /dev
    tmpfs            32G     0   32G   0% /sys/fs/cgroup
    shm              64M     0   64M   0% /dev/shm
    /dev/vda2        98G   64G   29G  70% /etc/hosts
    /dev/vdg        9.8G   37M  9.8G   1% /mount-path # Note the Filesystem path that corresponds to the mountPath that you specified in your app.
    tmpfs            32G   40K   32G   1% /run/secrets/kubernetes.io/serviceaccount
    tmpfs            32G     0   32G   0% /proc/acpi
    tmpfs            32G     0   32G   0% /proc/scsi
    tmpfs            32G     0   32G   0% /sys/firmware
    
  8. Resize the file system.

    sudo resize2fs <filesystem-path>
    

    Example command

    sudo resize2fs /dev/vdg
    
  9. Verify the file system is resized.

    df -h
    

Backing up and restoring data

Data on Block Storage for VPC is secured across redundant fault zones in your region. To manually back up your data, use the Kubernetes kubectl cp command.

You can use the kubectl cp command to copy files and directories to and from pods or specific containers in your cluster.

Before you begin: Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.

To back up or restore data, choose between the following options:

Copy data from your local machine to a pod in your cluster.

kubectl cp <local_filepath>/<filename> <namespace>/<pod>:<pod_filepath>

Copy data from a pod in your cluster to your local machine.

kubectl cp <namespace>/<pod>:<pod_filepath>/<filename> <local_filepath>/<filename>

Copy data from your local machine to a specific container that runs in a pod in your cluster.

kubectl cp <local_filepath>/<filename> <namespace>/<pod>:<pod_filepath> -c <container>
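
For example, the following commands back up the contents of a volume that is mounted at /mount-path in a pod named my-pod in the default namespace, and then restore that data into the pod. The pod name and paths are placeholders for illustration.

kubectl cp default/my-pod:/mount-path ./volume-backup

kubectl cp ./volume-backup default/my-pod:/mount-path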

Storage class reference

For more information about pricing, see Pricing information.

10 IOPS tier

Name
ibmc-vpc-block-10iops-tier
ibmc-vpc-block-retain-10iops-tier
ibmc-vpc-block-metro-10iops-tier
ibmc-vpc-block-metro-retain-10iops-tier
ibmc-vpcblock-odf-10iops
ibmc-vpcblock-odf-ret-10iops
File system
ext4
Corresponding Block Storage for VPC tier
10 IOPS/GB
Volume binding mode
ibmc-vpc-block-10iops-tier: Immediate
ibmc-vpc-block-retain-10iops-tier: Immediate
ibmc-vpc-block-metro-10iops-tier: WaitForFirstConsumer
ibmc-vpc-block-metro-retain-10iops-tier: WaitForFirstConsumer
ibmc-vpcblock-odf-10iops: WaitForFirstConsumer
ibmc-vpcblock-odf-ret-10iops: WaitForFirstConsumer
Reclaim policy
ibmc-vpc-block-10iops-tier: Delete
ibmc-vpc-block-retain-10iops-tier: Retain
ibmc-vpc-block-metro-10iops-tier: Delete
ibmc-vpc-block-metro-retain-10iops-tier: Retain
ibmc-vpcblock-odf-10iops: Delete
ibmc-vpcblock-odf-ret-10iops: Retain
Billing
Hourly

5 IOPS tier

Name
ibmc-vpc-block-5iops-tier
ibmc-vpc-block-retain-5iops-tier
ibmc-vpc-block-metro-5iops-tier
ibmc-vpc-block-metro-retain-5iops-tier
ibmc-vpcblock-odf-5iops
ibmc-vpcblock-odf-ret-5iops
File system
ext4
Corresponding Block Storage for VPC tier
5 IOPS/GB
Volume binding mode
ibmc-vpc-block-5iops-tier: Immediate
ibmc-vpc-block-retain-5iops-tier: Immediate
ibmc-vpc-block-metro-5iops-tier: WaitForFirstConsumer
ibmc-vpc-block-metro-retain-5iops-tier: WaitForFirstConsumer
ibmc-vpcblock-odf-5iops: WaitForFirstConsumer
ibmc-vpcblock-odf-ret-5iops: WaitForFirstConsumer
Reclaim policy
ibmc-vpc-block-5iops-tier: Delete
ibmc-vpc-block-retain-5iops-tier: Retain
ibmc-vpc-block-metro-5iops-tier: Delete
ibmc-vpc-block-metro-retain-5iops-tier: Retain
ibmc-vpcblock-odf-5iops: Delete
ibmc-vpcblock-odf-ret-5iops: Retain
Billing
Hourly

Custom

Name
ibmc-vpc-block-custom
ibmc-vpc-block-retain-custom
ibmc-vpc-block-metro-custom
ibmc-vpc-block-metro-retain-custom
ibmc-vpcblock-odf-custom
ibmc-vpcblock-odf-ret-custom
File system
ext4
Corresponding Block Storage for VPC tier
Custom
Volume binding mode
ibmc-vpc-block-custom: Immediate
ibmc-vpc-block-retain-custom: Immediate
ibmc-vpc-block-metro-custom: WaitForFirstConsumer
ibmc-vpc-block-metro-retain-custom: WaitForFirstConsumer
ibmc-vpcblock-odf-custom: WaitForFirstConsumer
ibmc-vpcblock-odf-ret-custom: WaitForFirstConsumer
Reclaim policy
ibmc-vpc-block-custom: Delete
ibmc-vpc-block-retain-custom: Retain
ibmc-vpc-block-metro-custom: Delete
ibmc-vpc-block-metro-retain-custom: Retain
ibmc-vpcblock-odf-custom: Delete
ibmc-vpcblock-odf-ret-custom: Retain
Billing
Hourly

General purpose

Name
ibmc-vpc-block-general-purpose
ibmc-vpc-block-retain-general-purpose
ibmc-vpc-block-metro-general-purpose
ibmc-vpc-block-metro-retain-general-purpose
ibmc-vpcblock-odf-ret-general
ibmc-vpcblock-odf-general
File system
ext4
Corresponding Block Storage for VPC tier
3 IOPS/GB
Volume binding mode
ibmc-vpc-block-general-purpose: Immediate
ibmc-vpc-block-retain-general-purpose: Immediate
ibmc-vpc-block-metro-general-purpose: WaitForFirstConsumer
ibmc-vpc-block-metro-retain-general-purpose: WaitForFirstConsumer
ibmc-vpcblock-odf-ret-general: WaitForFirstConsumer
ibmc-vpcblock-odf-general: WaitForFirstConsumer
Reclaim policy
ibmc-vpc-block-general-purpose: Delete
ibmc-vpc-block-retain-general-purpose: Retain
ibmc-vpc-block-metro-general-purpose: Delete
ibmc-vpc-block-metro-retain-general-purpose: Retain
ibmc-vpcblock-odf-ret-general: Retain
ibmc-vpcblock-odf-general: Delete
Billing
Hourly