Local Storage Operator - File

Set up persistent storage by using local volumes for IBM Cloud Satellite® clusters. You can use Satellite storage templates to create storage configurations. When you assign a storage configuration to your clusters, the storage drivers of the selected storage provider are installed in your cluster.

When you create a local file storage configuration, you specify the local storage devices that you want to make available as persistent volumes (PVs) in your clusters. After you assign the storage configuration to a cluster, Satellite deploys the local storage operator, which mounts the local disks that you specified in your configuration. The operator then creates the persistent volumes with the file system type that you specify and creates the sat-local-file-gold storage class, which you can use to create persistent volume claims (PVCs). You can then reference your PVCs in your Kubernetes workloads.

Before you can deploy storage templates to clusters in your location, make sure that you set up Satellite Config by selecting the Enable cluster admin access for Satellite Config option in the console or by including the --enable-config-admin option when you create your cluster.
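
For example, a minimal sketch of creating a cluster with cluster admin access for Satellite Config enabled. The location and cluster names are illustrative, and the command typically needs additional options such as zone and worker pool settings; run ibmcloud oc cluster create satellite --help for details.

    ibmcloud oc cluster create satellite --location my-location --name my-cluster --enable-config-admin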

You cannot scope the Satellite storage service to resource groups. However, if you scope other resources, such as locations and clusters, to resource groups, you must assign the Satellite Reader and Link Administrator roles for all resources in the account.

Prerequisites for local file storage

Before you can create a local file storage configuration, you must identify the worker nodes in your clusters that have the required available disks. Then, label these worker nodes so that the local storage drivers are installed on only these worker nodes.

  1. Create a Satellite location.

  2. If you do not have any clusters in your location, create a Red Hat OpenShift on IBM Cloud cluster or attach existing Red Hat OpenShift on IBM Cloud clusters to your location. Ensure that the worker nodes in your cluster that you want to use in your storage configuration have at least one available local disk in addition to the disks required by Satellite. The extra disks must be unformatted.

  3. Get the device details of your worker nodes.

  4. Label the worker nodes that have an available disk and that you want to use in your configuration. The local storage drivers are installed only on the labeled worker nodes.

Getting the device details for your local file storage configuration

When you create your file storage configuration, you must specify which devices you want to use. The device paths that you retrieve in the following steps are specified as parameters when you create your configuration.

  1. Log in to your cluster and get a list of available worker nodes. Make a note of the worker nodes that you want to use in your configuration.

    oc get nodes
    
  2. Log in to each worker node that you want to use for your local storage configuration.

    oc debug node/<node-name>
    
  3. When the debug pod is deployed on the worker node, run the following commands to list the available disks on the worker node.

    1. Allow host binaries.

      chroot /host
      
    2. List your devices.

      lsblk
      
    3. Get the details of your devices. Verify that the devices that you want to use are unmounted and unformatted.

      fdisk -l
      
  4. Review the command output for available disks. You must use unmounted disks for the local storage configuration. In the following example output from the lsblk command, the xvdc disk is unmounted and has no partitions.

    NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    xvda    202:0    0  100G  0 disk 
    |-xvda1 202:1    0    1G  0 part /boot
    `-xvda2 202:2    0   99G  0 part /
    xvdb    202:16   0    2G  0 disk 
    `-xvdb1 202:17   0    2G  0 part 
    xvdc    202:32   0  100G  0 disk 
    xvde    202:64   0   50G  0 disk /var/data
    xvdh    202:112  0   64M  0 disk
    
  5. Repeat the previous steps for each worker node that you want to use for your local file storage configuration.
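
If lsblk or fdisk shows an existing file system signature on a disk that you intend to use, the disk is not unformatted. One way to clear the signatures is the wipefs utility, sketched here with the example xvdc device from the previous output. This operation is destructive, so verify the device path before you run it.

    wipefs --all /dev/xvdc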

Labeling your worker nodes when using local file storage

After you have retrieved the device paths for the disks that you want to use in your configuration, label the worker nodes where the disks are located.

  1. Get the worker node IP addresses.

    oc get nodes
    
  2. Label the worker nodes that you retrieved earlier. The local storage drivers are deployed to the worker nodes with this label. You can use the storage=local-file label in the example command or you can create your own label in the key=value format.

    oc label nodes <worker-IP> <worker-IP> "storage=local-file"
    

    Example output

    node/<worker-IP> labeled
    node/<worker-IP> labeled
    
  3. Verify that the label is added to the worker nodes that you want to use. Run the following command to display the labels on your worker nodes and highlight the label that you added in the previous step.

    oc get nodes --show-labels | grep --color=always storage=local-file
    

Creating and assigning a configuration in the console

  1. Review the parameter reference.

  2. From the Locations console, select the location where you want to create a storage configuration.

  3. Select Storage > Create storage configuration.

  4. Enter a name for your configuration.

  5. Select the Storage type.

  6. Select the Version and click Next.

  7. If the Storage type that you selected accepts custom parameters, enter them on the Parameters tab.

  8. If the Storage type that you selected requires secrets, enter them on the Secrets tab.

  9. On the Storage classes tab, review the storage classes that are deployed by the configuration or create a custom storage class.

  10. On the Assign to service tab, select the service that you want to assign your configuration to.

  11. Click Complete to assign your storage configuration.

Creating a configuration in the CLI

  1. Review the parameter reference for the template version that you want to use.

  2. Log in to the IBM Cloud CLI.

    ibmcloud login
    
  3. List your Satellite locations and note the Managed from column.

    ibmcloud sat location ls
    
  4. Target the Managed from region of your Satellite location. For example, for wdc target us-east. For more information, see Satellite regions.

    ibmcloud target -r us-east
    
  5. If you use a resource group other than default, target it.

    ibmcloud target -g <resource-group>
    
  6. Copy one of the following example commands for the template version that you want to use. For more information about the command, see ibmcloud sat storage config create in the command reference.

    Example command to create a version 4.9 configuration.

    ibmcloud sat storage config create --location LOCATION --name NAME --template-name local-volume-file --template-version 4.9 --param "auto-discover-devices=AUTO-DISCOVER-DEVICES"  --param "label-key=LABEL-KEY"  --param "label-value=LABEL-VALUE"  [--param "devicepath=DEVICEPATH"]  --param "fstype=FSTYPE" 
    

    Example command to create a version 4.10 configuration.

    ibmcloud sat storage config create --location LOCATION --name NAME --template-name local-volume-file --template-version 4.10 --param "auto-discover-devices=AUTO-DISCOVER-DEVICES"  --param "label-key=LABEL-KEY"  --param "label-value=LABEL-VALUE"  [--param "devicepath=DEVICEPATH"]  --param "fstype=FSTYPE" 
    

    Example command to create a version 4.11 configuration.

    ibmcloud sat storage config create --location LOCATION --name NAME --template-name local-volume-file --template-version 4.11 --param "auto-discover-devices=AUTO-DISCOVER-DEVICES"  --param "label-key=LABEL-KEY"  --param "label-value=LABEL-VALUE"  [--param "devicepath=DEVICEPATH"]  --param "fstype=FSTYPE" 
    

    Example command to create a version 4.12 configuration.

    ibmcloud sat storage config create --location LOCATION --name NAME --template-name local-volume-file --template-version 4.12 --param "auto-discover-devices=AUTO-DISCOVER-DEVICES"  --param "label-key=LABEL-KEY"  --param "label-value=LABEL-VALUE"  [--param "devicepath=DEVICEPATH"]  --param "fstype=FSTYPE" 
    

    Example command to create a version 4.13 configuration.

    ibmcloud sat storage config create --location LOCATION --name NAME --template-name local-volume-file --template-version 4.13 --param "auto-discover-devices=AUTO-DISCOVER-DEVICES"  --param "label-key=LABEL-KEY"  --param "label-value=LABEL-VALUE"  [--param "devicepath=DEVICEPATH"]  --param "fstype=FSTYPE" 
    
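
    Example command with illustrative values, assuming the storage=local-file node label from the earlier steps and the /dev/xvdc device from the example lsblk output. Replace all values with your own.

    ibmcloud sat storage config create --location my-location --name local-file-config --template-name local-volume-file --template-version 4.13 --param "auto-discover-devices=false" --param "label-key=storage" --param "label-value=local-file" --param "devicepath=/dev/xvdc" --param "fstype=ext4"
    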
  7. Customize the command based on the settings that you want to use.

  8. Run the command to create a configuration.

  9. Verify that your configuration was created.

    ibmcloud sat storage config get --config CONFIG
    

Creating a configuration in the API

  1. Generate an API key, then request a refresh token. For more information, see Generating an IBM Cloud IAM token by using an API key.

  2. Review the parameter reference for the template version that you want to use.

  3. Copy one of the following example requests and replace the variables that you want to use.

    Example request to create a version 4.9 configuration.

    curl -X POST "https://containers.cloud.ibm.com/global/v2/storage/satellite/createStorageConfigurationByController" -H "accept: application/json" -H "Authorization: Bearer TOKEN" -H "Content-Type: application/json" -d "{ \"config-name\": \"string\", \"controller\": \"string\", \"storage-class-parameters\": [ { \"additionalProp1\": \"string\", \"additionalProp2\": \"string\", \"additionalProp3\": \"string\" } ], \"storage-template-name\": \"local-volume-file\", \"storage-template-version\": \"4.9\", \"update-assignments\": true, \"user-config-parameters\": { \"auto-discover-devices\": \"AUTO-DISCOVER-DEVICES\", \"label-key\": \"LABEL-KEY\", \"label-value\": \"LABEL-VALUE\", \"devicepath\": \"DEVICEPATH\", \"fstype\": \"FSTYPE\" }, \"user-secret-parameters\": {} }"
    

    Example request to create a version 4.10 configuration.

    curl -X POST "https://containers.cloud.ibm.com/global/v2/storage/satellite/createStorageConfigurationByController" -H "accept: application/json" -H "Authorization: Bearer TOKEN" -H "Content-Type: application/json" -d "{ \"config-name\": \"string\", \"controller\": \"string\", \"storage-class-parameters\": [ { \"additionalProp1\": \"string\", \"additionalProp2\": \"string\", \"additionalProp3\": \"string\" } ], \"storage-template-name\": \"local-volume-file\", \"storage-template-version\": \"4.10\", \"update-assignments\": true, \"user-config-parameters\": { \"auto-discover-devices\": \"AUTO-DISCOVER-DEVICES\", \"label-key\": \"LABEL-KEY\", \"label-value\": \"LABEL-VALUE\", \"devicepath\": \"DEVICEPATH\", \"fstype\": \"FSTYPE\" }, \"user-secret-parameters\": {} }"
    

    Example request to create a version 4.11 configuration.

    curl -X POST "https://containers.cloud.ibm.com/global/v2/storage/satellite/createStorageConfigurationByController" -H "accept: application/json" -H "Authorization: Bearer TOKEN" -H "Content-Type: application/json" -d "{ \"config-name\": \"string\", \"controller\": \"string\", \"storage-class-parameters\": [ { \"additionalProp1\": \"string\", \"additionalProp2\": \"string\", \"additionalProp3\": \"string\" } ], \"storage-template-name\": \"local-volume-file\", \"storage-template-version\": \"4.11\", \"update-assignments\": true, \"user-config-parameters\": { \"auto-discover-devices\": \"AUTO-DISCOVER-DEVICES\", \"label-key\": \"LABEL-KEY\", \"label-value\": \"LABEL-VALUE\", \"devicepath\": \"DEVICEPATH\", \"fstype\": \"FSTYPE\" }, \"user-secret-parameters\": {} }"
    

    Example request to create a version 4.12 configuration.

    curl -X POST "https://containers.cloud.ibm.com/global/v2/storage/satellite/createStorageConfigurationByController" -H "accept: application/json" -H "Authorization: Bearer TOKEN" -H "Content-Type: application/json" -d "{ \"config-name\": \"string\", \"controller\": \"string\", \"storage-class-parameters\": [ { \"additionalProp1\": \"string\", \"additionalProp2\": \"string\", \"additionalProp3\": \"string\" } ], \"storage-template-name\": \"local-volume-file\", \"storage-template-version\": \"4.12\", \"update-assignments\": true, \"user-config-parameters\": { \"auto-discover-devices\": \"AUTO-DISCOVER-DEVICES\", \"label-key\": \"LABEL-KEY\", \"label-value\": \"LABEL-VALUE\", \"devicepath\": \"DEVICEPATH\", \"fstype\": \"FSTYPE\" }, \"user-secret-parameters\": {} }"
    

    Example request to create a version 4.13 configuration.

    curl -X POST "https://containers.cloud.ibm.com/global/v2/storage/satellite/createStorageConfigurationByController" -H "accept: application/json" -H "Authorization: Bearer TOKEN" -H "Content-Type: application/json" -d "{ \"config-name\": \"string\", \"controller\": \"string\", \"storage-class-parameters\": [ { \"additionalProp1\": \"string\", \"additionalProp2\": \"string\", \"additionalProp3\": \"string\" } ], \"storage-template-name\": \"local-volume-file\", \"storage-template-version\": \"4.13\", \"update-assignments\": true, \"user-config-parameters\": { \"auto-discover-devices\": \"AUTO-DISCOVER-DEVICES\", \"label-key\": \"LABEL-KEY\", \"label-value\": \"LABEL-VALUE\", \"devicepath\": \"DEVICEPATH\", \"fstype\": \"FSTYPE\" }, \"user-secret-parameters\": {} }"
    
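
    Example request with illustrative values that follows the same JSON shape as the templates above. The config name, controller ID, and parameter values are assumptions; replace them with your own.

    curl -X POST "https://containers.cloud.ibm.com/global/v2/storage/satellite/createStorageConfigurationByController" -H "accept: application/json" -H "Authorization: Bearer TOKEN" -H "Content-Type: application/json" -d "{ \"config-name\": \"local-file-config\", \"controller\": \"my-location-id\", \"storage-template-name\": \"local-volume-file\", \"storage-template-version\": \"4.13\", \"update-assignments\": true, \"user-config-parameters\": { \"auto-discover-devices\": \"false\", \"label-key\": \"storage\", \"label-value\": \"local-file\", \"devicepath\": \"/dev/xvdc\", \"fstype\": \"ext4\" }, \"user-secret-parameters\": {} }"
    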

Creating an assignment in the CLI

  1. List your storage configurations and make a note of the storage configuration that you want to assign to your clusters.

    ibmcloud sat storage config ls
    
  2. Get the ID of the cluster, cluster group, or service that you want to assign storage to.

    To make sure that your cluster is registered with Satellite Config or to create groups, see Setting up clusters to use with Satellite Config.

    Example command to list cluster groups.

    ibmcloud sat group ls
    

    Example command to list clusters.

    ibmcloud oc cluster ls --provider satellite
    

    Example command to list Satellite services.

    ibmcloud sat service ls --location <location>
    
  3. Assign your storage configuration to the cluster, group, or service that you retrieved earlier. For more information, see the ibmcloud sat storage assignment create command.

    Example command to assign a configuration to a cluster group.

    ibmcloud sat storage assignment create --group GROUP --config CONFIG --name NAME
    

    Example command to assign a configuration to a cluster.

    ibmcloud sat storage assignment create --cluster CLUSTER --config CONFIG --name NAME
    

    Example command to assign a configuration to a service cluster.

    ibmcloud sat storage assignment create --service-cluster-id CLUSTER --config CONFIG --name NAME
    
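
    Example command with illustrative values; the cluster ID, configuration name, and assignment name are assumptions.

    ibmcloud sat storage assignment create --cluster my-cluster-id --config local-file-config --name local-file-assignment
    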
  4. Verify that your assignment is created.

    ibmcloud sat storage assignment ls (--cluster CLUSTER | --config CONFIG | --location LOCATION | --service-cluster-id CLUSTER)
    

Creating a storage assignment in the API

  1. Copy one of the following example requests.

    Example request to assign a configuration to a cluster.

    curl -X POST "https://containers.cloud.ibm.com/global/v2/storage/satellite/createAssignmentByCluster" -H "accept: application/json" -H "Authorization: Bearer TOKEN" -H "Content-Type: application/json" -d "{ \"channelName\": \"CONFIGURATION-NAME\", \"cluster\": \"CLUSTER-ID\", \"controller\": \"LOCATION-ID\", \"name\": \"ASSIGNMENT-NAME\"}"
    

    Example request to assign a configuration to a cluster group.

    curl -X POST "https://containers.cloud.ibm.com/global/v2/storage/satellite/createAssignment" -H "accept: application/json" -H "Authorization: Bearer TOKEN" -H "Content-Type: application/json" -d "{ \"channelName\": \"CONFIGURATION-NAME\", \"cluster\": \"string\", \"groups\": [ \"CLUSTER-GROUP\" ], \"name\": \"ASSIGNMENT-NAME\"}"
    
  2. Replace the variables with your details and run the request.

  3. Verify that the assignment was created by listing your assignments.

    curl -X GET "https://containers.cloud.ibm.com/global/v2/storage/satellite/getAssignments" -H "accept: application/json" -H "Authorization: Bearer TOKEN"
    

Updating storage assignments in the console

You can use the Satellite console to apply the latest patch updates to your assignments.

  1. From the Locations page in the Satellite console, select your location.

  2. Click the Storage tab to view your configurations.

  3. Click the configuration you want to update.

  4. Click the Information (i) icon to apply the latest revision or patch.

  5. Optional: Enable automatic patch updates for your storage assignment. Enabling automatic patch updates ensures that your assignment always has the latest security fixes.

If you enable automatic patch updates, you must still apply major updates manually.

Manually upgrading assignments in the CLI

Upgrade an assignment to use the latest storage template revision.

  1. List your Satellite storage assignments and make a note of the assignment that you want to upgrade.

    ibmcloud sat storage assignment ls
    
  2. List the Satellite storage templates to see the latest available versions.

    ibmcloud sat storage template ls
    
  3. Upgrade the Satellite assignment.

    Example command to upgrade an assignment.

    ibmcloud sat storage assignment upgrade --assignment ASSIGNMENT
    

Enabling automatic patch updates for configurations and assignments in the CLI

You can use the sat storage assignment autopatch enable CLI command to enable automatic patch updates for your assignments. Enabling automatic patch updates applies the latest storage template revisions (patches) automatically. You must still apply major updates manually.

  1. List your Satellite storage configurations. Make a note of the configuration ID.

    ibmcloud sat storage config ls
    
  2. Run one of the following example commands to enable or disable automatic patch updates for your configuration and its associated assignments. Enter the configuration ID that you retrieved in the previous step.

    Example command to enable automatic patch updates for one or more assignments.

    ibmcloud sat storage assignment autopatch enable --config CONFIG  (--all | --assignment ASSIGNMENT-ID [--assignment ASSIGNMENT-ID])
    

    Example command to enable automatic patch updates for all storage assignments under a given configuration.

    ibmcloud sat storage assignment autopatch enable --config CONFIG --all
    

    Example command to disable automatic patch updates for all assignments under a specific configuration.

    ibmcloud sat storage assignment autopatch disable --config CONFIG --all
    

    Example command to disable automatic patch updates for a single assignment under a specific configuration.

    ibmcloud sat storage assignment autopatch disable --config CONFIG --assignment ASSIGNMENT-ID
    

    Example command to disable automatic patch updates for multiple assignments under a specific configuration.

    ibmcloud sat storage assignment autopatch disable --config CONFIG --assignment ASSIGNMENT-ID --assignment ASSIGNMENT-ID
    

Manually upgrading configurations in the CLI

You can upgrade your Satellite storage configurations to get the latest storage template revision within the same major version.

  1. List your Satellite storage configurations and make a note of the configuration that you want to upgrade.

    ibmcloud sat storage config ls
    
  2. Upgrade the Satellite configuration. Note that only the configuration is upgraded. To also upgrade the assignments that use this configuration, specify the --include-assignments option, or update each assignment manually by using the assignment update command.

    Example command to upgrade a configuration to the latest revision.

    ibmcloud sat storage config upgrade --config CONFIG [--include-assignments]
    

    Example command to upgrade a configuration and its associated assignments to the latest revision.

    ibmcloud sat storage config upgrade --config CONFIG --include-assignments
    

Upgrading a configuration and assignments in the API

You can use the /v2/storage/satellite/updateAssignment API to update your assignments with new clusters or cluster groups. Set updateConfigVersion to true to apply the revision update.

  1. Copy the following example request and replace the variables for the cluster groups and assignments that you want to update.

    curl -X PATCH "https://containers.cloud.ibm.com/global/v2/storage/satellite/updateAssignment" -H "accept: application/json" -H "Authorization: Bearer TOKEN" -H "Content-Type: application/json" -d "{ \"groups\": [ \"CLUSTER-GROUPS\" ], \"name\": \"ASSIGNMENT-NAME\", \"updateConfigVersion\": true, \"uuid\": \"ASSIGNMENT-ID\"}"
    
  2. Run the request.

  3. Get the details of your assignment to verify the update.

    curl -X GET "https://containers.cloud.ibm.com/global/v2/storage/satellite/getAssignment?uuid=ASSIGNMENT-ID" -H "accept: application/json" -H "Authorization: Bearer TOKEN"
    

Enabling automatic patch updates for assignments in the API

You can use the /v2/storage/satellite/setAssignmentAutoupgrade API to enable automatic patch updates for your assignments. Enabling automatic patch updates applies the latest storage template revisions (patches) automatically. You must still apply major updates manually.

  1. Copy the following example request and replace the variables for the cluster groups and assignments that you want to update.

    curl -X PATCH "https://containers.cloud.ibm.com/global/v2/storage/satellite/setAssignmentAutoupgrade" -H "accept: application/json" -H "Authorization: Bearer TOKEN" -H "Content-Type: application/json" -d "{ \"config\": \"string\", \"controller\": \"string\", \"autopatch\": boolean, \"assignment\": { \"all\": boolean, \"uuid\": [\"string\", \"string\", ...] } }"
    
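
    Example request with illustrative values that enables automatic patching for all assignments under a configuration. The config and controller IDs are assumptions; replace them with your own.

    curl -X PATCH "https://containers.cloud.ibm.com/global/v2/storage/satellite/setAssignmentAutoupgrade" -H "accept: application/json" -H "Authorization: Bearer TOKEN" -H "Content-Type: application/json" -d "{ \"config\": \"local-file-config\", \"controller\": \"my-location-id\", \"autopatch\": true, \"assignment\": { \"all\": true } }"
    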
  2. Run the request.

  3. Get the details of your assignment to verify the upgrade.

    curl -X GET "https://containers.cloud.ibm.com/global/v2/storage/satellite/getAssignment?uuid=ASSIGNMENT-ID" -H "accept: application/json" -H "Authorization: Bearer TOKEN"
    
  4. Verify that the storage configuration resources are deployed. Get a list of all the resources in the local-storage namespace.

    oc get all -n local-storage
    

    Example output

    NAME                                         READY   STATUS    RESTARTS   AGE
    pod/local-disk-local-diskmaker-cpk4r         1/1     Running   0          30s
    pod/local-disk-local-provisioner-xstjh       1/1     Running   0          30s
    pod/local-storage-operator-96c444dfc-ttpmq   1/1     Running   0          35s
    
    NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
    service/local-storage-operator   ClusterIP   172.21.173.238   <none>        60000/TCP   32s
    
    NAME                                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/local-disk-local-diskmaker     1         1         1       1            1           <none>          31s
    daemonset.apps/local-disk-local-provisioner   1         1         1       1            1           <none>          31s
    
    NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/local-storage-operator   1/1     1            1           36s
    
    NAME                                               DESIRED   CURRENT   READY   AGE
    replicaset.apps/local-storage-operator-96c444dfc   1         1         1       37s
    
  5. List the storage classes that are available.

    oc get sc -n local-storage | grep local
    

    Example output

    sat-local-file-gold       kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  21m
    
  6. List the PVs and verify that the status is Available. The local disks that you specified when you created your configuration are available as persistent volumes.

    oc get pv
    

    Example output

    NAME               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS          REASON   AGE
    local-pv-1d14680   50Gi       RWO            Delete           Available           sat-local-file-gold            50s
    
  7. Create a PVC that references your local PV, then deploy an app that uses your local storage.

Deploying an app that uses your local file storage

After you create a local file storage configuration and assign it to your clusters, you can then create an app that uses your local file storage.

You can map your PVCs to specific persistent volumes by adding labels to your persistent volumes. For more information, see the Kubernetes documentation for selectors.
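
For example, a minimal sketch of this mapping: add an illustrative label to one of your local PVs, then reference that label in a selector in your PVC. The PV name, label key, and label value are assumptions; replace them with your own.

    oc label pv local-pv-1d14680 storage-tier=gold

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: local-pvc-selected
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem
      storageClassName: sat-local-file-gold
      resources:
        requests:
          storage: 20Gi
      selector:
        matchLabels:
          storage-tier: gold # Binds only to PVs that carry this label.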

  1. Save the following YAML to a file on your local machine called local-pvc.yaml.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: local-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Filesystem
      resources:
        requests:
          storage: 20Gi # Important: Ensure that the size of your claim is not larger than the local disk.
      storageClassName: sat-local-file-gold
    
  2. Create the PVC in your cluster.

    oc create -f local-pvc.yaml
    
  3. Verify that your PVC is created.

    oc get pvc | grep local
    

    To ensure that your pods are scheduled to worker nodes with storage, or to ensure that the apps that require storage are not preempted by other pods, you can specify nodeAffinity and set up pod priority. For more information, see the Kubernetes documentation for pod priority and preemption and setting node affinity.

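    For example, a minimal sketch of a PriorityClass that you could reference from your app pod by setting priorityClassName in the pod spec. The class name and value are assumptions.

      apiVersion: scheduling.k8s.io/v1
      kind: PriorityClass
      metadata:
        name: local-storage-priority
      value: 1000000
      globalDefault: false
      description: "Priority for apps that require local storage."
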
  4. Deploy an app pod that uses your local storage PVC. Save the following example app YAML as a file on your local machine called app.yaml. Be sure to enter the name of the PVC that you created earlier. In this example, the nodeAffinity spec ensures that the pod is scheduled only to a worker node that has the label that you specified earlier. In the later steps, you log in to this pod and write a test file to the local disk.

    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: storage
                    operator: In
                    values:
                      - local-file
      volumes:
        - name: local-pvc
          persistentVolumeClaim:
            claimName: local-pvc
      containers:
        - name: local-disks
          image: nginx
          ports:
            - containerPort: 80
              name: http-server
          volumeMounts:
            - mountPath: <mount-path-to-local-disk> # The path where the local disk is mounted in the container.
              name: local-pvc
    
  5. Create the app pod in your cluster.

    oc create -f app.yaml
    
  6. Log in to your app pod and verify that you can write to your local disk.

    oc exec -it <app-pod> -- bash
    
  7. Run the following command to change directories to the location of your local disk, write the test.txt file, and display the contents of the file.

    cd /<mount-path-to-local-disk> && echo "This is a test." >> test.txt && cat test.txt
    

    Example output

    This is a test.
    
  8. Remove the test file and log out of the pod.

    rm test.txt && exit
    

Removing the local file storage configuration from your cluster

If you no longer plan on using local file storage in your cluster, you can unassign your cluster from the storage configuration.

Note that if you remove the storage configuration, the local storage operator resources and the sat-local-file-gold storage class are then uninstalled from all assigned clusters. Your PVCs, PVs, and data are not removed. However, you might not be able to access your data until you reinstall the driver in your cluster.

Remove the local file storage configuration from the console

Use the console to remove a storage configuration.

  1. From the Satellite storage dashboard, select the storage configuration you want to delete.
  2. Select Actions > Delete.
  3. Enter the name of your storage configuration.
  4. Select Delete.

Remove the local file storage configuration from the command line

  1. List the resources in the local-storage namespace. When you delete your storage assignment, these resources are removed.

    oc get all -n local-storage
    

    Example output

    NAME                                         READY   STATUS    RESTARTS   AGE
    pod/local-disk-local-diskmaker-clvg6         1/1     Running   0          29h
    pod/local-disk-local-diskmaker-kqddq         1/1     Running   0          29h
    pod/local-disk-local-diskmaker-p6z9q         1/1     Running   0          29h
    pod/local-disk-local-provisioner-dw5g7       1/1     Running   0          29h
    pod/local-disk-local-provisioner-hxd9n       1/1     Running   0          29h
    pod/local-disk-local-provisioner-tfg95       1/1     Running   0          29h
    pod/local-storage-operator-df4994656-7826l   1/1     Running   0          29h
    
    NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
    service/local-storage-operator   ClusterIP   172.21.147.17   <none>        60000/TCP   29h
    
    NAME                                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/local-disk-local-diskmaker     3         3         3       3            3           <none>          29h
    daemonset.apps/local-disk-local-provisioner   3         3         3       3            3           <none>          29h
    
    NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/local-storage-operator   1/1     1            1           29h
    
    NAME                                               DESIRED   CURRENT   READY   AGE
    replicaset.apps/local-storage-operator-df4994656   1         1         1       29h
    
  2. List your storage assignments and find the one that you used for your cluster.

    ibmcloud sat storage assignment ls (--cluster CLUSTER | --config CONFIG | --location LOCATION | --service-cluster-id CLUSTER)
    
  3. Remove the assignment. After the assignment is removed, the local storage driver pods and storage classes are removed from all clusters that were part of the storage assignment.

    ibmcloud sat storage assignment rm --assignment <assignment_ID>
    
  4. List the resources in the local-storage namespace and verify that the local storage driver pods are removed.

    oc get all -n local-storage
    

    Example output

    No resources found in local-storage namespace.
    
  5. List the storage classes in your cluster and verify that the local storage classes are removed.

    oc get sc
    
  6. Optional: Remove the storage configuration.

    1. List the storage configurations.
      ibmcloud sat storage config ls
      
    2. Remove the storage configuration.
      ibmcloud sat storage config rm --config <config_name>
      
  7. List your PVCs and note the name of the PVC that you want to remove.

    oc get pvc
    
  8. Remove any pods that currently mount the PVC.

    1. List all the pods that currently mount the PVC that you want to delete. If no pods are returned, you do not have any pods that currently use your PVC.

      oc get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.volumes[*]}{.persistentVolumeClaim.claimName}{" "}{end}{end}' | grep "<pvc_name>"
      

      Example output

      app    sat-local-file-gold
      
    2. Remove the pod that uses the PVC. If the pod is part of a deployment, remove the deployment.

      oc delete pod <pod_name>
      
      oc delete deployment <deployment-name>
      
    3. Verify that the pod or the deployment is removed.

      oc get pods
      
      oc get deployments
      
  9. Delete the PVC. Because all IBM-provided local file storage classes are specified with a Retain reclaim policy, the PV and PVC are not automatically deleted when you delete your app or deployment.

    oc delete pvc <pvc-name>
    
  10. Verify that your PVC is removed.

    oc get pvc
    
  11. List your PVs and note the name of the PVs that you want to remove.

    oc get pv
    
  12. Delete the PVs. Deleting your PVs makes your disks available for other workloads.

    oc delete pv <pv-name>
    
  13. Verify that your PV is removed.

    oc get pv
    

Parameter reference

4.9 parameter reference

| Display name | CLI option | Type | Description | Required? | Default value |
| --- | --- | --- | --- | --- | --- |
| Automatic storage volume discovery | auto-discover-devices | Config | Set to true if you want to automatically discover and use the storage volumes on your worker nodes. | true | false |
| Node Label Key | label-key | Config | The key of the worker node key=value label. | true | N/A |
| Node Label Key Value | label-value | Config | The value of the worker node key=value label. | true | N/A |
| Device Path | devicepath | Config | The local storage device path. Example: /dev/sdc. Required when auto-discover-devices is set to false. | false | N/A |
| File System type | fstype | Config | The file system type. Specify ext3, ext4, or xfs. | true | ext4 |

4.10 parameter reference

| Display name | CLI option | Type | Description | Required? | Default value |
| --- | --- | --- | --- | --- | --- |
| Automatic storage volume discovery | auto-discover-devices | Config | Set to true if you want to automatically discover and use the storage volumes on your worker nodes. | true | false |
| Node Label Key | label-key | Config | The key of the worker node key=value label. | true | N/A |
| Node Label Key Value | label-value | Config | The value of the worker node key=value label. | true | N/A |
| Device Path | devicepath | Config | The local storage device path. Example: /dev/sdc. Required when auto-discover-devices is set to false. | false | N/A |
| File System type | fstype | Config | The file system type. Specify ext3, ext4, or xfs. | true | ext4 |

4.11 parameter reference

| Display name | CLI option | Type | Description | Required? | Default value |
| --- | --- | --- | --- | --- | --- |
| Automatic storage volume discovery | auto-discover-devices | Config | Set to true if you want to automatically discover and use the storage volumes on your worker nodes. | true | false |
| Node Label Key | label-key | Config | The key of the worker node key=value label. | true | N/A |
| Node Label Key Value | label-value | Config | The value of the worker node key=value label. | true | N/A |
| Device Path | devicepath | Config | The local storage device path. Example: /dev/sdc. Required when auto-discover-devices is set to false. | false | N/A |
| File System type | fstype | Config | The file system type. Specify ext3, ext4, or xfs. | true | ext4 |

4.12 parameter reference

| Display name | CLI option | Type | Description | Required? | Default value |
| --- | --- | --- | --- | --- | --- |
| Automatic storage volume discovery | auto-discover-devices | Config | Set to true if you want to automatically discover and use the storage volumes on your worker nodes. | true | false |
| Node Label Key | label-key | Config | The key of the worker node key=value label. | true | N/A |
| Node Label Key Value | label-value | Config | The value of the worker node key=value label. | true | N/A |
| Device Path | devicepath | Config | The local storage device path. Example: /dev/sdc. Required when auto-discover-devices is set to false. | false | N/A |
| File System type | fstype | Config | The file system type. Specify ext3, ext4, or xfs. | true | ext4 |

4.13 parameter reference

| Display name | CLI option | Type | Description | Required? | Default value |
| --- | --- | --- | --- | --- | --- |
| Automatic storage volume discovery | auto-discover-devices | Config | Set to true if you want to automatically discover and use the storage volumes on your worker nodes. | true | false |
| Node Label Key | label-key | Config | The key of the worker node key=value label. | true | N/A |
| Node Label Key Value | label-value | Config | The value of the worker node key=value label. | true | N/A |
| Device Path | devicepath | Config | The local storage device path. Example: /dev/sdc. Required when auto-discover-devices is set to false. | false | N/A |
| File System type | fstype | Config | The file system type. Specify ext3, ext4, or xfs. | true | ext4 |

Storage class reference for local file storage

Review the Satellite storage classes for local file storage. You can describe storage classes in the command line with the oc describe sc <storage-class-name> command.
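
For example, to inspect the local file storage class:

    oc describe sc sat-local-file-gold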

| Storage class name | File system | Reclaim policy |
| --- | --- | --- |
| sat-local-file-gold | ext4 or xfs | Retain |

Getting help and support for local file storage

  1. Review the FAQs in the Red Hat OpenShift docs.
  2. Review the troubleshooting documentation to troubleshoot and resolve common issues.
  3. Check the status of the IBM Cloud platform and resources by going to the Status page.
  4. Review Stack Overflow to see whether other users experienced the same problem. Tag any questions with ibm-cloud so that they are seen by the IBM Cloud development teams.
  5. If you run into an issue with the Local File Storage Operator, you can open an issue in the Red Hat Customer Portal.