Installing Portworx in your cluster

Provision a Portworx service instance from the IBM Cloud catalog. After you create the service instance, the latest Portworx enterprise edition (px-enterprise) is installed on your cluster by using Helm. In addition, Stork is also installed on your Red Hat OpenShift on IBM Cloud cluster. Stork is the Portworx storage scheduler. With Stork, you can co-locate pods with their data and create and restore snapshots of Portworx volumes.

Looking for instructions about how to update or remove Portworx? See Updating Portworx and Removing Portworx.

Before you begin:

To install Portworx:

  1. Open the Portworx service from the IBM Cloud catalog and complete the fields as follows:

    1. Select the region where your Red Hat OpenShift on IBM Cloud cluster is located.

    2. Review the Portworx pricing information.

    3. Enter a name for your Portworx service instance.

    4. Select the resource group that your cluster is in.

    5. In the Tag field, enter the name of the cluster where you want to install Portworx. After you create the Portworx service instance, the instance details don't show which cluster Portworx is installed in. To find the cluster more easily later, make sure that you enter the cluster name and any additional identifying information as tags.

    6. Enter an IBM Cloud API key to retrieve the list of clusters that you have access to. If you don't have an API key, see Managing user API keys. After you enter the API key, the Kubernetes or OpenShift cluster name field appears.
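
      If you need to create an API key first, a minimal sketch with the IBM Cloud CLI might look like the following; the key name and description are hypothetical examples.

      ibmcloud iam api-key-create portworx-install-key -d "API key for installing Portworx"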

    7. Enter a unique Portworx cluster name.

    8. In the Cloud Drives menu:

      1. Select Use Cloud Drives (VPC Clusters only) to dynamically provision Block Storage for VPC for Portworx. After selecting Use Cloud Drives, select the Storage class name and the Size of the block storage drives that you want to provision.
      2. Select Use Already Attached Drives (Classic, VPC, or Satellite) to use the block storage that is already attached to your worker nodes.
    9. From the Portworx metadata key-value store drop-down list, choose the type of key-value store that you want to use to store Portworx metadata. Select Portworx KVDB to automatically create a key-value store during the Portworx installation, or select Databases for etcd if you want to use an existing Databases for etcd instance. If you choose Databases for etcd, the Etcd API endpoints and Etcd secret name fields appear.

    10. Namespace: Enter the namespace where you want to deploy the Portworx resources.

    11. Required for Databases for etcd only: Enter the information of your Databases for etcd service instance.

      1. Retrieve the etcd endpoint, and the name of the Kubernetes secret that you created for your Databases for etcd service instance.
      2. In the Etcd API endpoints field, enter the API endpoint of your Databases for etcd service instance that you retrieved earlier. Make sure to enter the endpoint in the format etcd:<etcd_endpoint1>;etcd:<etcd_endpoint2>. If you have more than one endpoint, include all endpoints and separate them with a semicolon (;).
      3. In the Etcd secret name field, enter the name of the Kubernetes secret that you created in your cluster to store the Databases for etcd service credentials.
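
      For illustration, the endpoint format and a quick check that the credentials secret exists might look like the following sketch; the placeholder names are not values from your account.

        # Example Etcd API endpoints value with two endpoints, separated by a semicolon:
        #   etcd:https://<etcd_host_1>:<port>;etcd:https://<etcd_host_2>:<port>

        # Verify that the secret with your Databases for etcd credentials exists in the
        # namespace where you plan to install Portworx:
        oc get secrets -n <namespace>
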
    12. From the Kubernetes or OpenShift cluster name drop-down list, select the cluster where you want to install Portworx. If your cluster is not listed, make sure that you select the correct IBM Cloud region. If the region is correct, verify that you have the correct permissions to view and work with your cluster. Make sure that you select a cluster that meets the minimum hardware requirements for Portworx.

    13. Optional: From the Portworx secret store type drop-down list, choose the secret store type that you want to use to store the volume encryption key.

      • Kubernetes Secret: Choose this option if you want to store your own custom key to encrypt your volumes in a Kubernetes Secret in your cluster. The secret doesn't need to exist before you install Portworx; you can create it after the installation. For more information, see the Portworx documentation. A minimal example of creating such a secret follows this list.
      • IBM Key Protect: Choose this option if you want to use root keys in IBM Key Protect to encrypt your volumes. Make sure that you follow the instructions to create your IBM Key Protect service instance, and to store the credentials for how to access your service instance in a Kubernetes secret in the portworx project before you install Portworx.
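
      As a minimal sketch of the Kubernetes Secret option, you might create the secret that holds your custom key after the installation completes; the secret name, key name, and namespace are placeholders, so check the Portworx documentation for the exact values that your setup expects.

        # Placeholder names: replace <secret_name>, <key_name>, and <namespace> with the
        # values that the Portworx documentation specifies for your installation.
        oc create secret generic <secret_name> \
          --from-literal=<key_name>=<your_volume_encryption_key> \
          -n <namespace>
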
    14. Optional: If you want to set up a journal device or KVDB devices, enter the device details in the Advanced Options field. Choose from the following options for journal and KVDB devices.

      • Enter j;auto to allow Portworx to automatically create a 3 GB partition on one of your block storage devices to use for the journal.
      • Enter j;<device_path> to use a specific device for the journal. For example, enter j;/dev/vde to use the disk located at /dev/vde. To find the path of the device that you want to use for the journal, log in to a worker node and run lsblk.
      • Enter kvdb_dev;<device_path> to specify the device where you want to store internal KVDB data. For example, kvdb_dev;/dev/vdd. To find the path of the device that you want to use, log in to a worker node and run lsblk (see the example output after this list). To use a specific device for KVDB data, you must have an available storage device of 3 GB or more on at least 3 worker nodes. The devices must also be located on the same path on each worker node. For example: /dev/vdd.
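
      For reference, the lsblk output on a worker node might look like the following sketch; the device names and sizes are illustrative only. A device without partitions and without a mount point, such as /dev/vdc in this sketch, is a candidate for j;/dev/vdc or kvdb_dev;/dev/vdc.

        lsblk
        NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
        vda    252:0    0  100G  0 disk
        ├─vda1 252:1    0  256M  0 part /boot
        └─vda2 252:2    0 99.7G  0 part /
        vdb    252:16   0    2G  0 disk [SWAP]
        vdc    252:32   0   20G  0 disk
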
  2. Click Create to start the Portworx installation in your cluster. This process might take a few minutes to complete. The service details page opens with instructions for how to verify your Portworx installation, create a persistent volume claim (PVC), and mount the PVC to an app.

  3. From the IBM Cloud resource list, find the Portworx service that you created.

  4. Review the Status column to see if the installation succeeded or failed. The status might take a few minutes to update.

  5. If the Status changes to Provision failure, follow the instructions to start troubleshooting why your installation failed.

  6. If the Status changes to Provisioned, verify that your Portworx installation completed successfully and that all your local disks were recognized and added to the Portworx storage layer.

    1. List the Portworx pods in the kube-system project. The installation is successful when you see one or more portworx, stork, and stork-scheduler pods. The number of portworx pods equals the number of worker nodes in your Portworx cluster. All pods must be in a Running state.

      oc get pods -n kube-system | grep 'portworx\|stork'
      

      Example output

      portworx-594rw                          1/1       Running     0          20h
      portworx-rn6wk                          1/1       Running     0          20h
      portworx-rx9vf                          1/1       Running     0          20h
      stork-6b99cf5579-5q6x4                  1/1       Running     0          20h
      stork-6b99cf5579-slqlr                  1/1       Running     0          20h
      stork-6b99cf5579-vz9j4                  1/1       Running     0          20h
      stork-scheduler-7dd8799cc-bl75b         1/1       Running     0          20h
      stork-scheduler-7dd8799cc-j4rc9         1/1       Running     0          20h
      stork-scheduler-7dd8799cc-knjwt         1/1       Running     0          20h
      
    2. Log in to one of your portworx pods and list the status of your Portworx cluster.

      oc exec <portworx_pod> -it -n kube-system -- /opt/pwx/bin/pxctl status
      

      Example output

      Status: PX is operational
      License: Trial (expires in 30 days)
      Node ID: 10.176.48.67
      IP: 10.176.48.67
      Local Storage Pool: 1 pool
      POOL    IO_PRIORITY    RAID_LEVEL    USABLE    USED    STATUS    ZONE    REGION
        0    LOW        raid0        20 GiB    3.0 GiB    Online    dal10    us-south
      Local Storage Devices: 1 device
      Device    Path                        Media Type        Size        Last-Scan
          0:1    /dev/mapper/3600a09803830445455244c4a38754c66    STORAGE_MEDIUM_MAGNETIC    20 GiB        17 Sep 18 20:36 UTC
              total                            -            20 GiB
      Cluster Summary
      Cluster ID: mycluster
          Cluster UUID: a0d287ba-be82-4aac-b81c-7e22ac49faf5
      Scheduler: kubernetes
      Nodes: 2 node(s) with storage (2 online), 1 node(s) without storage (1 online)
        IP        ID        StorageNode    Used    Capacity    Status    StorageStatus    Version        Kernel            OS
        10.184.58.11    10.184.58.11    Yes        3.0 GiB    20 GiB        Online    Up        1.5.0.0-bc1c580    4.4.0-133-generic    Ubuntu 20.04.5 LTS
        10.176.48.67    10.176.48.67    Yes        3.0 GiB    20 GiB        Online    Up (This node)    1.5.0.0-bc1c580    4.4.0-133-generic    Ubuntu 20.04.5 LTS
        10.176.48.83    10.176.48.83    No        0 B    0 B        Online    No Storage    1.5.0.0-bc1c580    4.4.0-133-generic    Ubuntu 20.04.5 LTS
      Global Storage Pool
        Total Used        :  6.0 GiB
        Total Capacity    :  40 GiB
      
    3. Verify that all worker nodes that you wanted to include in your Portworx storage layer are included by reviewing the StorageNode column in the Cluster Summary section of your CLI output. Worker nodes that are in the storage layer are displayed with Yes in the StorageNode column.

      Because Portworx runs as a DaemonSet in your cluster, existing worker nodes are automatically inspected for raw block storage and added to the Portworx data layer when you deploy Portworx. If you add worker nodes to your cluster and add raw block storage to those workers, restart the Portworx pods on the new worker nodes so that Portworx detects the new devices, as shown in the example that follows.
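
      For example, one way to restart Portworx on a new worker node is to delete its portworx pod and let the DaemonSet re-create it; the pod name is a placeholder that you can look up with the oc get pods command from the first verification step.

      oc delete pod <portworx_pod> -n kube-system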

    4. Verify that each storage node is listed with the correct amount of raw block storage by reviewing the Capacity column in the Cluster Summary section of your CLI output.

    5. Review the Portworx I/O classification that was assigned to the disks that are part of the Portworx cluster. During the setup of your Portworx cluster, every disk is inspected to determine the performance profile of the device. The profile classification depends on the network bandwidth of your worker node and the type of storage device. Disks of SDS worker nodes are classified as high. If you manually attach disks to a virtual worker node, then these disks are classified as low due to the slower network speed that comes with virtual worker nodes.

      oc exec -it <portworx_pod> -n kube-system -- /opt/pwx/bin/pxctl cluster provision-status
      

      Example output

      NODE        NODE STATUS    POOL    POOL STATUS    IO_PRIORITY    SIZE    AVAILABLE    USED    PROVISIONED    RESERVEFACTOR    ZONE    REGION        RACK
      10.184.58.11    Up        0    Online        LOW        20 GiB    17 GiB        3.0 GiB    0 B        0        dal12    us-south    default
      10.176.48.67    Up        0    Online        LOW        20 GiB    17 GiB        3.0 GiB    0 B        0        dal10    us-south    default
      10.176.48.83    Up        0    Online        HIGH        3.5 TiB    3.5 TiB        10 GiB    0 B        0        dal10    us-south    default
      

Creating a Portworx volume

Start creating Portworx volumes by using Kubernetes dynamic provisioning.

  1. List available storage classes in your cluster and check whether you can use an existing Portworx storage class that was set up during the Portworx installation. The pre-defined storage classes are optimized for database usage and for sharing data across pods.

    oc get storageclasses | grep portworx
    

    To view the details of a storage class, run oc describe storageclass <storageclass_name>.

  2. If you don't want to use an existing storage class, create a customized storage class. For a full list of supported options that you can specify in your storage class, see Using Dynamic Provisioning.

    1. Create a configuration file for your storage class.

      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata:
        name: <storageclass_name>
      provisioner: kubernetes.io/portworx-volume
      parameters:
        repl: "<replication_factor>"
        secure: "<true_or_false>"
        priority_io: "<io_priority>"
        shared: "<true_or_false>"
      
      metadata.name
      Enter a name for your storage class.
      parameters.repl
      Enter the number of replicas for your data that you want to store across different worker nodes. Allowed values are 1, 2, or 3. For example, if you enter 3, your data is replicated across three different worker nodes in your Portworx cluster. To make your data highly available, use a multizone cluster and replicate your data across three worker nodes in different zones. You must have enough worker nodes to fulfill your replication requirement. For example, if you have two worker nodes but you specify three replicas, the creation of the PVC with this storage class fails.
      parameters.secure
      Specify whether you want to encrypt the data in your volume with IBM Key Protect. Choose between the following options.
      • true: Enter true to enable encryption for your Portworx volumes. To encrypt volumes, you must have an IBM Key Protect service instance and a Kubernetes secret that holds your customer root key. For more information about how to set up encryption for Portworx volumes, see Encrypting your Portworx volumes.
      • false: When you enter false, your Portworx volumes are not encrypted. If you don't specify this option, your Portworx volumes are not encrypted by default. You can choose to enable volume encryption in your PVC, even if you disabled encryption in your storage class. The setting that you make in the PVC takes precedence over the settings in the storage class.
      parameters.priority_io
      Enter the Portworx I/O priority that you want to request for your data. Available options are high, medium, and low. During the setup of your Portworx cluster, every disk is inspected to determine the performance profile of the device. The profile classification depends on the network bandwidth of your worker node and the type of storage device. Disks of SDS worker nodes are classified as high. If you manually attach disks to a virtual worker node, then these disks are classified as low due to the slower network speed that comes with virtual worker nodes.
      When you create a PVC with a storage class, the number of replicas that you specify in parameters.repl overrides the I/O priority. For example, if you specify three replicas that you want to store on high-speed disks but you have only one worker node with a high-speed disk in your cluster, your PVC creation still succeeds. Your data is replicated across both high-speed and low-speed disks.
      parameters.shared
      Define whether you want to allow multiple pods to access the same volume. Choose between the following options:
      • true: If you set this option to true, the same volume can be accessed by multiple pods, even when the pods are distributed across worker nodes in different zones.
      • false: If you set this option to false, the volume can be accessed only by pods that run on the worker node to which the physical disk that backs the volume is attached. Pods on other worker nodes can't access the volume.
      A filled-in example storage class follows these parameter descriptions.
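
      For illustration, a filled-in storage class that keeps three replicas of your data on high-priority disks and allows shared access might look like the following sketch; the name px-high-repl-sc is a hypothetical example, and encryption is left disabled.

      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata:
        name: px-high-repl-sc        # hypothetical storage class name
      provisioner: kubernetes.io/portworx-volume
      parameters:
        repl: "3"                    # keep 3 copies of the data on different worker nodes
        secure: "false"              # do not encrypt the volumes
        priority_io: "high"          # request high-performance disks
        shared: "true"               # allow access from pods on different worker nodes
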
    2. Create the storage class.

      oc apply -f storageclass.yaml
      
    3. Verify that the storage class is created.

      oc get storageclasses
      
  3. Create a persistent volume claim (PVC).

    1. Create a configuration file for your PVC.

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: mypvc
      spec:
        accessModes:
          - <access_mode>
        resources:
          requests:
            storage: <size>
        storageClassName: portworx-shared-sc
      
      metadata.name
      Enter a name for your PVC, such as mypvc.
      spec.accessModes
      Enter the Kubernetes access mode that you want to use.
      spec.resources.requests.storage
      Enter the amount of storage in gigabytes that you want to assign from your Portworx cluster. For example, to assign 2 gigabytes from your Portworx cluster, enter 2Gi. The amount of storage that you can specify is limited by the amount of storage that is available in your Portworx cluster. If you specified a replication factor in your storage class higher than 1, then the amount of storage that you specify in your PVC is reserved on multiple worker nodes.
      spec.storageClassName
      Enter the name of the storage class that you chose or created earlier and that you want to use to provision your PV. The example YAML file uses the portworx-shared-sc storage class.
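
      For illustration, a PVC that requests 2 Gi of storage from the portworx-shared-sc storage class might look like the following sketch; the ReadWriteMany access mode is an assumption that fits a shared volume.

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: mypvc
      spec:
        accessModes:
          - ReadWriteMany            # assumed access mode for a shared Portworx volume
        resources:
          requests:
            storage: 2Gi             # 2 gigabytes from the Portworx storage layer
        storageClassName: portworx-shared-sc
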
    2. Create your PVC.

      oc apply -f pvc.yaml
      
    3. Verify that your PVC is created and bound to a persistent volume (PV). This process might take a few minutes.

      oc get pvc
      

Mounting the volume to your app

To access the storage from your app, you must mount the PVC to your app.

  1. Create a configuration file for a deployment that mounts the PVC.

    For tips on how to deploy a stateful set with Portworx, see StatefulSets. The Portworx documentation also includes examples for how to deploy Cassandra, Kafka, ElasticSearch with Kibana, and WordPress with MySQL.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: <deployment_name>
      labels:
        app: <deployment_label>
    spec:
      selector:
        matchLabels:
          app: <app_name>
      template:
        metadata:
          labels:
            app: <app_name>
        spec:
          schedulerName: stork
          securityContext:
            fsGroup: <group_ID>
          containers:
          - image: <image_name>
            name: <container_name>
            volumeMounts:
            - name: <volume_name>
              mountPath: /<file_path>
          volumes:
          - name: <volume_name>
            persistentVolumeClaim:
              claimName: <pvc_name>
    
    metadata.labels.app
    A label for the deployment.
    spec.selector.matchLabels.app and spec.template.metadata.labels.app
    A label for your app. The selector value and the pod template label must match so that the deployment can manage its pods.
    spec.schedulerName
    Use Stork as the scheduler for your Portworx cluster. With Stork, you can co-locate pods with their data, migrate pods seamlessly if storage errors occur, and create and restore snapshots of Portworx volumes more easily.
    spec.containers.image
    The name of the image that you want to use. To list available images in your IBM Cloud Container Registry account, run ibmcloud cr image-list.
    spec.containers.name
    The name of the container that you want to deploy to your cluster.
    spec.securityContext.fsGroup
    Optional: To access your storage with a non-root user, specify the security context for your pod and define the set of users that you want to grant access in the fsGroup section on your deployment YAML. For more information, see Accessing Portworx volumes with a non-root user.
    spec.containers.volumeMounts.mountPath
    The absolute path of the directory where the volume is mounted inside the container. If you want to share a volume between different apps, you can specify volume subpaths for each of your apps.
    spec.containers.volumeMounts.name
    The name of the volume to mount to your pod.
    volumes.name
    The name of the volume to mount to your pod. Typically, this name is the same as volumeMounts.name.
    volumes.persistentVolumeClaim.claimName
    The name of the PVC that binds the PV that you want to use.
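
    For illustration, a deployment that mounts mypvc into an NGINX container at /volumemount might look like the following sketch; the deployment name, labels, and image are hypothetical, while the volume name myvol, the mount path, and the claim name match the example output in the verification step later in this section.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: px-test-app              # hypothetical deployment name
      labels:
        app: px-test-app
    spec:
      selector:
        matchLabels:
          app: px-test-app
      template:
        metadata:
          labels:
            app: px-test-app
        spec:
          schedulerName: stork       # schedule the pod with Stork
          containers:
          - image: nginx             # hypothetical app image
            name: nginx
            volumeMounts:
            - name: myvol
              mountPath: /volumemount
          volumes:
          - name: myvol
            persistentVolumeClaim:
              claimName: mypvc       # the PVC that you created earlier
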
  2. Create your deployment.

    oc apply -f deployment.yaml
    
  3. Verify that the PV is successfully mounted to your app.

    oc describe deployment <deployment_name>
    

    The mount point is in the Volume Mounts field and the volume is in the Volumes field.

    Volume Mounts:
            /var/run/secrets/kubernetes.io/serviceaccount from default-token-tqp61 (ro)
            /volumemount from myvol (rw)
    ...
    Volumes:
        myvol:
        Type:    PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:    mypvc
        ReadOnly:    false
    
  4. Verify that you can write data to your Portworx cluster.

    1. Log in to the pod that mounts your PV.
      oc exec -it <pod_name> -- bash
      
    2. Navigate to your volume mount path that you defined in your app deployment.
    3. Create a text file.
      echo "This is a test" > test.txt
      
    4. Read the file that you created.
      cat test.txt