IBM Cloud Docs
Planning your Portworx setup

Before you create your cluster and install Portworx, review the following planning steps.

  • Decide where you want to store the Portworx metadata. You can use the internal key-value database (KVDB) or an external database instance. For more information, see Understanding the key-value store. To learn more about what the key-value store does, see the Portworx documentation.
  • Decide whether you want encryption. You can use Hyper Protect Crypto Services or IBM Key Protect. For more information, see Understanding encryption for Portworx.
  • Decide whether you want to use journal devices. Journal devices allow Portworx to write logs directly to a local disk on your worker node.
  • VPC or Satellite clusters only - Decide whether you want to use cloud drives. Cloud drives allow you to dynamically provision the Portworx volumes. If you don’t want to use cloud drives, you must manually attach volumes to worker nodes.
  • Review the Limitations.
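Several of these decisions surface later as arguments to the Portworx container. The following sketch of a container-args excerpt assumes a DaemonSet-based install; the cluster ID, etcd endpoint, and device paths are all placeholders:

```yaml
# Excerpt of a Portworx container definition (all values are placeholders)
- args:
  - -c
  - my-px-cluster            # cluster ID
  - -k
  - etcd://<etcd_host>:2379  # external key-value store; use -b instead for the internal KVDB
  - -s
  - /dev/sdb                 # raw block storage device
  - -j
  - /dev/sdc                 # optional journal device
  - -x
  - kubernetes
```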


Review the following Portworx limitations.

Portworx limitations

  • Classic clusters: Pod restart required when adding worker nodes. Because Portworx runs as a DaemonSet in your cluster, existing worker nodes are automatically inspected for raw block storage and added to the Portworx data layer when you deploy Portworx. If you add or update worker nodes in your cluster and add raw block storage to those workers, restart the Portworx pods on the new or updated worker nodes so that the DaemonSet detects your storage volumes.
  • VPC clusters: Storage volume reattachment required when updating worker nodes. When you update a worker node in a VPC cluster, the worker node is removed from your cluster and replaced with a new worker node. If Portworx volumes are attached to the worker node that is replaced, you must attach the volumes to the new worker node. You can attach storage volumes with the API or the CLI. Note that this limitation does not apply to Portworx deployments that use cloud drives.
  • InitializerConfiguration: Red Hat OpenShift on IBM Cloud does not support the Portworx experimental InitializerConfiguration admission controller.
  • Private clusters: To install Portworx in a cluster that doesn't have VRF or access to private cloud service endpoints (CSEs), you must create a rule in the default security group to allow inbound and outbound traffic for the required IP addresses. For more information, see Updating the default security group.
  • Portworx Backup: Portworx Backup is not supported for Satellite clusters.

Overview of the Portworx lifecycle

  1. Create a multizone cluster.
    1. Infrastructure provider: For Satellite clusters, make sure to add block storage volumes to your hosts before attaching them to your location. If you use classic infrastructure, you must choose a bare metal flavor for the worker nodes. For classic clusters, virtual machines have only 1000 Mbps of networking speed, which is not sufficient to run production workloads with Portworx. Instead, provision Portworx on bare metal machines for the best performance.
    2. Worker node flavor: Choose an SDS or bare metal flavor. If you want to use virtual machines, use a worker node with 8 vCPU and 8 GB memory or more.
    3. Minimum number of workers: Two worker nodes per zone across three zones, for a minimum total of six worker nodes.
  2. VPC and non-SDS classic worker nodes only: Create raw, unformatted, and unmounted block storage.
  3. For production workloads, create an external Databases for etcd instance for your Portworx metadata key-value store.
  4. Optional: Set up encryption.
  5. Install Portworx.
  6. Maintain the lifecycle of your Portworx deployment in your cluster.
    1. When you update worker nodes in VPC clusters, you must take additional steps to re-attach your Portworx volumes. You can attach your storage volumes by using the API or CLI.
    2. To remove a Portworx volume, storage node, or the entire Portworx cluster, see Portworx cleanup.

Creating a secret to store the KMS credentials

Before you begin: Set up encryption

  1. Encode the credentials that you retrieved in the previous section to base64 and note all the base64 encoded values. Repeat this command for each parameter to retrieve the base64 encoded value.
    echo -n "<value>" | base64
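    For example, encoding a placeholder value (not a real API key) and decoding it back verifies the round trip:

```shell
# Encode a placeholder credential to base64 (substitute your real values)
apikey_b64=$(echo -n "my-api-key" | base64)
echo "$apikey_b64"                   # bXktYXBpLWtleQ==
# Decode to confirm the value survived the round trip
echo -n "$apikey_b64" | base64 -d    # my-api-key
```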
  2. Create a project in your cluster called portworx.
    oc create ns portworx
  3. Create a Kubernetes secret named px-ibm in the portworx project of your cluster to store your IBM Key Protect information.
    1. Create a configuration file for your Kubernetes secret with the following content.

      apiVersion: v1
      kind: Secret
      metadata:
        name: px-ibm
        namespace: portworx
      type: Opaque
      data:
        IBM_SERVICE_API_KEY: <base64_apikey>
        IBM_INSTANCE_ID: <base64_guid>
        IBM_CUSTOMER_ROOT_KEY: <base64_rootkey>
        IBM_BASE_URL: <base64_endpoint>
      • metadata.name: Enter px-ibm as the name for your Kubernetes secret. If you use a different name, Portworx does not recognize the secret during installation.
      • IBM_SERVICE_API_KEY: Enter the base64 encoded IBM Key Protect or Hyper Protect Crypto Services API key that you retrieved earlier.
      • IBM_INSTANCE_ID: Enter the base64 encoded service instance GUID that you retrieved earlier.
      • IBM_CUSTOMER_ROOT_KEY: Enter the base64 encoded root key that you retrieved earlier.
      • IBM_BASE_URL: For IBM Key Protect, enter the base64 encoded API endpoint of your service instance. For Hyper Protect Crypto Services, enter the base64 encoded Key Management public endpoint.
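      The whole file can also be generated in one step, with the base64 encoding done inline. This sketch uses placeholder credentials and an example Key Protect endpoint; substitute the values you retrieved earlier:

```shell
# Placeholder credentials -- replace with the values you retrieved earlier
APIKEY="my-api-key"
GUID="my-guid"
ROOTKEY="my-root-key"
ENDPOINT="https://us-south.kms.cloud.ibm.com"

# Write secret.yaml with each value base64 encoded inline
cat > secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: px-ibm
  namespace: portworx
type: Opaque
data:
  IBM_SERVICE_API_KEY: $(echo -n "$APIKEY" | base64)
  IBM_INSTANCE_ID: $(echo -n "$GUID" | base64)
  IBM_CUSTOMER_ROOT_KEY: $(echo -n "$ROOTKEY" | base64)
  IBM_BASE_URL: $(echo -n "$ENDPOINT" | base64)
EOF
```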
    2. Create the secret in the portworx project of your cluster.

      oc apply -f secret.yaml
    3. Verify that the secret is created successfully.

      oc get secrets -n portworx
  4. If you set up encryption before you installed Portworx, you can now install Portworx in your cluster. To add encryption to your cluster after you installed Portworx, update the Portworx DaemonSet to add "-secret_type" and "ibm-kp" as additional arguments to the Portworx container definition.
    1. Update the Portworx DaemonSet.

      oc edit daemonset portworx -n kube-system

      Example updated DaemonSet

      - args:
      - -c
      - testclusterid
      - -s
      - /dev/sdb
      - -x
      - kubernetes
      - -secret_type
      - ibm-kp
      name: portworx

      After you edit the DaemonSet, the Portworx pods are restarted and automatically update the config.json file on the worker node to reflect that change.
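      If you prefer a non-interactive change, the same edit can be expressed as a JSON patch file. This is a sketch that assumes portworx is the first container in the pod template; adjust the container index otherwise:

```json
[
  { "op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "-secret_type" },
  { "op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "ibm-kp" }
]
```

      Save the file as px-patch.json and apply it with oc patch daemonset portworx -n kube-system --type=json -p "$(cat px-patch.json)".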

    2. List the Portworx pods in your kube-system project.

      oc get pods -n kube-system | grep portworx
    3. Log in to one of your Portworx pods.

      oc exec -it <pod_name> -n kube-system -- /bin/bash
    4. Navigate to the pwx directory.

      cd /etc/pwx
    5. Review the config.json file to verify that "secret_type": "ibm-kp" is added to the secret section of your CLI output.

      cat config.json

      Example output

      "alertingurl": "",
      "clusterid": "px-kp-test",
      "dataiface": "",
      "kvdb": [
      "mgtiface": "",
      "password": "ABCDEFGHIJK",
      "scheduler": "kubernetes",
      "secret": {
          "cluster_secret_key": "",
          "secret_type": "ibm-kp"
      "storage": {
          "devices": [
          "journal_dev": "",
          "max_storage_nodes_per_zone": 0,
          "system_metadata_dev": ""
      "username": "root",
      "version": "1.0"
    6. Exit the pod.

Next, learn how to encrypt the secrets in your cluster, including the secret that stores your Key Protect customer root key (CRK) for your Portworx storage cluster.