IBM Cloud Docs
Planning your Portworx setup

Before you create your cluster and install Portworx, review the following planning steps.

Limitations

Review the following Portworx limitations.

Portworx limitations

Classic clusters
  Pod restart required when you add worker nodes. Because Portworx runs as a DaemonSet in your cluster, existing worker nodes are automatically inspected for raw block storage and added to the Portworx data layer when you deploy Portworx. If you add or update worker nodes and add raw block storage to those workers, restart the Portworx pods on the new or updated worker nodes so that the DaemonSet detects your storage volumes.
VPC clusters
  Storage volume reattachment required when you update worker nodes. When you update a worker node in a VPC cluster, the worker node is removed from your cluster and replaced with a new worker node. If Portworx volumes are attached to the replaced worker node, you must attach those volumes to the new worker node by using the API or the CLI. This limitation does not apply to Portworx deployments that use cloud drives.
Experimental InitializerConfiguration
  IBM Cloud Kubernetes Service does not support the Portworx experimental InitializerConfiguration admission controller.
Private clusters
  To install Portworx in a cluster that doesn't have VRF or access to private cloud service endpoints (CSEs), you must create a rule in the default security group to allow inbound and outbound traffic for the following IP addresses: 166.9.24.81, 166.9.22.100, and 166.9.20.178. For more information, see Updating the default security group.
Portworx Backup
  Portworx Backup is not supported for Satellite clusters.
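For the classic cluster limitation, you can trigger the rescan by deleting the Portworx pod on the new or updated worker node so that the DaemonSet re-creates it. This is a sketch only: the worker node name is a placeholder, and the `name=portworx` pod label is an assumption that you should verify against your own deployment.

```shell
# Find the Portworx pod that runs on the new worker node (node name is a placeholder).
kubectl get pods -n kube-system -l name=portworx -o wide \
  --field-selector spec.nodeName=<worker_node_name>

# Delete that pod. The DaemonSet re-creates it, and the new pod
# inspects the worker node for raw block storage.
kubectl delete pod -n kube-system -l name=portworx \
  --field-selector spec.nodeName=<worker_node_name>
```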

Overview of the Portworx lifecycle

  1. Create a multizone cluster.
    1. Infrastructure provider: For Satellite clusters, make sure to add block storage volumes to your hosts before attaching them to your location. If you use classic infrastructure, you must choose a bare metal flavor for the worker nodes. For classic clusters, virtual machines have only 1000 Mbps of networking speed, which is not sufficient to run production workloads with Portworx. Instead, provision Portworx on bare metal machines for the best performance.
    2. Worker node flavor: Choose an SDS or bare metal flavor. If you want to use virtual machines, use a worker node with 16 vCPU and 64 GB memory or more, such as b3c.16x64. Virtual machines with a flavor of b3c.4x16 or u3c.2x4 don't provide the required resources for Portworx to work properly.
    3. Minimum number of workers: Two worker nodes per zone across three zones, for a minimum total of six worker nodes.
  2. VPC and non-SDS classic worker nodes only: Create raw, unformatted, and unmounted block storage.
  3. For production workloads, create an external Databases for etcd instance for your Portworx metadata key-value store.
  4. Optional: Set up encryption.
  5. Install Portworx.
  6. Maintain the lifecycle of your Portworx deployment in your cluster.
    1. When you update worker nodes in VPC clusters, you must reattach your Portworx volumes to the replacement worker nodes by using the API or the CLI.
    2. To remove a Portworx volume, storage node, or the entire Portworx cluster, see Portworx cleanup.
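As a sketch of the VPC volume reattachment step, the `ibmcloud ks storage attachment` commands can attach an existing block storage volume to a worker node. The cluster, volume, and worker IDs are placeholders, and the exact flags may differ by CLI version, so verify them with `ibmcloud ks storage attachment create --help`.

```shell
# Attach the Portworx block storage volume to the replacement worker node
# (all IDs are placeholders).
ibmcloud ks storage attachment create --cluster <cluster_name_or_ID> \
  --volume <volume_ID> --worker <new_worker_ID>

# Verify the attachment on the new worker node.
ibmcloud ks storage attachment ls --cluster <cluster_name_or_ID> \
  --worker <new_worker_ID>
```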

Creating a secret to store the KMS credentials

Before you begin: Set up encryption

  1. Encode the credentials that you retrieved in the previous section to base64 and note all the base64 encoded values. Repeat this command for each parameter to retrieve the base64 encoded value.
    echo -n "<value>" | base64
    
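    Instead of repeating the command, you can encode all four values in one pass with a small loop. The placeholder values are hypothetical; substitute the credentials that you retrieved earlier.

```shell
# Encode each credential to base64 (placeholders; substitute your real values).
# The -n flag prevents a trailing newline from being encoded into the value.
for value in "<api_key>" "<service_instance_guid>" "<root_key>" "<endpoint>"; do
  echo -n "$value" | base64
done
```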
  2. Create a namespace in your cluster called portworx.
    kubectl create ns portworx
    
  3. Create a Kubernetes secret named px-ibm in the portworx namespace of your cluster to store your IBM Key Protect information.
    1. Create a configuration file for your Kubernetes secret with the following content.

      apiVersion: v1
      kind: Secret
      metadata:
        name: px-ibm
        namespace: portworx
      type: Opaque
      data:
        IBM_SERVICE_API_KEY: <base64_apikey>
        IBM_INSTANCE_ID: <base64_guid>
        IBM_CUSTOMER_ROOT_KEY: <base64_rootkey>
        IBM_BASE_URL: <base64_endpoint>
      
      metadata.name
      Enter px-ibm as the name for your Kubernetes secret. If you use a different name, Portworx does not recognize the secret during installation.
      data.IBM_SERVICE_API_KEY
      Enter the base64 encoded IBM Key Protect or Hyper Protect Crypto Services API key that you retrieved earlier.
      data.IBM_INSTANCE_ID
      Enter the base64 encoded service instance GUID that you retrieved earlier.
      data.IBM_CUSTOMER_ROOT_KEY
      Enter the base64 encoded root key that you retrieved earlier.
      data.IBM_BASE_URL
      IBM Key Protect: Enter the base64 encoded API endpoint of your service instance.
      Hyper Protect Crypto Services: Enter the base64 encoded Key Management public endpoint.
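    As an alternative to writing the YAML file by hand, `kubectl create secret generic` base64-encodes literal values for you. This sketch uses placeholder values; pass the raw credentials, not the base64 encoded ones.

```shell
# Create the px-ibm secret directly; kubectl handles the base64 encoding.
kubectl create secret generic px-ibm -n portworx \
  --from-literal=IBM_SERVICE_API_KEY='<api_key>' \
  --from-literal=IBM_INSTANCE_ID='<service_instance_guid>' \
  --from-literal=IBM_CUSTOMER_ROOT_KEY='<root_key>' \
  --from-literal=IBM_BASE_URL='<endpoint>'
```

    If you use this command, you can skip the `kubectl apply -f secret.yaml` step that follows.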
    2. Create the secret in the portworx namespace of your cluster.

      kubectl apply -f secret.yaml
      
    3. Verify that the secret is created successfully.

      kubectl get secrets -n portworx
      
  4. If you set up encryption before you installed Portworx, you can now install Portworx in your cluster. To add encryption after you installed Portworx, update the Portworx DaemonSet to add "-secret_type" and "ibm-kp" as additional options to the Portworx container definition.
    1. Update the Portworx DaemonSet.

      kubectl edit daemonset portworx -n kube-system
      

      Example updated DaemonSet

      containers:
      - args:
      - -c
      - testclusterid
      - -s
      - /dev/sdb
      - -x
      - kubernetes
      - -secret_type
      - ibm-kp
      name: portworx
      

      After you edit the DaemonSet, the Portworx pods are restarted and automatically update the config.json file on the worker node to reflect that change.

    2. List the Portworx pods in your kube-system namespace.

      kubectl get pods -n kube-system | grep portworx
      
    3. Log in to one of your Portworx pods.

      kubectl exec -it <pod_name> -n kube-system -- /bin/sh
      
    4. Navigate to the /etc/pwx directory.

      cd /etc/pwx
      
    5. Review the config.json file to verify that "secret_type": "ibm-kp" is added to the secret section of your CLI output.

      cat config.json
      

      Example output

      {
      "alertingurl": "",
      "clusterid": "px-kp-test",
      "dataiface": "",
      "kvdb": [
        "etcd:https://portal-ssl748-34.bmix-dal-yp-12a2312v5-123a-44ac-b8f7-5d8ce1d123456.123456789.composedb.com:56963",
        "etcd:https://portal-ssl735-35.bmix-dal-yp-12a2312v5-123a-44ac-b8f7-5d8ce1d123456.12345678.composedb.com:56963"
      ],
      "mgtiface": "",
      "password": "ABCDEFGHIJK",
      "scheduler": "kubernetes",
      "secret": {
          "cluster_secret_key": "",
          "secret_type": "ibm-kp"
      },
      "storage": {
          "devices": [
        "/dev/sdc1"
          ],
          "journal_dev": "",
          "max_storage_nodes_per_zone": 0,
          "system_metadata_dev": ""
      },
      "username": "root",
      "version": "1.0"
      }
      
    6. Exit the pod.
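Steps 2 through 6 can also be collapsed into one non-interactive command that prints only the secret section of the configuration. The pod name is a placeholder taken from the pod list in step 2.

```shell
# Print the secret section of config.json without opening a shell in the pod.
kubectl exec <pod_name> -n kube-system -- grep -A 3 '"secret"' /etc/pwx/config.json
```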

Check out how to encrypt the secrets in your cluster, including the secret where you stored the Key Protect customer root key (CRK) for your Portworx storage cluster.