Provisioning File Storage for Classic for use as a VMware datastore

This tutorial guides you through the steps of ordering and configuring IBM Cloud® File Storage for Classic in a vSphere environment at IBM Cloud®. File Storage for Classic is designed to support high I/O applications that require predictable levels of performance. The predictable performance is achieved through the allocation of protocol-level input/output operations per second (IOPS) to individual volumes.

If you require more than eight hosts to access your VMware® datastore, then choosing NFS File Storage for Classic is the best practice.

The File Storage for Classic offering is accessed and mounted through an NFS connection. In a VMware® deployment, a single volume can be mounted to up to 64 ESXi hosts as shared storage. You can also mount multiple volumes to create a storage cluster to use vSphere Storage Distributed Resource Scheduler (DRS).

You can also familiarize yourself with the VMware vSphere 8.0 - NFS Datastore Concepts and Operations in vSphere Environment.

Before you begin

Ordering considerations

File Storage for Classic is priced based on a combination of the reserved space and the provisioned IOPS.

When you order File Storage for Classic, consider the following information:

  • When you decide on the size, consider the size of the workload and the throughput needed. Size matters with the Endurance service, which scales performance linearly in relation to capacity (IOPS/GB). Conversely, the Performance service allows the administrator to choose capacity and performance independently. Throughput requirements matter with Performance.

    The throughput calculation is IOPS x 16 KB. IOPS is measured based on a 16-KB block size with a 50-50 read/write mix. For example, a volume that is provisioned with 4,000 IOPS can sustain a throughput of about 4,000 x 16 KB = 64,000 KB/s (roughly 62.5 MB/s). Increasing the block size increases the throughput per operation but decreases the IOPS. For example, doubling the block size to 32-KB blocks maintains the maximum throughput but halves the IOPS.

  • NFS uses many extra file control operations, such as lookup, getattr, and readdir. These operations, in addition to read and write operations, can count as IOPS, and their cost varies by operation type and NFS version.

  • Both NFSv3 and NFSv4.1 are supported in the IBM Cloud® environment. However, NFSv3 is preferred because of its file locking mechanism. NFSv4.1 must quiesce all operations and then complete lock reclamation, so protocol issues can occur during network events.

    You can't use different NFS versions to mount the same datastore on multiple hosts. Because NFS 3 and NFS 4.1 clients don't use the same locking protocol, accessing the same virtual disks from two incompatible clients might result in incorrect behavior and cause data corruption. For more information, see NFS File Locking.

  • File Storage for Classic volumes are accessible only to authorized devices, subnets, or IP addresses.

  • To avoid storage disconnection during path failover, IBM® recommends installing VMware® Tools, which sets an appropriate timeout value. Don't change the value; the default setting is sufficient to make sure that your VMware® host doesn't lose connectivity.

  • File Storage for Classic allows administrators to set snapshot schedules that create and delete snapshot copies automatically for each storage volume. You can also create extra snapshot schedules (hourly, daily, weekly) for automatic snapshots and manually create ad hoc snapshots for business continuity and disaster recovery (BCDR) scenarios (see the CLI sketch after this list). Automatic alerts are delivered through the IBM Cloud® console to the volume owner for the retained snapshots and space used. Snapshot space is required to use snapshots, and it can be purchased with the initial volume order or after the initial provisioning. Restoring a File Storage for Classic volume requires powering off all the VMs, and the volume needs to be temporarily unmounted from the ESXi host to avoid any data corruption during the process. For more information, see the snapshots article.

    VMware® environments are not aware of snapshots. The File Storage for Classic snapshot capability must not be confused with VMware® snapshots. Any recovery that uses the File Storage for Classic snapshot feature must be handled from the IBM Cloud® console.
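
The snapshot schedules that are described in this list can also be managed from the CLI. The following lines are a minimal sketch: the volume ID, schedule type, and retention values are placeholder assumptions, and the exact flags depend on your version of the IBM Cloud CLI classic infrastructure (sl) plug-in.

# Enable a daily snapshot schedule on a volume (volume ID and values are placeholders).
ibmcloud sl file snapshot-enable 123456789 --schedule-type DAILY --hour 2 --minute 30 --retention-count 7

# Take an ad hoc snapshot of the same volume, for example before maintenance.
ibmcloud sl file snapshot-create 123456789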

Ordering required resources

Review Attached storage infrastructure design and follow the instructions in the Advanced Single-Site VMware® Reference Architecture to provision and configure your VMware environment.

File Storage for Classic can be ordered through the IBM Cloud® catalog, from the CLI, with the API, or with Terraform. For more information, see Ordering File Storage for Classic.
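
For example, a Performance volume order from the CLI looks similar to the following sketch. The data center, size, and IOPS values are placeholders, and the exact flags depend on your version of the IBM Cloud CLI classic infrastructure (sl) plug-in.

# Order a 4-TB Performance volume with 4,000 IOPS in dal09 (example values only).
ibmcloud sl file volume-order --storage-type performance --size 4000 --iops 4000 --datacenter dal09

# Confirm the new volume and note its ID for the later authorization steps.
ibmcloud sl file volume-list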

Authorizing hosts

You can create the authorization in the UI, from the CLI, with the API, or with Terraform.
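
For example, from the CLI the authorization looks similar to the following sketch. The volume ID and hardware ID are placeholders; use the IDs of your own volume and ESXi hosts.

# Authorize a bare metal ESXi host (by its hardware ID) to access the volume.
ibmcloud sl file access-authorize 123456789 --hardware-id 987654

# List the devices that are currently authorized to access the volume.
ibmcloud sl file access-list 123456789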

Configuring the VMware virtual machine host

Before you begin the configuration process, make sure that the following requirements are met:

  • IBM Cloud® Bare Metal Servers with VMware® ESXi are provisioned with proper storage configuration and ESXi login credentials.
  • An IBM Cloud® Windows physical server or Virtual Servers instance is available in the same data center as the IBM Cloud® Bare Metal Servers. Make sure that you know the public IP address of the IBM Cloud® Windows server and the login credentials.
  • You have a computer with internet access, a web browser, and a Remote Desktop Protocol (RDP) client installed.

Connecting to vCenter

  1. From an internet-connected computer, start an RDP client and establish an RDP session to the IBM Cloud® Virtual Servers instance that is provisioned in the same data center where vSphere vCenter is installed.
  2. From the Virtual Servers instance, start a web browser and connect to VMware® vCenter through the vSphere Web Client.

Confirming the firewall settings

To enable access to NFS storage, ESXi automatically opens firewall ports for the NFS clients when you mount an NFS datastore. For troubleshooting reasons, you might need to verify that the ports are open.

  1. In the vSphere Client, select the ESXi host.
  2. Go to Manage > Settings > Security Profile and click Edit.
  3. Scroll down to an appropriate version of NFS to make sure that the port is open.
    The Edit Security Profile window shows the NFS Client service selected, with connections allowed from any IP address.

For more information, see the VMware vSphere 8.0 - Configuring ESXi Firewall and VMware vSphere 8.0 - NFS Client Firewall Behavior.
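
You can also confirm the firewall state from the ESXi shell. The following sketch assumes SSH access to the host; the ruleset that applies depends on the NFS version that you use.

# List the firewall rulesets and check that the NFS client ruleset is enabled.
esxcli network firewall ruleset list | grep -i nfs

# Enable the NFS 4.1 client ruleset if it is not already enabled (NFSv3 uses the nfsClient ruleset).
esxcli network firewall ruleset set --ruleset-id nfs41Client --enabled true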

Configuring Jumbo frame settings

  1. Configure jumbo frames from the ESXi host by going to the Manage tab and then selecting Networking.
  2. Select VMkernel adapters, highlight the vSwitch, and then click Edit (pencil icon).
  3. Select the NIC setting, and make sure that the NIC MTU is set to 9000.
  4. Optional. Validate the jumbo frame settings.
    • Windows

      ping -f -l 8972 a.b.c.d
      
    • UNIX

      ping -s 8972 a.b.c.d
      

      The value a.b.c.d is the neighboring Virtual Servers interface.

      Example

      ping a.b.c.d (a.b.c.d) 8972(9000) bytes of data.
      8980 bytes from a.b.c.d: icmp_seq=1 ttl=128 time=3.36 ms
      

For more information, see VMware vSphere 8.0 - Enabling Jumbo Frames.
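
You can also check or set the MTU from the ESXi shell. In this sketch, vSwitch1 and vmk1 are assumed names for the virtual switch and VMkernel interface that carry NFS traffic; substitute your own.

# Show the current MTU of the VMkernel interfaces.
esxcli network ip interface list

# Set an MTU of 9000 on the standard switch and on the VMkernel interface (example names).
esxcli network vswitch standard set --vswitch-name vSwitch1 --mtu 9000
esxcli network ip interface set --interface-name vmk1 --mtu 9000

# Validate jumbo frames from the host; a.b.c.d is a neighboring interface.
vmkping -d -s 8972 a.b.c.d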

Adding an uplink adapter to a virtual switch

  1. In the vSphere Client, go to the host.
  2. On the Configure tab, expand Networking, and select Virtual switches.
  3. Select the virtual switch that you want to add a physical adapter to.
  4. Click Manage physical adapters.
  5. Add one or more available physical network adapters to the switch.
  6. Click Add adapters, select one or more network adapters from the list and click OK.
  7. The selected adapters appear in the failover group list under the Assigned adapters list.
  8. Use the up and down arrows to change the position of an adapter in the failover groups. The failover group determines the role of the adapter for exchanging data with the external network, that is, active, standby or unused. By default, the adapters are added as active to the standard switch.
    The Add physical adapters to the switch screen shows three active network adapters already in the Assigned adapters list.
  9. Click OK to apply the physical adapter configuration.
  10. Return to Virtual switches, and click Edit settings.
  11. Expand NIC teaming.
  12. Verify that the Load-balancing option is set to Route based on the originating virtual port and click OK.

For more information, see vSphere Distributed Switch and VMware vSphere 8.0 - Edit Virtual Switch Settings in the VMware Host Client.
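
The same uplink and teaming configuration can be applied from the ESXi shell, which is useful when you script host preparation. The switch and adapter names in this sketch (vSwitch1, vmnic2) are assumptions; substitute your own.

# Add a physical adapter as an uplink to a standard switch.
esxcli network vswitch standard uplink add --uplink-name vmnic2 --vswitch-name vSwitch1

# Verify the failover policy, including the load-balancing setting.
esxcli network vswitch standard policy failover get --vswitch-name vSwitch1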

Configuring static routing (Optional)

If you have a VMkernel port group for NFS storage, extra steps must be taken. By default, ESXi mounts an NFS volume through the VMkernel port that is on the same subnet as the volume. When layer 3 routing is used to reach the NFS volume instead, ESXi must be forced to use the VMkernel port that you configured for NFS traffic. To use the correct port, create a static route to the storage array.

For more information, see vSphere host static routing, and VMware vSphere 8.0 - Configure VMkernel Binding for NFS 3 Datastores.

To configure a static route, SSH to each ESXi host that uses Performance or Endurance storage and run the following commands. Take note of the IP address that is the result of the ping command and use it with the esxcli network command.

ping <hostname of the storage array>

The NFS storage DNS hostname is a Forwarding Zone (FZ) that has multiple IP addresses assigned to it. These IP addresses are static and belong to that specific DNS hostname. Any of those IP addresses can be used to access a specific volume.

esxcli network ip route ipv4 add --gateway GATEWAYIP --network <result of ping command>/32

The same IP address can be used for mounting the volume in the next step. This process needs to be done for each NFS share that you plan to mount to your ESXi host. For more information, see the VMware® KB article, Configuring static routes for VMkernel ports on an ESXi host.
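
As a worked example, assume that the ping of the storage hostname returns 10.2.125.80 and that 10.99.99.1 is the gateway of the VMkernel port group subnet that carries your NFS traffic. Both values are placeholders.

# Resolve the storage array hostname to one of its IP addresses.
ping nfsdal0902a-fz.service.softlayer.com

# Force traffic to that IP address through the NFS VMkernel port's gateway.
esxcli network ip route ipv4 add --gateway 10.99.99.1 --network 10.2.125.80/32

# Confirm that the static route is in place.
esxcli network ip route ipv4 list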

Configuring Advanced ESXi host-side settings

Configure the Advanced settings that are required for ESXi hosts that mount NFS storage.

  1. Review the table of Advanced configuration parameters.
  2. Follow the steps to configure these advanced settings in the vSphere Client or from the vSphere PowerCLI as described in Broadcom's Configuring advanced options for ESXi. An esxcli sketch follows this list.
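
From the ESXi shell, the same advanced options can be read and set with esxcli. The option names in this sketch are standard ESXi NFS and network parameters, but the values are placeholders; use the values from the referenced parameter table.

# Show the current value of an advanced NFS option.
esxcli system settings advanced list --option /NFS/MaxVolumes

# Set advanced options (example values only; apply the values from the parameter table).
esxcli system settings advanced set --option /NFS/MaxVolumes --int-value 256
esxcli system settings advanced set --option /Net/TcpipHeapMax --int-value 512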

Creating the VMware® datastore

IBM Cloud® recommends using FQDNs to connect to the VMware® datastore. Using direct IP addressing might bypass the load-balancing mechanism that is provided by using the FQDN.

If you want to use the IP address instead of the FQDN, ping the server to obtain the IP address.

ping <hostname of the storage array>

To obtain the IP address from an ESXi host, use vmkping as shown in the following example.

~ # vmkping nfsdal0902a-fz.service.softlayer.com
PING nfsdal0902a-fz.service.softlayer.com (10.2.125.80): 56 data bytes
64 bytes from 10.2.125.80: icmp_seq=0 ttl=253 time=0.187 ms

Creating the NFS datastore

  1. Click the Go to vCenter icon, and then Hosts and Clusters.

  2. On the Related Objects tab, click Datastores.

  3. Click the Create a new datastore icon.

  4. On the New Datastore screen, select the location of the VMware® datastore and click Next.

  5. On the Type screen, select NFS, and click Next.

  6. Then, select the NFS version. Both NFSv3 and NFSv4.1 are supported, but NFSv3 is preferred.

    Make sure that you use only one NFS version to access the datastore. Mounting the same datastore on multiple hosts by using different NFS versions can result in data corruption.

  7. On the Name and configuration screen, enter the name that you want to call the VMware datastore. Additionally, enter the hostname of the NFS server. Using the FQDN for the NFS server produces the best traffic distribution to the underlying server. IP address is also valid but is used less frequently and only in specific instances. Enter the folder name in the form of /foldername.

  8. On the Host accessibility screen, select one or more hosts where you want to mount the NFS VMware® datastore and click Next.

  9. Review the inputs on the next screen and click Finish.

  10. Repeat for any additional File Storage for Classic volumes.

For more information, see VMware vSphere 8.0 - Creating a Datastore Cluster in vSphere and VMware vSphere 8.0 - Creating vSphere Datastores.
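
Alternatively, an NFSv3 datastore can be mounted from the ESXi shell. The hostname, export path, and datastore name in this sketch are placeholders; use the mount point details from your File Storage for Classic volume.

# Mount an NFSv3 export as a datastore (hostname, share path, and datastore name are placeholders).
esxcli storage nfs add --host nfsdal0902a-fz.service.softlayer.com --share /IBM01SV1234567_1 --volume-name NFS-Datastore01

# List the mounted NFS datastores to confirm the mount.
esxcli storage nfs list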

Enabling ESXi Storage I/O Control (Optional)

Storage I/O Control (SIOC) is a feature that is available to customers who use an Enterprise Plus license. When SIOC is enabled in the environment, it changes the device queue length for individual VMs. The change to the device queue length reduces the storage array queue for all VMs to an equal share. SIOC engages only if resources are constrained and the storage I/O latency is over a defined threshold.

For SIOC to determine when a storage device is congested or constrained, it requires a defined threshold. The congestion threshold latency differs for different storage types. The default selection is 90% of peak throughput. The percentage of peak throughput value indicates the estimated latency threshold when the VMware® datastore is using that percentage of its estimated peak throughput.

Incorrectly configuring SIOC for a VMware® datastore or for a VMDK can significantly impact performance.

For more information, see Attached storage infrastructure design and Configuration and settings for attached storage.

Configuring Storage I/O Control for a VMware datastore

  1. Browse to the VMware® datastore in the vSphere Web Client navigator.
  2. Click the Manage tab.
  3. Click Settings and click General.
  4. Click Edit for Datastore Capabilities.
  5. Select the Enable Storage I/O Control checkbox.
    The Configure Storage I/O Control window for the NFS VMware® datastore shows the Enable Storage I/O Control option selected.
  6. Click OK.

For more information about how to Enable Storage I/O Control, see VMware vSphere 8.0 - Manage Storage I/O Resources with vSphere.

This setting is specific to the VMware® datastore and not to the host.

Configuring Storage I/O Control for Virtual Servers

You can limit individual virtual disks for individual VMs or grant them different shares with SIOC. By limiting disks and granting different shares, you can align the environment and its workload with the IOPS that you purchased for the IBM Cloud® File Storage for Classic volume. The limit is set in IOPS, and you can also assign a different weight, or number of shares, to each disk.

Virtual disk shares that are set to High (2,000 shares) receive twice as much I/O as a disk that is set to Normal (1,000 shares) and four times as much as a disk that is set to Low (500 shares). Normal is the default value for all VMs, so you need to adjust the setting only for the VMs that require it.

For more information, see Storage I/O Control for NFS v3.

  1. Browse to the virtual machine in the vSphere Client.
    1. To find a virtual machine, select a data center, folder, cluster, resource pool, or host.
    2. Click the VMs tab.
  2. Right-click the virtual machine and click Edit Settings.
  3. Click the Virtual Hardware tab and select a virtual hard disk from the list. Expand Hard disk.
  4. Select a VM storage policy from the menu. If you select a storage policy, do not manually configure Shares and Limit - IOPS.
  5. Under Shares, click the menu and select the relative number of shares to allocate to the virtual machine (Low, Normal, or High). You can select Custom to enter a user-defined shares value.
  6. Under Limit - IOPS, click the drop-down menu and enter the maximum limit of storage resources to allocate to the virtual machine. By default, IOPS is unlimited.
  7. Click OK.

For more information about how to Set Storage I/O Control Resource Shares and Limits, see VMware vSphere 8.0 - Manage Storage I/O Resources with vSphere.

This process can be used to set the resource consumption limits of individual vDisks in a Virtual Servers instance even when SIOC is not enabled. These settings are specific to the individual guest, not the host, although they are used by SIOC.