IBM Cloud Docs
Mounting iSCSI in VMware ESXi

Mounting an iSCSI volume in VMware® ESXi takes only a few steps and requires nothing more than the connection details of the host and the storage device.

In most VMware® environments, NFS File Storage for Classic is the better choice. File Storage for Classic is designed to support high I/O applications that require predictable levels of performance. In a VMware® deployment, a single File Storage for Classic volume can be mounted as shared storage to up to 64 ESXi hosts, compared with the 8 hosts that Block Storage for Classic supports by default. You can also mount multiple File Storage for Classic volumes to create a storage cluster and use vSphere Storage Distributed Resource Scheduler (DRS). For more information, see Attached Storage for vCenter Server architecture and Provisioning File Storage for use as a VMware datastore.

Before you begin, you can familiarize yourself with VMware vSphere 8.0 - Using ESXi with iSCSI SAN.

  1. Log in to the vSphere Client by using the host's primary private IP address, the root user, and the root password.
  2. From the Welcome page, click the Configuration tab > Storage adapters > Add.
  3. Click OK to add the Software iSCSI adapter and confirm by clicking OK again.
  4. After the refresh, the new iSCSI adapter is listed. Click Properties.
  5. From the Properties window, click Configure and set the Name to the IQN for the server, which is found on the storage device page under Authorized Hosts.
    • Alternatively, you can set the IQN by running the following command from the ESXi shell: esxcli iscsi adapter set -A $(esxcli iscsi adapter list | grep vmh | awk '{print$1}') -n $IQNFROMAUTHORIZEDHOSTSECTION.
  6. Click the Dynamic discovery tab then click Add....
  7. For the iSCSI Server, enter the Target IP address of the storage device. Click CHAP.
  8. Select Use CHAP and clear Inherit from parent. Enter the username and password that are found on the storage device page under Authorized Hosts.
  9. Select Do not use CHAP under the Mutual CHAP section, then click OK. The device now appears in the Dynamic discovery window; click Close.
  10. Confirm the rescan of the storage devices. The new device is now listed as grayed out and 'unmounted.'
  11. Right-click the device name and select Attach.
  12. Click Datastores in the left column, then click Add Storage and choose Disk/LUN.
  13. Select the device with the matching IQN.
  14. Choose the file system version that you want and click Next to continue through the wizard.
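If you prefer the command line, the GUI steps above can be sketched with esxcli from the ESXi shell. This is a sketch only, not the documented IBM Cloud procedure: the adapter name (vmhba64), the target IP address, the IQN, and the CHAP credentials below are placeholders; substitute the values from your storage device page under Authorized Hosts.

```shell
# Enable the software iSCSI adapter (steps 2-3).
esxcli iscsi software set --enabled=true

# Find the software iSCSI adapter name (for example, vmhba64).
esxcli iscsi adapter list

# Set the adapter IQN to the one listed under Authorized Hosts (step 5).
esxcli iscsi adapter set --adapter=vmhba64 --name=iqn.2005-05.com.example:host1

# Add the storage device's target IP for dynamic discovery (steps 6-7).
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.2.3.4:3260

# Configure one-way CHAP with the credentials from Authorized Hosts (steps 8-9).
esxcli iscsi adapter auth chap set --adapter=vmhba64 --direction=uni \
    --level=required --authname=IBM01U123456 --secret='yourCHAPpassword'

# Rescan so that the new LUN shows up (step 10).
esxcli storage core adapter rescan --adapter=vmhba64
```

Option names can vary between ESXi releases, so verify them with esxcli iscsi adapter --help on your host before running the commands.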

You can now use the iSCSI as needed by the host and the VMs you create.

For a more stable connection, mount the network storage on the hypervisor first as described. Then create the VMs, and mount the attached storage volume from the virtual server's OS with a multipath connection. For more information, see Understanding Multipathing and Failover in the ESXi Environment, and Setting Up Network for iSCSI and iSER with ESXi.
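For multipathing, each iSCSI session typically runs over its own VMkernel port that is bound to the software iSCSI adapter. As a hedged sketch (vmk1, vmk2, and vmhba64 are placeholder names for your environment), port binding from the ESXi shell looks like this:

```shell
# Bind two VMkernel NICs to the software iSCSI adapter so that
# each NIC carries its own session to the storage target.
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Confirm the bindings.
esxcli iscsi networkportal list --adapter=vmhba64
```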

Verifying the MPIO configuration

Complete the following steps to view which multipathing policies the host uses for a specific storage device and the status of all available paths to it. If MPIO is configured correctly, each storage volume has a single path group, with the number of paths equal to the number of iSCSI sessions.

  1. In the vSphere Client, go to the ESXi host.

  2. Click the Configure tab.

  3. Under Storage, click Storage Devices.

  4. Select the storage device whose paths you want to view.

  5. Click the Properties tab.

    Under Multipathing Policies, you can also see the Path Selection Policy and, if applicable, the Storage Array Type Policy assigned to the device.

  6. Click the Paths tab to review all paths that are available for the storage device and the status of each path.

    Possible status values for the device paths:
    • Active (I/O): The working path or paths that currently transfer data.
    • Standby: Inactive paths. If the active path fails, they can become operational and start transferring I/O.
    • Disabled: Paths that are deactivated by the administrator.
    • Dead: Paths that are no longer available for processing I/O. A physical medium failure or array misconfiguration can cause this status.

For more information, see Viewing and Managing Storage Paths on ESXi Hosts.

If MPIO isn't configured correctly, your storage device might disconnect and appear offline during a network outage or when IBM Cloud® teams perform maintenance. When it is configured correctly, MPIO provides an extra level of connectivity during those events and keeps an established session to the volume with active read/write operations.