Connecting to iSCSI LUNs on Linux

These instructions cover mounting Block Storage for Classic primarily on RHEL 6 and CentOS 6. Notes for other operating systems are included, but the topic does not cover all Linux® distributions. If you're using another Linux® operating system, refer to the documentation of your specific distribution, and ensure that the multipath configuration supports ALUA for path priority.

Before you begin

If multiple hosts mount the same Block Storage for Classic volume without being cooperatively managed, your data is at risk of corruption. Corruption can occur when multiple hosts make changes to the volume at the same time. To prevent data loss, you need a cluster-aware, shared-disk file system such as Microsoft® Cluster Shared Volumes (CSV), Red Hat Global File System (GFS2), or VMware® VMFS. For more information, see your host's OS documentation.

It's best to run storage traffic on a VLAN, which bypasses the firewall. Running storage traffic through software firewalls increases latency and adversely affects storage performance. For more information about routing storage traffic to its own VLAN interface, see the FAQs.

Before you begin, make sure that the host that is to access the Block Storage for Classic volume is authorized. For more information, see Authorizing the host in the UI, Authorizing the host from the CLI, or Authorizing the host with Terraform.

Mounting Block Storage for Classic volumes

Complete the following steps to connect a Linux®-based IBM Cloud® Compute instance to a multipath input/output (MPIO) iSCSI storage volume. You're going to create two connections from one network interface of your host to two target IP addresses of the storage array.

For more information about Ubuntu specifics, see iSCSI Initiator Configuration and DM-Multipath.

  1. Install the iSCSI and multipath utilities to your host.

    • RHEL and CentOS
    yum install iscsi-initiator-utils device-mapper-multipath
    
    • Ubuntu and Debian
    sudo apt-get update
    sudo apt-get install multipath-tools
    
  2. Create or edit your multipath configuration file if needed.

    • RHEL 6 and CentOS 6

      • Edit /etc/multipath.conf with the following minimum configuration.
      defaults {
          user_friendly_names no
          max_fds max
          flush_on_last_del yes
          queue_without_daemon no
          dev_loss_tmo infinity
          fast_io_fail_tmo 5
      }
      # All data in the following section must be specific to your system.
      blacklist {
          wwid "SAdaptec*"
          devnode "^hd[a-z]"
          devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
          devnode "^cciss.*"
      }
      devices {
          device {
              vendor "NETAPP"
              product "LUN"
              path_grouping_policy group_by_prio
              features "3 queue_if_no_path pg_init_retries 50"
              prio "alua"
              path_checker tur
              failback immediate
              path_selector "round-robin 0"
              hardware_handler "1 alua"
              rr_weight uniform
              rr_min_io 128
          }
      }
      
      • Restart iscsi and iscsid services so that the changes take effect.
      service iscsi restart
      service iscsid restart
      
    • RHEL 7 and CentOS 7: edit /etc/multipath.conf with the following minimum configuration.

      defaults {
          user_friendly_names no
          max_fds max
          flush_on_last_del yes
          queue_without_daemon no
          dev_loss_tmo infinity
          fast_io_fail_tmo 5
      }
      # All data in the following section must be specific to your system.
      blacklist {
          wwid "SAdaptec*"
          devnode "^hd[a-z]"
          devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
          devnode "^cciss.*"
      }
      devices {
          device {
              vendor "NETAPP"
              product "LUN"
              path_grouping_policy group_by_prio
              features "3 queue_if_no_path pg_init_retries 50"
              prio "alua"
              path_checker tur
              failback immediate
              path_selector "round-robin 0"
              hardware_handler "1 alua"
              rr_weight uniform
              rr_min_io 128
          }
      }
      
    • Ubuntu has a multipath configuration that is built into multipath-tools. However, the built-in configuration uses a "service-time 0" load-balancing policy, which can leave your connection vulnerable to interruptions. Create /etc/multipath.conf and update it as follows.

      defaults {
          user_friendly_names no
          max_fds max
          flush_on_last_del yes
          queue_without_daemon no
          dev_loss_tmo infinity
          fast_io_fail_tmo 5
      }
      # All data in the following section must be specific to your system.
      blacklist {
          wwid "SAdaptec*"
          devnode "^hd[a-z]"
          devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
          devnode "^cciss.*"
      }
      devices {
          device {
              vendor "NETAPP"
              product "LUN"
              path_grouping_policy group_by_prio
              features "2 pg_init_retries 50"
              no_path_retry queue
              prio "alua"
              path_checker tur
              failback immediate
              path_selector "round-robin 0"
              hardware_handler "1 alua"
              rr_weight uniform
              rr_min_io 128
          }
      }
      
      • Restart the multipathd service so that the changes take effect.

        systemctl restart multipathd
        
  3. Load the multipath module, start the multipathd service, and set it to start on boot.

    • RHEL 6
      modprobe dm-multipath
      
      service multipathd start
      
      chkconfig multipathd on
      
    • RHEL 7 and CentOS 7
      modprobe dm-multipath
      
      systemctl start multipathd
      
      systemctl enable multipathd
      
    • Ubuntu
      service multipath-tools start
      
    • For other distributions, check the OS vendor Documentation.
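
    Optionally, after multipathd is running, you can print the merged runtime configuration to confirm that your /etc/multipath.conf settings were picked up. On newer multipath-tools releases, the following command works directly; on older releases such as RHEL 6, use the interactive multipathd -k console instead.

      multipathd show config | less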
  4. Verify that the multipath is working.

    • RHEL 6
      multipath -l
      
      If the command returns no output, multipath is working; no multipath devices are configured yet at this point.
    • RHEL 7 and CentOS 7
      multipath -ll
      
      RHEL 7 and CentOS 7 might return No fc_host device, which can be ignored.
  5. Update the /etc/iscsi/initiatorname.iscsi file with the IQN from the IBM Cloud® console. Enter the value in lowercase.

    InitiatorName=<value-from-the-Portal>
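
    For example, with a made-up IQN (your actual value comes from the console), the file contains a single line similar to the following.

    InitiatorName=iqn.2005-05.com.softlayer:sl01su1234567-v87654321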
    
  6. Update the credential settings in /etc/iscsi/iscsid.conf by using the username and password from the IBM Cloud® console. Use uppercase for CHAP names.

    node.session.auth.authmethod = CHAP
    node.session.auth.username = <Username-value-from-Portal>
    node.session.auth.password = <Password-value-from-Portal>
    discovery.sendtargets.auth.authmethod = CHAP
    discovery.sendtargets.auth.username = <Username-value-from-Portal>
    discovery.sendtargets.auth.password = <Password-value-from-Portal>
    

    Leave the other CHAP settings commented. IBM Cloud® storage uses only one-way authentication. Do not enable Mutual CHAP.
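
    To double-check which authentication lines are active, you can list the uncommented CHAP settings. This quick sanity check prints exactly the six lines that you set; everything else in the file stays commented.

    grep -E '^(node\.session|discovery\.sendtargets)\.auth' /etc/iscsi/iscsid.conf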

    If you're using Ubuntu, while you're in the iscsid.conf file, check whether the node.startup setting is manual or automatic. If it's set to manual, change it to automatic.

  7. Set iSCSI to start at boot and start it now.

    • RHEL 6
      chkconfig iscsi on
      
      chkconfig iscsid on
      
      service iscsi start
      
      service iscsid start
      
    • RHEL 7 and CentOS 7
      systemctl enable iscsi
      
      systemctl enable iscsid
      
      systemctl restart iscsi
      
      systemctl restart iscsid
      
    • For other distributions, check the OS vendor Documentation.
  8. Discover the device by using the Target IP address that was obtained from the IBM Cloud® console.

    A. Run the discovery against the iSCSI array.

    iscsiadm -m discovery -t sendtargets -p <ip-value-from-IBM-Cloud-console>
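
    The discovery command prints one line per discovered target portal. With made-up portal addresses and a made-up target IQN, the output looks similar to the following.

    10.2.174.92:3260,1031 iqn.1992-08.com.netapp:stfdal1303
    10.2.174.93:3260,1032 iqn.1992-08.com.netapp:stfdal1303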
    

    B. Log the host in to the iSCSI array.

    iscsiadm -m node -L automatic
    
  9. Verify that the host is logged in to the iSCSI array and maintains its sessions.

    iscsiadm -m session
    
    multipath -l
    

    These commands report the established sessions and paths. It's possible to attach Block Storage for Classic with only a single path, but it is important that connections are established on both paths to ensure that service isn't disrupted.
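
    With two established sessions, made-up output from iscsiadm -m session looks similar to the following, with one line per path.

    tcp: [1] 10.2.174.92:3260,1031 iqn.1992-08.com.netapp:stfdal1303
    tcp: [2] 10.2.174.93:3260,1032 iqn.1992-08.com.netapp:stfdal1303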

  10. Verify that the device is connected by issuing the following command.

fdisk -l | grep /dev/mapper

By default, the device attaches at /dev/mapper/<wwid>. The WWID is the generated worldwide ID of the connected storage device, and it is persistent while the volume exists. The command reports something similar to the following example.

Disk /dev/mapper/3600a0980383030523424457a4a695266: 73.0 GB, 73023881216 bytes

In the example, the string 3600a0980383030523424457a4a695266 is the WWID. Your applications should reference the device by its WWID. It's also possible to assign easier-to-read names by using the "user_friendly_names" or "alias" keywords in multipath.conf. For more information, see the multipath.conf man page.
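
A minimal sketch of an alias assignment in /etc/multipath.conf, which uses the example WWID and an illustrative alias name, looks like the following. After you reload the multipathd service, the device also appears as /dev/mapper/perfdisk0.

multipaths {
    multipath {
        wwid   3600a0980383030523424457a4a695266
        alias  perfdisk0
    }
}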

The volume is now mounted and accessible on the host. You can create a file system next.

Creating a file system (optional)

Follow these steps to create a file system on the newly mounted volume. A file system is necessary for most applications to use the volume. Use fdisk for disks smaller than 2 TB and parted for disks larger than 2 TB.

Creating a file system with fdisk

  1. Get the disk name.

    fdisk -l | grep /dev/mapper
    

    The disk name that is returned looks similar to /dev/mapper/XXX.

  2. Create a partition on the disk.

    fdisk /dev/mapper/XXX
    

    The XXX represents the disk name that is returned in Step 1.

  3. Create a file system on the new partition.

    fdisk -l /dev/mapper/XXX
    
    • The new partition is listed with the disk, with a name similar to XXXp1, followed by its size, type ID (83), and type (Linux®).

    • Take note of the partition name; you need it in the next step. (XXXp1 represents the partition name.)

    • Create the file system:

      mkfs.ext3 /dev/mapper/XXXp1
      
  4. Create a mount point for the file system, and mount it.

    • Create a mount point, for example /PerfDisk, or wherever you want to mount the file system.

      mkdir /PerfDisk
      
    • Mount the storage with the partition name.

      mount /dev/mapper/XXXp1 /PerfDisk
      
    • Check that you see your new file system listed.

      df -h
      
  5. Add the new file system to the system's /etc/fstab file to enable automatic mounting on boot.

    • Append the following line to the end of /etc/fstab (with the partition name from Step 3).

      /dev/mapper/XXXp1    /PerfDisk    ext3    defaults,_netdev    0    1
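
    To validate the new entry without rebooting, you can unmount the file system and remount everything that is listed in /etc/fstab. If mount -a prints no errors, the entry parses correctly.

      umount /PerfDisk
      mount -a
      df -h | grep PerfDisk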
      

For more information about available command options, see fdisk - manipulate disk partition table.

Creating a file system with parted

On many Linux® distributions, parted comes preinstalled. If it isn't included in your distribution, you can install it as follows.

  • Debian and Ubuntu
    sudo apt-get install parted
    
  • RHEL and CentOS
    yum install parted
    

To create a file system with parted, follow these steps.

  1. Run parted.

    parted
    
  2. Create a partition on the disk.

    1. Unless specified otherwise, parted uses your primary drive, which is /dev/sda in most cases. Switch to the disk that you want to partition by using the select command. Replace XXX with your new device name.

      select /dev/mapper/XXX
      
    2. Run print to confirm that you are on the correct disk.

      print
      
    3. Create a GPT partition table.

      mklabel gpt
      
    4. Parted can be used to create primary and logical disk partitions; the steps that are involved are the same. To create a partition, parted uses mkpart. You can give it other parameters, like primary or logical, depending on the partition type that you want to create. Sizes default to megabytes (MB); to create a 10-GB partition, you start at 1 and end at 10000. You can also change the sizing units to terabytes by entering unit TB. A combined, scriptable example follows these substeps.

      mkpart
      
    5. Exit parted with quit.

      quit
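
    Putting the substeps together, the same partitioning can be scripted non-interactively, which is useful for automation. The device name is the same placeholder that is used throughout these steps.

      parted /dev/mapper/XXX mklabel gpt
      parted /dev/mapper/XXX mkpart primary 0% 100%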
      
  3. Create a file system on the new partition.

    mkfs.ext3 /dev/mapper/XXXp1
    

    It's important to select the correct disk and partition when you run this command. Verify the result by printing the partition table; under the file system column, you see ext3.

  4. Create a mount point for the file system and mount it.

    • Create a mount point, for example /PerfDisk, or wherever you want to mount the file system.

      mkdir /PerfDisk
      
    • Mount the storage with the partition name.

      mount /dev/mapper/XXXp1 /PerfDisk
      
    • Check that you see your new file system listed.

      df -h
      
  5. Add the new file system to the system's /etc/fstab file to enable automatic mounting on boot.

    • Append the following line to the end of /etc/fstab (by using the partition name from Step 3).

      /dev/mapper/XXXp1    /PerfDisk    ext3    defaults,_netdev    0    1
      

Verifying MPIO configuration

If MPIO isn't configured correctly, your storage device might disconnect and appear offline when a network outage occurs or when IBM Cloud® teams perform maintenance. MPIO ensures an extra level of connectivity during those events, and keeps an established session to the LUN with active read/write operations.

  • To check whether multipath is picking up the devices, list the current configuration. If it is configured correctly, then each volume has a single group, with a number of paths equal to the number of iSCSI sessions.

    multipath -l
    
    root@server:~# multipath -l
    3600a09803830304f3124457a45757067 dm-1 NETAPP,LUN C-Mode
    size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
    |-+- policy='round-robin 0' prio=-1 status=active
    | `- 6:0:0:101 sdd 8:48 active ready running
    `-+- policy='round-robin 0' prio=-1 status=enabled
     `- 7:0:0:101 sde 8:64 active ready running
    

    The string 3600a09803830304f3124457a45757067 in the example is the unique WWID of the LUN. Each volume is identified by its unique WWID, which is persistent while the volume exists.

  • Confirm that all the disks are present. In a correct configuration, you can expect two disks to show in the output with the same identifier, and a /dev/mapper device of the same size with the same identifier. The /dev/mapper device is the one that multipath sets up.

    fdisk -l | grep Disk
    
    • The following example output shows a correct configuration.
    root@server:~# fdisk -l | grep Disk
    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    Disk identifier: 0x0009170d
    Disk /dev/sdc: 21.5 GB, 21474836480 bytes
    Disk identifier: 0x2b5072d1
    Disk /dev/sdb: 21.5 GB, 21474836480 bytes
    Disk identifier: 0x2b5072d1
    Disk /dev/mapper/3600a09803830304f3124457a45757066: 21.5 GB, 21474836480 bytes
    Disk identifier: 0x2b5072d1
    

    The WWID is included in the device name that multipath creates.

    • The following example output shows an incorrect configuration. The /dev/mapper disk does not exist.
    root@server:~# fdisk -l | grep Disk
    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    Disk identifier: 0x0009170d
    Disk /dev/sdc: 21.5 GB, 21474836480 bytes
    Disk identifier: 0x2b5072d1
    Disk /dev/sdb: 21.5 GB, 21474836480 bytes
    Disk identifier: 0x2b5072d1
    
  • To confirm that no local disks are included in the list of multipath devices, display the current configuration with verbosity level 3. The output of the following command displays the devices and also shows which ones were added to the blocklist.

    multipath -l -v 3 | grep sd
    
  • On rare occasions, a LUN is provisioned and attached while the second path is down. In such instances, the host might see one single path when the discovery scan is run. If you encounter this phenomenon, check the IBM Cloud® status page to see whether a current event might impact your host's ability to access the storage. If no events are reported, perform the discovery scan again to ensure that all paths are properly discovered. If an event is in progress, the storage can be attached with a single path. However, it's essential that paths are rescanned after the event is completed. If both paths are not discovered after the rescan, create a support case so it can be properly investigated.

Unmounting Block Storage for Classic volumes

  1. Unmount the file system.
    umount /PerfDisk
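
    Optionally, you can also flush the volume's now-unused multipath map before you log out. The WWID is the one that multipath -l reports for the volume.

    multipath -f <WWID>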
    
  2. Optional. If you do not have any other volumes in that target portal, you can log out of the target.
    iscsiadm -m node -t <TARGET IQN> -p <PORTAL IP:PORT> --logout
    
  3. Optional. If you do not have any other volumes in that target portal, delete the target portal record to prevent future login attempts.
    iscsiadm -m node -o delete -t <TARGET IQN> -p <PORTAL IP:PORT>
    
    For more information, see the iscsiadm manual.