Mount iSCSI LUN on Debian 10

This tutorial guides you through how to mount an IBM Cloud® Block Storage for Classic volume on a server that runs the Debian 10 "Buster" operating system. Complete the following steps to connect a Linux®-based IBM Cloud® Compute instance to a multipath input/output (MPIO) iSCSI storage volume. You create two connections from one network interface of your host to two target IP addresses of the storage array.

Before you begin

If multiple hosts mount the same Block Storage for Classic volume without being cooperatively managed, your data is at risk for corruption. Volume corruption can occur if changes are made to the volume by multiple hosts at the same time.

To prevent data loss, you need a cluster-aware, shared-disk file system such as Microsoft® Cluster Shared Volumes (CSV), Red Hat Global File System (GFS2), or VMware® VMFS. For more information, see your host's OS documentation.

It's best to run storage traffic on a VLAN, which bypasses the firewall. Running storage traffic through software firewalls increases latency and adversely affects storage performance. For more information about routing storage traffic to its own VLAN interface, see the FAQs.

Before you start configuring iSCSI, make sure that the network interfaces are correctly set up and configured so that the open-iscsi package works correctly, especially at startup time. In Debian 10, the default network configuration tool is ifupdown, which reads /etc/network/interfaces. For more information about how the iSCSI service works on the Debian OS, see Debian as an iSCSI Initiator.
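
As a minimal sketch, assume that the storage traffic uses a private interface named eth0 with a static address; the interface name and addresses are placeholders, so substitute the values for your own host. An /etc/network/interfaces stanza looks similar to the following example.

    # /etc/network/interfaces - illustrative values only
    auto eth0
    iface eth0 inet static
        address 10.100.20.15
        netmask 255.255.255.192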

Also, make sure that the host that is to access the Block Storage for Classic volume is authorized. For more information, see Authorizing the host in the UI, Authorizing the host from the CLI, or Authorizing the host with Terraform.
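
For example, with the IBM Cloud CLI and the classic infrastructure (sl) plug-in, an authorization command looks similar to the following sketch; the volume and host IDs are placeholders.

    # Authorize virtual server instance 12345678 to access volume 87654321
    ibmcloud sl block access-authorize 87654321 --virtual-id 12345678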

Install the iSCSI and multipath utilities

Ensure that your system is updated and includes the open-iscsi and multipath-tools packages. Use the following commands to install the packages.

  • Install open-iscsi.

    apt-get install open-iscsi
    

    When the package is installed, it creates the following two files.

    • /etc/iscsi/iscsid.conf
    • /etc/iscsi/initiatorname.iscsi

    For more information about how the open-iscsi package works on the Debian OS, see Debian as an iSCSI Initiator.
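
    You can confirm that both files exist and view the generated initiator name, which you replace in a later step.

    ls -l /etc/iscsi/iscsid.conf /etc/iscsi/initiatorname.iscsi
    cat /etc/iscsi/initiatorname.iscsi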

  • Install multipath-tools.

    apt-get install multipath-tools
    systemctl restart multipathd
    

Set up the multipath

  1. After you install the multipath utilities, display the current multipath configuration, including the compiled-in defaults, by issuing the following command.

    multipath -t
    
  2. Create or edit /etc/multipath.conf and set the following values.

    defaults {
        user_friendly_names no
        max_fds max
        flush_on_last_del yes
        queue_without_daemon no
        dev_loss_tmo infinity
        fast_io_fail_tmo 5
        find_multipaths no
    }
    # All data in the following section must be specific to your system.
    blacklist {
        wwid "SAdaptec*"
        devnode "^hd[a-z]"
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^cciss.*"
    }
    devices {
        device {
            vendor "NETAPP"
            product "LUN"
            path_grouping_policy group_by_prio
            features "2 pg_init_retries 50"
            no_path_retry queue
            prio "alua"
            path_checker tur
            failback immediate
            path_selector "round-robin 0"
            hardware_handler "1 alua"
            rr_weight uniform
            rr_min_io 128
        }
    }

    The initial defaults section of the configuration file sets user_friendly_names to no, so multipath devices are named in the form /dev/mapper/<WWID>, where the WWID is a unique, persistent identifier of the device.

    For more information, see the multipath.conf manual for Debian Buster.

  3. Save the configuration file and exit the editor, if necessary.

  4. Start the multipath service.

    systemctl restart multipathd
    

    If you edit the multipath configuration file after the multipath daemon is running, you must restart the multipathd service for the changes to take effect.

Update /etc/iscsi/initiatorname.iscsi file

Update the /etc/iscsi/initiatorname.iscsi file with the IQN from the IBM Cloud® console. Enter the value in lowercase.

    InitiatorName=<value-from-the-Portal>
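
You can edit the file in place, or set the value with a one-line command similar to the following sketch. Replace the placeholder with the IQN from the console.

    sudo sed -i 's|^InitiatorName=.*|InitiatorName=<value-from-the-Portal>|' /etc/iscsi/initiatorname.iscsi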

Configure credentials

Edit the following settings in /etc/iscsi/iscsid.conf by using the username and password from the IBM Cloud® console. Use uppercase for CHAP names.

    node.session.auth.authmethod = CHAP
    node.session.auth.username = <Username-value-from-Portal>
    node.session.auth.password = <Password-value-from-Portal>
    discovery.sendtargets.auth.authmethod = CHAP
    discovery.sendtargets.auth.username = <Username-value-from-Portal>
    discovery.sendtargets.auth.password = <Password-value-from-Portal>

Leave the other CHAP settings commented. IBM Cloud® storage uses only one-way authentication. Do not enable Mutual CHAP.
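
To double-check the result, list the active, uncommented authentication settings.

    grep -E '^(node\.session|discovery\.sendtargets)\.auth' /etc/iscsi/iscsid.conf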

Restart the iscsid service for the changes to take effect.

    systemctl restart iscsid.service

Discover the storage device and log in

The iscsiadm utility is a command-line tool that handles the discovery and login to iSCSI targets, plus access and management of the open-iscsi database. For more information, see the iscsiadm(8) man page. In this step, discover the device by using the Target IP address that was obtained from the IBM Cloud® console.

  1. Run the discovery against the iSCSI array.

    sudo iscsiadm -m discovery --op=new --op=del --type sendtargets --portal <ip-value-from-IBM-Cloud-console>
    

    If the IP information and access details are displayed, then the discovery is successful.
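
    The output looks similar to the following illustrative lines; your target addresses and IQN differ.

    10.2.174.92:3260,1027 iqn.1992-08.com.netapp:stfdal1303
    10.2.174.93:3260,1028 iqn.1992-08.com.netapp:stfdal1303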

  2. Configure automatic login.

    sudo iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
    sudo iscsiadm -m node --op=update -n node.startup -v automatic
    
  3. Enable the necessary services.

    systemctl enable open-iscsi
    systemctl enable iscsid
    
  4. Restart the iscsid service.

    systemctl restart iscsid.service
    
  5. Log in to the iSCSI array.

    sudo iscsiadm -m node --loginall=automatic
    

Verifying configuration

  1. Validate that the iSCSI session is established.

    iscsiadm -m session -o show
    
  2. Validate that multiple paths exist.

    multipath -ll
    

    This command reports the paths. If multipath is configured correctly, each volume has a single group with a number of paths equal to the number of iSCSI sessions. It's possible to attach Block Storage for Classic with only a single path, but it's important to establish connections on both paths to ensure that service isn't disrupted.

    $ sudo multipath -ll
    mpathb (360014051f65c6cb11b74541b703ce1d4) dm-1 LIO-ORG,TCMU device
    size=1.0G features='0' hwhandler='0' wp=rw
    |-+- policy='service-time 0' prio=1 status=active
    | `- 7:0:0:2 sdh 8:112 active ready running
    `-+- policy='service-time 0' prio=1 status=enabled
      `- 8:0:0:2 sdg 8:96  active ready running
    mpatha (36001405b816e24fcab64fb88332a3fc9) dm-0 LIO-ORG,TCMU device
    size=1.0G features='0' hwhandler='0' wp=rw
    |-+- policy='service-time 0' prio=1 status=active
    | `- 7:0:0:1 sdj 8:144 active ready running
    `-+- policy='service-time 0' prio=1 status=enabled
      `- 8:0:0:1 sdi 8:128 active ready running
    

    If MPIO isn't configured correctly, your storage device might disconnect and appear offline when a network outage occurs or when IBM Cloud® teams perform maintenance. MPIO ensures an extra level of connectivity during those events, and keeps an established session to the LUN with active read/write operations.

    In the example, 36001405b816e24fcab64fb88332a3fc9 is the WWID, which persists for the life of the volume. It's best for your application to reference the WWID. You can also assign easier-to-read names by using the user_friendly_names or alias keywords in multipath.conf. For more information, see the multipath.conf man page.
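
    For example, a hypothetical alias stanza in /etc/multipath.conf might look like the following; it uses the WWID from the example output, and the device then appears as /dev/mapper/blockvol01.

    multipaths {
        multipath {
            wwid  36001405b816e24fcab64fb88332a3fc9
            alias blockvol01
        }
    }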

  3. Check dmesg to make sure that the new disks are detected.

    dmesg
    

Creating a partition and a file system (optional)

After the volume is attached and visible on the host, you can partition it and create a file system. Follow these steps to create a file system on the newly attached volume.

  1. Create a partition.

    $ sudo fdisk /dev/mapper/mpatha
    
    Welcome to fdisk (util-linux 2.34).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.
    
    Device does not contain a recognized partition table.
    Created a new DOS disklabel with disk identifier 0x92c0322a.
    
    Command (m for help): p
    Disk /dev/mapper/mpatha: 1 GiB, 1073741824 bytes, 2097152 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 65536 bytes
    Disklabel type: dos
    Disk identifier: 0x92c0322a
    
    Command (m for help): n
    Partition type
       p   primary (0 primary, 0 extended, 4 free)
       e   extended (container for logical partitions)
    Select (default p): p
    Partition number (1-4, default 1):
    First sector (2048-2097151, default 2048):
    Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-2097151, default 2097151):
    
    Created a new partition 1 of type 'Linux' and of size 1023 MiB.
    
    Command (m for help): w
    The partition table has been altered.
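
    The kernel doesn't always detect a new partition on a multipath device immediately. If /dev/mapper/mpatha-part1 doesn't appear, you can trigger a re-read of the partition table with partprobe (from the parted package) or kpartx (from the kpartx package, built from the multipath-tools source).

    sudo partprobe /dev/mapper/mpatha
    sudo kpartx -a /dev/mapper/mpatha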
    
  2. Create the file system.

    $ sudo mkfs.ext4 /dev/mapper/mpatha-part1
    mke2fs 1.45.5 (07-Jan-2020)
    Creating filesystem with 261888 4k blocks and 65536 inodes
    Filesystem UUID: cdb70b1e-c47c-47fd-9c4a-03db6f038988
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376
    
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (4096 blocks): done
    Writing superblocks and filesystem accounting information: done
    
  3. Mount the block device.

    sudo mount /dev/mapper/mpatha-part1 /mnt
    
  4. Access the data to confirm that the new partition and file system are ready for use.

    ls /mnt
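
To make the mount persist across reboots, you can add an entry to /etc/fstab. The following sketch uses the file system UUID that mkfs.ext4 reported in the earlier example; the _netdev option delays the mount until the network, and therefore the iSCSI session, is available.

    # /etc/fstab - illustrative UUID from the mkfs.ext4 output
    UUID=cdb70b1e-c47c-47fd-9c4a-03db6f038988  /mnt  ext4  defaults,_netdev  0  2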