FC SAN Storage multipath with Proxmox VE 6.3

pramodsd

New Member
Apr 7, 2021
Hello All,

We are testing Proxmox VE 6.3 in our environment. Our configuration is set up as follows:
1. 2 x HPE DL360 servers, each running Proxmox VE 6.3 with a 2-port FC HBA.
2. Cisco MDS 9148 SAN Switches.
3. QSAN Xevo 2026 AFA with FC HBA
4. Jetstor 812 FXD with FC HBA

I am trying to configure LVM on shared LUNs from the SAN storage. I can successfully configure one LUN as an LVM volume from one storage. But as soon as I connect the second storage, or add another LUN from the same storage, multipath starts behaving strangely: the LUNs do not show up as separate devices, and all the paths for the storage LUNs merge into a single map. Below is my multipath.conf file.

root@pvenode1:~# cat /etc/multipath.conf
defaults {
    find_multipaths "on"
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    uid_attribute ID_SERIAL
    rr_min_io 100
    failback immediate
    no_path_retry queue
    user_friendly_names yes
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^(td|hd)[a-z]"
    devnode "^dcssblk[0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*"
    device {
        vendor "DGC"
        product "LUNZ"
    }
    device {
        vendor "EMC"
        product "LUNZ"
    }
    device {
        vendor "IBM"
        product "Universal Xport"
    }
    device {
        vendor "IBM"
        product "S/390.*"
    }
    device {
        vendor "DELL"
        product "Universal Xport"
    }
    device {
        vendor "SGI"
        product "Universal Xport"
    }
    device {
        vendor "STK"
        product "Universal Xport"
    }
    device {
        vendor "SUN"
        product "Universal Xport"
    }
    device {
        vendor "(NETAPP|LSI|ENGENIO)"
        product "Universal Xport"
    }
}

blacklist_exceptions {
    wwid "3201f0013780e2d80"
    wwid "320200013780e2d80"
}

multipaths {
    multipath {
        wwid "3201f0013780e2d80"
        alias mpath000
    }
    multipath {
        wwid "320200013780e2d80"
        alias mpath001
    }
}
root@pvenode1:~#
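For reference, here is a quick way to compare the WWID each path device reports (just a sketch, using the same scsi_id call shown further below; the device names are the ones on this host). With two LUNs mapped, two distinct WWIDs should come back:

Code:
# print the WWID reported by each SAN path device
for dev in sdb sdc sdd sde sdf sdg sdh sdi; do
    printf '%s: ' "$dev"
    /lib/udev/scsi_id -g -u -d "/dev/$dev"
done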


multipath -ll output with one LUN, followed by a rescan and the output after mapping a second LUN from the same storage:

root@pvenode1:~# multipath -ll
mpatha (3) dm-5 QSAN,XF2026
size=1.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 1:0:0:14 sdb 8:16 active ready running
|- 2:0:0:14 sdd 8:48 active ready running
|- 1:0:1:14 sdc 8:32 active ready running
`- 2:0:1:14 sde 8:64 active ready running

root@pvenode1:~# rescan-scsi-bus.sh
Scanning SCSI subsystem for new devices
Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning for device 0 0 0 4194240 ...
NEW: Host: scsi0 Channel: 00 Id: 00 Lun: 4194240
Vendor: HP Model: P420i Rev: 8.32
Type: RAID ANSI SCSI revision: 05
Scanning for device 0 0 0 0 ...
OLD: Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: HP Model: P420i Rev: 8.32
Type: RAID ANSI SCSI revision: 05
Scanning for device 0 1 0 0 ... 0 ...
OLD: Host: scsi0 Channel: 01 Id: 00 Lun: 00
Vendor: HP Model: LOGICAL VOLUME Rev: 8.32
Type: Direct-Access ANSI SCSI revision: 05
Scanning host 1 for all SCSI target IDs, all LUNs
Scanning for device 1 0 0 14 ...
OLD: Host: scsi1 Channel: 00 Id: 00 Lun: 14
Vendor: QSAN Model: XF2026 Rev: 120
Type: Direct-Access ANSI SCSI revision: 04
Scanning for device 1 0 0 17 ...
NEW: Host: scsi1 Channel: 00 Id: 00 Lun: 17
Vendor: QSAN Model: XF2026 Rev: 120
Type: Direct-Access ANSI SCSI revision: 04
Scanning for device 1 0 1 14 ...
OLD: Host: scsi1 Channel: 00 Id: 01 Lun: 14
Vendor: QSAN Model: XF2026 Rev: 120
Type: Direct-Access ANSI SCSI revision: 04
Scanning for device 1 0 1 17 ...
NEW: Host: scsi1 Channel: 00 Id: 01 Lun: 17
Vendor: QSAN Model: XF2026 Rev: 120
Type: Direct-Access ANSI SCSI revision: 04
Scanning host 2 for all SCSI target IDs, all LUNs
Scanning for device 2 0 0 14 ...
OLD: Host: scsi2 Channel: 00 Id: 00 Lun: 14
Vendor: QSAN Model: XF2026 Rev: 120
Type: Direct-Access ANSI SCSI revision: 04
Scanning for device 2 0 0 17 ...
NEW: Host: scsi2 Channel: 00 Id: 00 Lun: 17
Vendor: QSAN Model: XF2026 Rev: 120
Type: Direct-Access ANSI SCSI revision: 04
Scanning for device 2 0 1 14 ...
OLD: Host: scsi2 Channel: 00 Id: 01 Lun: 14
Vendor: QSAN Model: XF2026 Rev: 120
Type: Direct-Access ANSI SCSI revision: 04
Scanning for device 2 0 1 17 ...
NEW: Host: scsi2 Channel: 00 Id: 01 Lun: 17
Vendor: QSAN Model: XF2026 Rev: 120
Type: Direct-Access ANSI SCSI revision: 04
5 new or changed device(s) found.
[0:0:0:4194240]
[1:0:0:17]
[1:0:1:17]
[2:0:0:17]
[2:0:1:17]
0 remapped or resized device(s) found.
0 device(s) removed.
root@pvenode1:~# fdisk -l
Disk /dev/sda: 838.3 GiB, 900151926784 bytes, 1758109232 sectors
Disk model: LOGICAL VOLUME
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes
Disklabel type: gpt
Disk identifier: 2E035AA8-E334-4CE4-8545-82678BC1EB24

Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 1050623 1048576 512M EFI System
/dev/sda3 1050624 1758109198 1757058575 837.9G Linux LVM

Partition 1 does not start on physical sector boundary.


Disk /dev/sdc: 1 TiB, 1099511627776 bytes, 2147483648 sectors
Disk model: XF2026
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/sdb: 1 TiB, 1099511627776 bytes, 2147483648 sectors
Disk model: XF2026
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes


Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes


Disk /dev/sde: 1 TiB, 1099511627776 bytes, 2147483648 sectors
Disk model: XF2026
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/sdd: 1 TiB, 1099511627776 bytes, 2147483648 sectors
Disk model: XF2026
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/mapper/mpatha: 1 TiB, 1099511627776 bytes, 2147483648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/mapper/sharedvg-vm--100--disk--0: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes
Disklabel type: dos
Disk identifier: 0x000107ce

Device Boot Start End Sectors Size Id Type
/dev/mapper/sharedvg-vm--100--disk--0-part1 * 2048 2099199 2097152 1G 83 Linux
/dev/mapper/sharedvg-vm--100--disk--0-part2 2099200 209715199 207616000 99G 8e Linux LVM


Disk /dev/sdf: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk model: XF2026
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/sdg: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk model: XF2026
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/sdh: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk model: XF2026
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/sdi: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk model: XF2026
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes
root@pvenode1:~# /lib/udev/scsi_id -g -u -d /dev/sdf
320200013780e2d80
root@pvenode1:~# multipath -a 320200013780e2d80
wwid '320200013780e2d80' added

root@pvenode1:~# multipath -ll
mpatha (3) dm-5 QSAN,XF2026
size=500G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 1:0:0:14 sdb 8:16 active ready running
|- 2:0:0:14 sdd 8:48 active ready running
|- 1:0:0:17 sdf 8:80 active ready running
|- 2:0:0:17 sdh 8:112 active ready running
|- 1:0:1:14 sdc 8:32 active ready running
|- 2:0:1:14 sde 8:64 active ready running
|- 1:0:1:17 sdg 8:96 active ready running
`- 2:0:1:17 sdi 8:128 active ready running

What am I doing wrong?

Please help.
 
At first glance your multipath is not working correctly: the WWID is missing from the multipath -ll output and only your prefix 3 is there. I find the blacklist-everything-then-whitelist approach quite confusing; I just blacklist the controller devices and keep a minimal multipath.conf.
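If it helps, these are the commands I would use to check what multipathd actually derived for each path (a sketch; on older multipath-tools versions the first one may need the interactive form multipathd -k):

Code:
# list each path together with the WWID multipathd derived for it
multipathd show paths format "%d %w %s"

# with find_multipaths "on", WWIDs that multipath accepted are recorded here
cat /etc/multipath/wwids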

I do not have a PVE 7 SAN environment yet, but I plan to set one up this week as part of an upgrade to PVE 7, so I may run into the same issue you have here; I will report back.
 
I could not find anything wrong with the PVE 7 setup; apart from setting find_multipaths "on", nothing else was required to get it working immediately.

My configuration looks like this:

Code:
defaults {
    find_multipaths "on"
}

multipaths {
    multipath {
        wwid            360.............................0
        alias           DX200_PROXMOX_01
    }
}

blacklist {
    device {
        vendor "LSI"
        product ".*"
    }
}

devices {
    device {
        vendor                  "FUJITSU"
        product                 "ETERNUS_DXL"
        prio                    alua
        path_grouping_policy    group_by_prio
        path_selector           "round-robin 0"
        failback                immediate
        no_path_retry           0
        path_checker            tur
        dev_loss_tmo            2097151
        fast_io_fail_tmo        1
    }
}
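To put the configuration to use, this is roughly the sequence I follow afterwards (sketch only: the VG name san_vg01 and storage ID san-lvm01 are just examples, and the pvesm options should be checked against your PVE version):

Code:
# apply the new multipath configuration and verify the alias shows up
systemctl reload multipathd        # or: multipathd reconfigure
multipath -ll

# create the shared LVM volume group on top of the multipath device (on one node only)
pvcreate /dev/mapper/DX200_PROXMOX_01
vgcreate san_vg01 /dev/mapper/DX200_PROXMOX_01

# register it as shared LVM storage in Proxmox VE
pvesm add lvm san-lvm01 --vgname san_vg01 --shared 1 --content images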
 
