We are working on migrating off of VMware to Proxmox. We have new servers with dual-port QLogic Fibre Channel cards that connect across dual paths to our HP 3PAR storage. I've built 8 nodes so far, and we boot the nodes from SAN disks. On 7 of the 8, the boot drive does not show up in multipath -ll. Oddly, on the last node I built it does, and I can't figure out why. That is the outcome I want on all 8 of them, because then the pve VG also shows up as valid.
Here is the output of multipath -ll and vgs from node 8.
mpatha (360002ac0000000002a004cf90000c4c5) dm-0 3PARdata,VV
size=100G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
|- 1:0:0:0 sda 8:0 active ready running
|- 1:0:1:0 sde 8:64 active ready running
|- 10:0:1:0 sdm 8:192 active ready running
`- 10:0:0:0 sdi 8:128 active ready running
mpathb (360002ac00000000029004ec00000c4c5) dm-10 3PARdata,VV
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
|- 1:0:0:1 sdb 8:16 active ready running
|- 1:0:1:1 sdf 8:80 active ready running
|- 10:0:1:1 sdn 8:208 active ready running
`- 10:0:0:1 sdj 8:144 active ready running
mpathc (360002ac000000000290053ce0000c4c5) dm-4 3PARdata,VV
size=750G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
|- 1:0:0:2 sdc 8:32 active ready running
|- 1:0:1:2 sdg 8:96 active ready running
|- 10:0:0:2 sdk 8:160 active ready running
`- 10:0:1:2 sdo 8:224 active ready running
mpathd (360002ac0000000002a0033640000c4c5) dm-11 3PARdata,VV
size=20T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
|- 1:0:0:4 sdd 8:48 active ready running
|- 1:0:1:4 sdh 8:112 active ready running
|- 10:0:0:4 sdl 8:176 active ready running
`- 10:0:1:4 sdp 8:240 active ready running
VG #PV #LV #SN Attr VSize VFree
hpstorage 2 67 0 wz--n- <30.00t 21.03t
pve 1 3 0 wz--n- 99.48g 12.36g
As you can see, the boot drive shows up as mpatha in the multipath -ll output, and pve shows up as a volume group.
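For reference, this is a quick way to confirm which block device actually backs the pve VG (just a sketch; the device names are from node 8 and will differ on other nodes):

# show which PV each VG sits on
pvs -o pv_name,vg_name,vg_size
# confirm the PV is the multipath map rather than a raw sdX path
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT /dev/mapper/mpatha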
The other 7 nodes look like this.
mpathb (360002ac00000000029004ec00000c4c5) dm-6 3PARdata,VV
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
|- 9:0:0:1 sdb 8:16 active ready running
|- 9:0:1:1 sdf 8:80 active ready running
`- 10:0:0:1 sdj 8:144 active ready running
mpathc (360002ac000000000290053ce0000c4c5) dm-1 3PARdata,VV
size=750G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
|- 9:0:0:2 sdc 8:32 active ready running
|- 9:0:1:2 sdg 8:96 active ready running
`- 10:0:0:2 sdk 8:160 active ready running
mpathd (360002ac0000000002a0033640000c4c5) dm-7 3PARdata,VV
size=20T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
|- 9:0:0:4 sdd 8:48 active ready running
|- 9:0:1:4 sdh 8:112 active ready running
`- 10:0:0:4 sdl 8:176 active ready running
VG #PV #LV #SN Attr VSize VFree
hpstorage 2 67 0 wz--n- <30.00t 21.03t
They don't have an mpatha mapped, nor do they show the pve VG.
I've tried adding the WWID to /etc/multipath/wwids, and there is no change. If I run multipath /dev/sda on those nodes, I get:
78835.220363 | mpatha: addmap [0 209715200 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 3 1 8:0 1 8:64 1 8:128 1]
78835.220740 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on mpatha (252:75) failed: Device or resource busy
78835.220994 | dm_addmap: libdm task=0 error: Success
78835.221111 | mpatha: ignoring map
78835.222761 | mpatha: addmap [0 209715200 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 3 1 8:0 1 8:64 1 8:128 1]
78835.222976 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on mpatha (252:75) failed: Device or resource busy
78835.223071 | dm_addmap: libdm task=0 error: Success
78835.223099 | mpatha: ignoring map
78835.224350 | mpatha: addmap [0 209715200 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 3 1 8:0 1 8:64 1 8:128 1]
78835.224652 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on mpatha (252:75) failed: Device or resource busy
78835.224767 | dm_addmap: libdm task=0 error: Success
78835.224794 | mpatha: ignoring map
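I assume the "Device or resource busy" means something already has the underlying path devices open, though I haven't confirmed what. A sketch of the checks for that (sda is just the example device here):

# anything listed here already holds the device and will block the multipath map
ls /sys/block/sda/holders/
# show what is stacked on top of the path device (partitions, LVM, etc.)
lsblk /dev/sda
# list existing device-mapper devices and how they stack
dmsetup ls --tree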
I've installed multipath-tools-boot on all of the nodes and run 'update-initramfs -k all -u' on all of them. They all have the same multipath.conf and lvm.conf files, yet of the 8 nodes only one behaves the way I expect it to. Looking for ideas here.
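For what it's worth, here is a rough way to check that the multipath config and wwids actually end up inside the initramfs (the kernel version in the path is just an example):

# list initramfs contents and look for the multipath pieces
lsinitramfs /boot/initrd.img-$(uname -r) | grep -iE 'multipath|wwids'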
TIA