[SOLVED] LVM on iSCSI multipath not accessible after node reboot

semira uthsala

Hi all,

I'm testing an iSCSI multipath setup with a 3-node Proxmox cluster.

My setup is as follows:

1x Debian server running tgt, with 4x NICs (2x LUNs)

Code:
<target iqn.2020-03.pvelab.srv:tar01>
    backing-store /dev/sdb
    backing-store /dev/sdc
    initiator-address 192.168.132.0/24
</target>

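(For sanity-checking on the storage box, the exported targets and LUNs can be listed with tgt-admin, which ships with the Debian tgt package. A quick check, nothing more:)

Code:
# on the Debian storage node: list targets, LUNs and connected initiators
tgt-admin --show
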
3x PVE 6.1 nodes with multipath configured

1. /etc/iscsi/iscsid.conf

Code:
node.startup = automatic
node.session.timeo.replacement_timeout = 15

2. Configured all 4x interfaces from the GUI (a rough CLI equivalent is sketched below)

Screenshot_69.png

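(For reference, a rough CLI equivalent of the GUI step; a sketch only, using one of the portal addresses that shows up in the session list further down:)

Code:
# discover the target on one portal (repeat for each portal address)
iscsiadm -m discovery -t sendtargets -p 192.168.132.20

# log in to the discovered nodes
iscsiadm -m node --login
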
3. /etc/multipath/multipath.conf

Code:
defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        uid_attribute           ID_SERIAL
        rr_min_io               100
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
        find_multipaths         no
}

blacklist {
        wwid .*
}

blacklist_exceptions {
        wwid "360000000000000000e00000000010001"
        wwid "360000000000000000e00000000010002"
}

multipaths {
  multipath {
        wwid "360000000000000000e00000000010001"
        alias LUN-DISK-01-1500GB
  }
  multipath {
        wwid "360000000000000000e00000000010002"
        alias LUN-DISK-02-2000GB
  }
}

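(In case it helps anyone reproducing this: the WWIDs for blacklist_exceptions can be read straight from the devices, and multipathd has to pick up the new config afterwards. A rough sketch; the /dev/sdX name differs per node:)

Code:
# print the WWID of one of the iSCSI disks
/lib/udev/scsi_id -g -u -d /dev/sdb

# reload the configuration and the maps
systemctl restart multipathd
multipath -r
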
4. After this setup I can see the multipath devices when I run multipath -ll

Code:
root@node-01:~# multipath -ll
LUN-DISK-02-2000GB (360000000000000000e00000000010002) dm-6 IET,VIRTUAL-DISK
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 34:0:0:2 sdf 8:80  active ready running
  |- 36:0:0:2 sdg 8:96  active ready running
  |- 33:0:0:2 sdc 8:32  active ready running
  `- 35:0:0:2 sdi 8:128 active ready running
LUN-DISK-01-1500GB (360000000000000000e00000000010001) dm-7 IET,VIRTUAL-DISK
size=1.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 36:0:0:1 sde 8:64  active ready running
  |- 34:0:0:1 sdd 8:48  active ready running
  |- 33:0:0:1 sdb 8:16  active ready running
  `- 35:0:0:1 sdh 8:112 active ready running

5. Output of lsscsi

Code:
lsscsi
[2:0:0:0]    disk    VMware   Virtual disk     2.0   /dev/sda
[3:0:0:0]    cd/dvd  NECVMWar VMware SATA CD00 1.00  /dev/sr0
[33:0:0:0]   storage IET      Controller       0001  -
[33:0:0:1]   disk    IET      VIRTUAL-DISK     0001  /dev/sdb
[33:0:0:2]   disk    IET      VIRTUAL-DISK     0001  /dev/sdc
[34:0:0:0]   storage IET      Controller       0001  -
[34:0:0:1]   disk    IET      VIRTUAL-DISK     0001  /dev/sdd
[34:0:0:2]   disk    IET      VIRTUAL-DISK     0001  /dev/sdf
[35:0:0:0]   storage IET      Controller       0001  -
[35:0:0:1]   disk    IET      VIRTUAL-DISK     0001  /dev/sdh
[35:0:0:2]   disk    IET      VIRTUAL-DISK     0001  /dev/sdi
[36:0:0:0]   storage IET      Controller       0001  -
[36:0:0:1]   disk    IET      VIRTUAL-DISK     0001  /dev/sde
[36:0:0:2]   disk    IET      VIRTUAL-DISK     0001  /dev/sdg

6. Created a PV and a VG on the multipath devices (commands sketched below, after the vgs output)

Code:
root@node-01:~# pvs
  PV                             VG                 Fmt  Attr PSize    PFree
  /dev/mapper/LUN-DISK-01-1500GB LUN-DISK-01-1500GB lvm2 a--    <1.50t <1.50t
  /dev/mapper/LUN-DISK-02-2000GB LUN-DISK-02-2000GB lvm2 a--    <2.00t <2.00t
  /dev/sda3                      pve                lvm2 a--  <199.50g 15.99g

Code:
root@node-01:~# vgs
  VG                 #PV #LV #SN Attr   VSize    VFree
  LUN-DISK-01-1500GB   1   0   0 wz--n-   <1.50t <1.50t
  LUN-DISK-02-2000GB   1   0   0 wz--n-   <2.00t <2.00t
  pve                  1   3   0 wz--n- <199.50g 15.99g

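(For completeness, the PV/VG creation boils down to something like this, using the aliases from the multipath config above:)

Code:
pvcreate /dev/mapper/LUN-DISK-01-1500GB
pvcreate /dev/mapper/LUN-DISK-02-2000GB
vgcreate LUN-DISK-01-1500GB /dev/mapper/LUN-DISK-01-1500GB
vgcreate LUN-DISK-02-2000GB /dev/mapper/LUN-DISK-02-2000GB
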
7. Then I created LVM storage on top of those VGs using the GUI (the resulting storage.cfg entry is sketched below)

Screenshot_70.png

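(The GUI step ends up as LVM entries in /etc/pve/storage.cfg, roughly along these lines; the storage ID "lun-disk-01" is just an example name, it is whatever you type in the dialog:)

Code:
lvm: lun-disk-01
        vgname LUN-DISK-01-1500GB
        content images,rootdir
        shared 1
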
8. After this I can access both LUNs over multipath and everything works fine.


------------------------

Then, as part of my test, I did a complete hard shutdown of all PVE nodes and the storage node to replicate a power-failure situation, and then started all hosts again.

After startup I can no longer use the created LVM storage, and I also cannot see the iSCSI block devices.

1. multipath -v3 output

Code:
root@node-01:~# multipath -v3
Apr 01 14:06:04 | set open fds limit to 1048576/1048576
Apr 01 14:06:04 | loading //lib/multipath/libchecktur.so checker
Apr 01 14:06:04 | checker tur: message table size = 3
Apr 01 14:06:04 | loading //lib/multipath/libprioconst.so prioritizer
Apr 01 14:06:04 | foreign library "nvme" loaded successfully
Apr 01 14:06:04 | sr0: blacklisted, udev property missing
Apr 01 14:06:04 | sda: blacklisted, udev property missing
Apr 01 14:06:04 | loop0: blacklisted, udev property missing
Apr 01 14:06:04 | loop1: blacklisted, udev property missing
Apr 01 14:06:04 | loop2: blacklisted, udev property missing
Apr 01 14:06:04 | loop3: blacklisted, udev property missing
Apr 01 14:06:04 | loop4: blacklisted, udev property missing
Apr 01 14:06:04 | loop5: blacklisted, udev property missing
Apr 01 14:06:04 | loop6: blacklisted, udev property missing
Apr 01 14:06:04 | loop7: blacklisted, udev property missing
Apr 01 14:06:04 | dm-0: blacklisted, udev property missing
Apr 01 14:06:04 | dm-1: blacklisted, udev property missing
Apr 01 14:06:04 | dm-2: blacklisted, udev property missing
Apr 01 14:06:04 | dm-3: blacklisted, udev property missing
Apr 01 14:06:04 | dm-4: blacklisted, udev property missing
Apr 01 14:06:05 | dm-5: blacklisted, udev property missing
===== no paths =====
Apr 01 14:06:05 | libdevmapper version 1.02.155 (2018-12-18)
Apr 01 14:06:05 | DM multipath kernel driver v1.13.0
Apr 01 14:06:05 | unloading const prioritizer
Apr 01 14:06:05 | unloading tur checker

2. multipath -ll output is empty

3. lsscsi output

Code:
root@node-01:~# lsscsi
[2:0:0:0]    disk    VMware   Virtual disk     2.0   /dev/sda
[3:0:0:0]    cd/dvd  NECVMWar VMware SATA CD00 1.00  /dev/sr0
[33:0:0:0]   storage IET      Controller       0001  -
[34:0:0:0]   storage IET      Controller       0001  -
[35:0:0:0]   storage IET      Controller       0001  -
[36:0:0:0]   storage IET      Controller       0001  -

4. iscsiadm -m session output

Code:
iscsiadm -m session
tcp: [5] 192.168.132.22:3260,1 iqn.2020-03.pvelab.srv:tar01 (non-flash)
tcp: [6] 192.168.132.21:3260,1 iqn.2020-03.pvelab.srv:tar01 (non-flash)
tcp: [7] 192.168.132.6:3260,1 iqn.2020-03.pvelab.srv:tar01 (non-flash)
tcp: [8] 192.168.132.20:3260,1 iqn.2020-03.pvelab.srv:tar01 (non-flash)

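(The sessions are clearly logged in, but no disks hang off them. Forcing a rescan of the sessions would be the obvious next check here, e.g.:)

Code:
# rescan all logged-in iSCSI sessions for LUNs
iscsiadm -m session --rescan
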
What causes this issue?

I also noticed that if I do not create LVM on top of the multipath devices before the hard shutdown, none of this happens. Is this because I created the PV and VG on the Proxmox end?


Update
-----------------------------------------------

When I run the "pvs" and "vgs" commands on the storage node, I can see the physical volumes and volume groups that were created by the PVE node.

I removed all the LVs and VGs and wiped the PVs from the storage node end (using vgremove and pvremove), then recreated the PV and VG from the storage node itself and restarted tgt (without changing the target configuration). Multipath came up immediately on the PVE side.

And I can see the PVs and VGs on the PVE side too.
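
(Roughly the commands this corresponds to on the storage node; a sketch, the VG names are the ones that show up in vgs there and the backing devices are the ones from the tgt config above:)

Code:
# deactivate and remove the LVM metadata that was written from the PVE side
vgchange -an LUN-DISK-01-1500GB LUN-DISK-02-2000GB
vgremove LUN-DISK-01-1500GB LUN-DISK-02-2000GB
pvremove /dev/sdb /dev/sdc

# restart the target (Debian service name for the tgt package)
systemctl restart tgt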

So does this mean I have to create the PV and VG from the storage side first and then expose it as a LUN?
 

Hi All,

I managed to solve this issue. It was not on the Proxmox end but unfortunately on my Debian iSCSI target.

The issue was LVM on the Debian target locking the physical volumes I am trying to expose through tgt at boot time. I tested with CentOS 7 and everything works smoothly.
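
(This also matches the update above: LVM on the Debian box picks up the PV/VG that lives on the backing stores. Something like this shows it; a sketch:)

Code:
# on the Debian target: check whether local LVM has grabbed the backing stores
pvs /dev/sdb /dev/sdc
vgs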
 
Hi,

Same problem with a new install of Proxmox 6. I get no output from multipath -ll (no problem with the old Proxmox version on my hardware).
I followed the configuration on the Proxmox wiki and the same config as written by Semira.

Can someone help me fix this, please?
 


I think you missed the option "find_multipaths no" :-) Check again.
 
Hello,

You rock!!! :D
Thank you very much, and on top of that you answered very quickly! I've been on Google for hours, phew :)
All the best.
 
You can try this:

Edit /etc/lvm/lvm.conf and add:

Code:
filter = [ "r|/dev/sdb|", "r|/dev/sdc|", "a/.*/" ]

Then run in a terminal:

Code:
update-initramfs -u -k `uname -r`

and reboot.

Good luck!
 
