Multipath not working after Proxmox 7 upgrade

l.ansaloni

Hi,
I upgraded one of the blades in my Intel Modular Server from Proxmox VE 6.4 to 7.0.

After the reboot, multipath no longer shows any devices:
Code:
root@proxmox106:~# multipath -ll
root@proxmox106:~#

On another node, still running Proxmox VE 6.4, I get:

Code:
root@proxmox105:~# multipath -ll
sistema (222be000155bb7f72) dm-0 Intel,Multi-Flex
size=20G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 0:0:0:0 sda 8:0   active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 0:0:1:0 sde 8:64  active ready running
vol2 (222640001555bdbf2) dm-6 Intel,Multi-Flex
size=1.3T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 0:0:1:2 sdg 8:96  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 0:0:0:2 sdc 8:32  active ready running
volssd (22298000155c08ddd) dm-8 Intel,Multi-Flex
size=2.1T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 0:0:0:3 sdd 8:48  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 0:0:1:3 sdh 8:112 active ready running
vol1 (222d9000155080015) dm-4 Intel,Multi-Flex
size=1.3T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 0:0:0:1 sdb 8:16  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 0:0:1:1 sdf 8:80  active ready running

multipath -v1 shows:
Code:
root@proxmox106:~# multipath -v1
Jul 20 15:29:16 | libdevmapper: ioctl/libdm-iface.c(1927): device-mapper: reload ioctl on sistema (253:98) failed: Device or resource busy
Jul 20 15:29:16 | libdevmapper: ioctl/libdm-iface.c(1927): device-mapper: reload ioctl on vol1 (253:98) failed: Device or resource busy
Jul 20 15:29:16 | libdevmapper: ioctl/libdm-iface.c(1927): device-mapper: reload ioctl on vol2 (253:98) failed: Device or resource busy
Jul 20 15:29:16 | libdevmapper: ioctl/libdm-iface.c(1927): device-mapper: reload ioctl on volssd (253:98) failed: Device or resource busy
Jul 20 15:29:17 | libdevmapper: ioctl/libdm-iface.c(1927): device-mapper: reload ioctl on sistema (253:98) failed: Device or resource busy
Jul 20 15:29:17 | libdevmapper: ioctl/libdm-iface.c(1927): device-mapper: reload ioctl on vol1 (253:98) failed: Device or resource busy
Jul 20 15:29:18 | libdevmapper: ioctl/libdm-iface.c(1927): device-mapper: reload ioctl on vol2 (253:98) failed: Device or resource busy
Jul 20 15:29:18 | libdevmapper: ioctl/libdm-iface.c(1927): device-mapper: reload ioctl on volssd (253:98) failed: Device or resource busy
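Every failed reload above points at the same device-mapper minor (253:98), which suggests an existing map is already sitting on that node. A small sketch (my assumption; it only reads the standard Linux sysfs layout, so no extra tools or root are needed) to see which dm device that is:

```shell
# Print each device-mapper node with its major:minor and dm name so the
# map occupying 253:98 can be identified. Reads sysfs directly; prints
# nothing at all if no device-mapper devices exist on the machine.
for d in /sys/block/dm-*; do
    [ -e "$d" ] || continue            # glob did not match anything
    printf '%s %s %s\n' "${d##*/}" "$(cat "$d/dev")" "$(cat "$d/dm/name")"
done
```
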

pveversion -v

Code:
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-9 (running version: 7.0-9/228c9caa)
pve-kernel-helper: 7.0-4
pve-kernel-5.11: 7.0-3
pve-kernel-5.4: 6.4-4
pve-kernel-5.11.22-1-pve: 5.11.22-2
pve-kernel-5.4.124-1-pve: 5.4.124-1
ceph-fuse: 14.2.21-1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-4
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-9
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-2
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.4-1
proxmox-backup-file-restore: 2.0.4-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-4
pve-cluster: 7.0-3
pve-container: 4.0-8
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-10
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1

/etc/multipath.conf

Code:
##
## multipath-tools configuration file /etc/multipath.conf
##
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^(hd|xvd)[a-z][[0-9]*]"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}
blacklist_exceptions {
        property "(SCSI_IDENT_.*|ID_WWN|ID_SERIAL)"
        wwid "22250000155dcef22"
        wwid "222d9000155080015"
        wwid "222640001555bdbf2"
        wwid "22298000155c08ddd"
}
defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
        rr_min_io               100
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}
devices {
        device {
                vendor                  "Intel"
                product                 "Multi-Flex"
                uid_attribute           "ID_SERIAL"
                path_grouping_policy    "group_by_prio"
                prio                    "alua"
                path_checker            tur
                hardware_handler        "1 alua"
                rr_min_io               100
                failback                immediate
                no_path_retry           queue
                rr_weight               uniform
                product_blacklist       "VTrak V-LUN"
        }
}
multipaths {
       multipath {
               wwid                    22250000155dcef22
               alias                   sistema
       }
       multipath {
               wwid                    222d9000155080015
               alias                   vol1
       }
       multipath {
               wwid                    222640001555bdbf2
               alias                   vol2
       }
       multipath {
               wwid                    22298000155c08ddd
               alias                   volssd
       }
}
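One thing worth checking (an assumption on my part, not a confirmed cause): the `getuid_callout` option in the `defaults` section was removed from multipath-tools a while ago, and the 0.8.x version shipped with Debian Bullseye / Proxmox VE 7 no longer parses it. Since `uid_attribute` already replaces it in the `devices` section, the `defaults` section could be slimmed down to something like:

```
defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        uid_attribute           ID_SERIAL
        rr_min_io               100
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}
```

That said, the -v3 output below shows the paths being identified correctly, so a stale option alone is unlikely to explain the EBUSY errors.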

How can I resolve this?
 
multipath -v3 shows:
Code:
Jul 20 15:39:12 | set open fds limit to 1048576/1048576
Jul 20 15:39:12 | loading //lib/multipath/libchecktur.so checker
Jul 20 15:39:12 | checker tur: message table size = 3
Jul 20 15:39:12 | loading //lib/multipath/libprioconst.so prioritizer
Jul 20 15:39:12 | _init_foreign: foreign library "nvme" is not enabled
Jul 20 15:39:12 | sda: size = 41943040
Jul 20 15:39:12 | sda: vendor = Intel
Jul 20 15:39:12 | sda: product = Multi-Flex
Jul 20 15:39:12 | sda: rev = 0306
Jul 20 15:39:12 | find_hwe: found 2 hwtable matches for Intel:Multi-Flex:0306
Jul 20 15:39:12 | sda: h:b:t:l = 0:0:0:0
Jul 20 15:39:12 | sda: tgt_node_name =
Jul 20 15:39:12 | sda: 2610 cyl, 255 heads, 63 sectors/track, start at 0
Jul 20 15:39:12 | sda: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Jul 20 15:39:12 | sda: serial = 4C2020200000000000000000F089D395C4E4D4C4
Jul 20 15:39:12 | sda: detect_checker = yes (setting: multipath internal)
Jul 20 15:39:12 | sda: path_checker = tur (setting: storage device autodetected)
Jul 20 15:39:12 | sda: checker timeout = 30 s (setting: kernel sysfs)
Jul 20 15:39:12 | sda: tur state = up
Jul 20 15:39:12 | sda: uid_attribute = ID_SERIAL (setting: storage device configuration)
Jul 20 15:39:12 | sda: uid = 22250000155dcef22 (udev)
Jul 20 15:39:12 | sda: wwid 22250000155dcef22 whitelisted
Jul 20 15:39:12 | sda: detect_prio = yes (setting: multipath internal)
Jul 20 15:39:12 | loading //lib/multipath/libprioalua.so prioritizer
Jul 20 15:39:12 | sda: prio = alua (setting: storage device autodetected)
Jul 20 15:39:12 | sda: prio args = "" (setting: storage device autodetected)
Jul 20 15:39:12 | sda: reported target port group is 0
Jul 20 15:39:12 | sda: aas = 80 [active/optimized] [preferred]
Jul 20 15:39:12 | sda: alua prio = 50
Jul 20 15:39:12 | sdb: size = 2726297600
Jul 20 15:39:12 | sdb: vendor = Intel
Jul 20 15:39:12 | sdb: product = Multi-Flex
Jul 20 15:39:12 | sdb: rev = 0306
Jul 20 15:39:12 | find_hwe: found 2 hwtable matches for Intel:Multi-Flex:0306
Jul 20 15:39:12 | sdb: h:b:t:l = 0:0:0:1
Jul 20 15:39:12 | sdb: tgt_node_name =
Jul 20 15:39:12 | sdb: 38632 cyl, 255 heads, 63 sectors/track, start at 0
Jul 20 15:39:12 | sdb: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Jul 20 15:39:12 | sdb: serial = 4C2020200000000000000000EB9D7CBB37986BC0
Jul 20 15:39:12 | sdb: detect_checker = yes (setting: multipath internal)
Jul 20 15:39:12 | sdb: path_checker = tur (setting: storage device autodetected)
Jul 20 15:39:12 | sdb: checker timeout = 30 s (setting: kernel sysfs)
Jul 20 15:39:12 | sdb: tur state = up
Jul 20 15:39:12 | sdb: uid_attribute = ID_SERIAL (setting: storage device configuration)
Jul 20 15:39:12 | sdb: uid = 222d9000155080015 (udev)
Jul 20 15:39:12 | sdb: wwid 222d9000155080015 whitelisted
Jul 20 15:39:12 | sdb: detect_prio = yes (setting: multipath internal)
Jul 20 15:39:12 | sdb: prio = alua (setting: storage device autodetected)
Jul 20 15:39:12 | sdb: prio args = "" (setting: storage device autodetected)
Jul 20 15:39:12 | sdb: reported target port group is 0
Jul 20 15:39:12 | sdb: aas = 80 [active/optimized] [preferred]
Jul 20 15:39:12 | sdb: alua prio = 50
...
Jul 20 15:39:12 | sdh: h:b:t:l = 0:0:1:3
Jul 20 15:39:12 | sdh: tgt_node_name =
Jul 20 15:39:12 | sdh: 13365 cyl, 255 heads, 63 sectors/track, start at 0
Jul 20 15:39:12 | sdh: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Jul 20 15:39:12 | sdh: serial = 4C2020200000000000000000A662B619A553B6EB
Jul 20 15:39:12 | sdh: detect_checker = yes (setting: multipath internal)
Jul 20 15:39:12 | sdh: path_checker = tur (setting: storage device autodetected)
Jul 20 15:39:12 | sdh: checker timeout = 30 s (setting: kernel sysfs)
Jul 20 15:39:12 | sdh: tur state = up
Jul 20 15:39:12 | sdh: uid_attribute = ID_SERIAL (setting: storage device configuration)
Jul 20 15:39:12 | sdh: uid = 22298000155c08ddd (udev)
Jul 20 15:39:12 | sdh: wwid 22298000155c08ddd whitelisted
Jul 20 15:39:12 | sdh: detect_prio = yes (setting: multipath internal)
Jul 20 15:39:12 | sdh: prio = alua (setting: storage device autodetected)
Jul 20 15:39:12 | sdh: prio args = "" (setting: storage device autodetected)
Jul 20 15:39:12 | sdh: reported target port group is 1
Jul 20 15:39:12 | sdh: aas = 02 [standby]
Jul 20 15:39:12 | sdh: alua prio = 1
Jul 20 15:39:12 | loop0: device node name blacklisted
...
Jul 20 15:39:12 | dm-97: device node name blacklisted
===== paths list =====
uuid              hcil    dev dev_t pri dm_st chk_st vend/prod/rev    dev_st
22250000155dcef22 0:0:0:0 sda 8:0   50  undef undef  Intel,Multi-Flex unknown
222d9000155080015 0:0:0:1 sdb 8:16  50  undef undef  Intel,Multi-Flex unknown
222640001555bdbf2 0:0:0:2 sdc 8:32  1   undef undef  Intel,Multi-Flex unknown
22298000155c08ddd 0:0:0:3 sdd 8:48  50  undef undef  Intel,Multi-Flex unknown
22250000155dcef22 0:0:1:0 sde 8:64  1   undef undef  Intel,Multi-Flex unknown
222d9000155080015 0:0:1:1 sdf 8:80  1   undef undef  Intel,Multi-Flex unknown
222640001555bdbf2 0:0:1:2 sdg 8:96  50  undef undef  Intel,Multi-Flex unknown
22298000155c08ddd 0:0:1:3 sdh 8:112 1   undef undef  Intel,Multi-Flex unknown
Jul 20 15:39:12 | libdevmapper version 1.02.175 (2021-01-08)
Jul 20 15:39:12 | DM multipath kernel driver v1.14.0
Jul 20 15:39:12 | sda: udev property ID_SERIAL whitelisted
Jul 20 15:39:12 | sda: wwid 22250000155dcef22 whitelisted
Jul 20 15:39:12 | found wwid 22250000155dcef22 in wwids file, multipathing sda
Jul 20 15:39:12 | 22250000155dcef22: alias = sistema (setting: multipath.conf multipaths section)
Jul 20 15:39:12 | sda: tur state = up
Jul 20 15:39:12 | sda: reported target port group is 0
Jul 20 15:39:12 | sda: aas = 80 [active/optimized] [preferred]
Jul 20 15:39:12 | sda: ownership set to sistema
Jul 20 15:39:12 | sde: tur state = up
Jul 20 15:39:12 | sde: reported target port group is 1
Jul 20 15:39:12 | sde: aas = 02 [standby]
Jul 20 15:39:12 | sde: ownership set to sistema
Jul 20 15:39:12 | sistema: failback = "immediate" (setting: storage device configuration)
Jul 20 15:39:12 | sistema: path_grouping_policy = group_by_prio (setting: storage device configuration)
Jul 20 15:39:12 | sistema: path_selector = "round-robin 0" (setting: multipath.conf defaults/devices section)
Jul 20 15:39:12 | sistema: no_path_retry = "queue" (setting: storage device configuration)
Jul 20 15:39:12 | sistema: retain_attached_hw_handler = yes (setting: implied in kernel >= 4.3.0)
Jul 20 15:39:12 | sistema: features = "0" (setting: multipath internal)
Jul 20 15:39:12 | sistema: hardware_handler = "1 alua" (setting: storage device configuration)
Jul 20 15:39:12 | sistema: rr_weight = "uniform" (setting: storage device configuration)
Jul 20 15:39:12 | sistema: minio = 1 (setting: multipath internal)
Jul 20 15:39:12 | sistema: fast_io_fail_tmo = 5 (setting: multipath internal)
Jul 20 15:39:12 | sistema: deferred_remove = no (setting: multipath internal)
Jul 20 15:39:12 | sistema: marginal_path_err_sample_time = "no" (setting: multipath internal)
Jul 20 15:39:12 | sistema: marginal_path_err_rate_threshold = "no" (setting: multipath internal)
Jul 20 15:39:12 | sistema: marginal_path_err_recheck_gap_time = "no" (setting: multipath internal)
Jul 20 15:39:12 | sistema: marginal_path_double_failed_time = "no" (setting: multipath internal)
Jul 20 15:39:12 | sistema: san_path_err_threshold = "no" (setting: multipath internal)
Jul 20 15:39:12 | sistema: san_path_err_forget_rate = "no" (setting: multipath internal)
Jul 20 15:39:12 | sistema: san_path_err_recovery_time = "no" (setting: multipath internal)
Jul 20 15:39:12 | sistema: skip_kpartx = no (setting: multipath internal)
Jul 20 15:39:12 | sistema: ghost_delay = "no" (setting: multipath.conf defaults/devices section)
Jul 20 15:39:12 | sistema: flush_on_last_del = no (setting: multipath internal)
Jul 20 15:39:12 | sistema: setting dev_loss_tmo is unsupported for protocol scsi:unspec
Jul 20 15:39:12 | sistema: set ACT_CREATE (map does not exist)
Jul 20 15:39:12 | sistema: addmap [0 41943040 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 1 1 8:0 1 round-robin 0 1 1 8:64 1]
Jul 20 15:39:12 | libdevmapper: ioctl/libdm-iface.c(1927): device-mapper: reload ioctl on sistema (253:98) failed: Device or resource busy
Jul 20 15:39:12 | dm_addmap: libdm task=0 error: Success
Jul 20 15:39:12 | sistema: failed to load map, error 16
Jul 20 15:39:12 | sistema: domap (0) failure for create/reload map
Jul 20 15:39:12 | sistema: ignoring map
Jul 20 15:39:12 | sda: orphan path, map removed internally
Jul 20 15:39:12 | sde: orphan path, map removed internally
...
...
Jul 20 15:39:14 | volssd: set ACT_CREATE (map does not exist)
Jul 20 15:39:14 | volssd: addmap [0 4426060268 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 1 1 8:48 1 round-robin 0 1 1 8:112 1]
Jul 20 15:39:14 | libdevmapper: ioctl/libdm-iface.c(1927): device-mapper: reload ioctl on volssd (253:98) failed: Device or resource busy
Jul 20 15:39:15 | dm_addmap: libdm task=0 error: Success
Jul 20 15:39:15 | volssd: failed to load map, error 16
Jul 20 15:39:15 | volssd: domap (0) failure for create/reload map
Jul 20 15:39:15 | volssd: ignoring map
Jul 20 15:39:15 | sdd: orphan path, map removed internally
Jul 20 15:39:15 | sdh: orphan path, map removed internally
Jul 20 15:39:15 | unloading alua prioritizer
Jul 20 15:39:15 | unloading const prioritizer
Jul 20 15:39:15 | unloading tur checker
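Error 16 (EBUSY) on the reload ioctl usually means something else already holds the underlying path devices, so device-mapper cannot claim them for the multipath map. A common culprit after an upgrade to Proxmox VE 7 (an assumption here, not confirmed for this machine) is LVM auto-activating logical volumes directly on the raw sd* paths at boot, before multipathd gets to them; the `dm-97` entry in the log suggests there are plenty of dm devices around. A quick sketch to check what holds each path (reads sysfs only, no root required):

```shell
# For each SCSI path device, list the contents of its sysfs "holders"
# directory; a non-empty listing means another dm device (e.g. an LVM PV
# or a kpartx partition mapping) has claimed the path, which makes
# multipath's reload ioctl fail with EBUSY. Prints nothing when no sd*
# devices exist on the machine.
for dev in /sys/block/sd*; do
    [ -e "$dev" ] || continue          # glob did not match anything
    printf '%s: ' "${dev##*/}"
    ls "$dev/holders" | tr '\n' ' '
    echo
done
```

If the holders turn out to be LVM volumes, a frequently suggested remedy is restricting LVM scanning to the multipath devices via `global_filter` in `/etc/lvm/lvm.conf` (for example `global_filter = [ "a|/dev/mapper/|", "r|/dev/sd.*|" ]`), followed by `update-initramfs -u` and a reboot. Check carefully which device the root filesystem lives on first (`sistema` looks like a system LUN) before rejecting any paths.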
 
Hi, has anyone found an explanation for this problem? I can't find any information on how to fix it.
 
Hello,

I have almost the same issue. Did you find a solution to this problem?

Regards,
p.
 
