[SOLVED] Cannot activate LVs in VG iscsi while PVs appear on duplicate devices.

Adam Smith

Active Member
Feb 1, 2018
  • Two-node cluster: (A) Proxmox 5.0 and (B) Proxmox 5.1
  • One shared iSCSI with LVM
  • VM disks on iSCSI
I get the error in the subject line when trying to migrate an online VM or start a migrated VM.

What I've discovered is that on node A the iSCSI device is sdc and its LVM is sdd, while on node B it's just the opposite, as indicated by the message on node B:

Code:
 WARNING: PV J84yRW-jQWV-kcOQ-T3oy-cR1Y-Ntec-P3iNze on /dev/sdd was already found on /dev/sdc.
 WARNING: PV J84yRW-jQWV-kcOQ-T3oy-cR1Y-Ntec-P3iNze prefers device /dev/sdc because device was seen first.

How do I create a satisfactory configuration?
 
It seems you have multipathed devices, is that right? If so, you need to install and set up multipath-tools and use the multipathed devices for LVM.
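
For reference, a minimal sketch of those steps on a Debian/Proxmox node; the multipath.conf contents below are only an illustration of common defaults, not something taken from this thread:

Code:
# install the multipath tools (brings in the multipathd daemon)
apt install multipath-tools

# optional: a very small /etc/multipath.conf; many arrays work with the defaults
cat > /etc/multipath.conf <<'EOF'
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
EOF

systemctl restart multipathd

# the duplicate sdX paths should now be grouped under one /dev/mapper device
multipath -ll
lsblk

The LVM storage should then point at the /dev/mapper device rather than at sdc or sdd directly.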
 
Thank you for the reply.

I'm not intentionally or knowingly using multipath... my iSCSI SAN is connected to two different VLANs through the same switch: 100 for management and 110 for storage. Only node A was set up with the iSCSI device, through its Ethernet port on VLAN 110. When I added node B to the cluster, it adopted the device, but it seems to have found the disks in reverse order?

Could it be that node B somehow found the iSCSI device through VLAN 100? Is there a way to tell whether I have multiple paths (EDIT: or, more specifically, whether each node is using a different path)?

EDIT: Reading up on multipaths...
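
A few commands that should answer that, in case anyone finds this later; a sketch using standard open-iscsi and multipath tooling, device names are illustrative:

Code:
# list the active iSCSI sessions and which portal each one uses
iscsiadm -m session -P 1

# show which transport path sits behind each sdX block device
ls -l /dev/disk/by-path/ | grep iscsi

# with multipath-tools installed, duplicate paths are grouped explicitly
multipath -ll

If the same target IQN shows up in two sessions (for example once per VLAN), the node really does have two paths to the LUN.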
 
Um, wow. All it seemed to take was installing multipath-tools; I did nothing else. My migrated VM started straight away after that, and live migration appears to be working as well. Thanks for pointing me in the right direction!
 
multipath is behaving very strangely...

root@pm-node6:~# multipath -ll
36019cbd1e21024f0ccde35030000807f dm-25 EQLOGIC,100E-00
size=4.0T features='1 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 8:0:0:0 sdd 8:48 active ready running
360000000000000000e00000000010001 dm-13 IET,VIRTUAL-DISK
size=11T features='1 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
`- 9:0:0:1 sde 8:64 active ready running
368fc61d60c82a3f59b8005c21e05005b dm-24 EQLOGIC,100E-00
size=8.4T features='1 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 7:0:0:0 sdc 8:32 active ready running
368fc61d60ce225b11b91c59098d369bb dm-23 EQLOGIC,100E-00
size=9.6T features='1 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 6:0:0:0 sdb 8:16 active ready running


WARNING: PV ivzQIG-kzJQ-m69n-Z2Gw-yYnD-q2cn-01JBxU on /dev/mapper/368fc61d60ce225b11b91c59098d369bb was already found on /dev/sdb.
WARNING: PV RtYTin-DjWo-CtUG-M1W9-yn59-kDxt-7rL4QO on /dev/sdc was already found on /dev/mapper/368fc61d60c82a3f59b8005c21e05005b.
WARNING: PV e7UeKE-l0dj-lSv5-e7XV-gC6e-QMV4-Gkvcp3 on /dev/sdd was already found on /dev/mapper/36019cbd1e21024f0ccde35030000807f.
WARNING: PV ivzQIG-kzJQ-m69n-Z2Gw-yYnD-q2cn-01JBxU prefers device /dev/sdb because device is used by LV.
WARNING: PV RtYTin-DjWo-CtUG-M1W9-yn59-kDxt-7rL4QO prefers device /dev/sdc because device is used by LV.
WARNING: PV e7UeKE-l0dj-lSv5-e7XV-gC6e-QMV4-Gkvcp3 prefers device /dev/sdd because device is used by LV.
WARNING: PV ivzQIG-kzJQ-m69n-Z2Gw-yYnD-q2cn-01JBxU on /dev/mapper/368fc61d60ce225b11b91c59098d369bb was already found on /dev/sdb.
WARNING: PV RtYTin-DjWo-CtUG-M1W9-yn59-kDxt-7rL4QO on /dev/sdc was already found on /dev/mapper/368fc61d60c82a3f59b8005c21e05005b.
WARNING: PV e7UeKE-l0dj-lSv5-e7XV-gC6e-QMV4-Gkvcp3 on /dev/sdd was already found on /dev/mapper/36019cbd1e21024f0ccde35030000807f.
WARNING: PV ivzQIG-kzJQ-m69n-Z2Gw-yYnD-q2cn-01JBxU prefers device /dev/sdb because device is used by LV.
WARNING: PV RtYTin-DjWo-CtUG-M1W9-yn59-kDxt-7rL4QO prefers device /dev/sdc because device is used by LV.
WARNING: PV e7UeKE-l0dj-lSv5-e7XV-gC6e-QMV4-Gkvcp3 prefers device /dev/sdd because device is used by LV.

TASK ERROR: can't activate LV '/dev/cold/vm-185-disk-0': Cannot activate LVs in VG cold while PVs appear on duplicate devices.
 
2019-12-18 18:57:01 can't activate LV '/dev/cold/vm-131-disk-0': Cannot activate LVs in VG cold while PVs appear on duplicate devices.
 
proxmox-ve: 5.4-1 (running kernel: 4.15.18-12-pve)
pve-manager: 5.4-3 (running version: 5.4-3/0a6eaa62)
pve-kernel-4.15: 5.3-3
pve-kernel-4.15.18-12-pve: 4.15.18-35
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.10.17-4-pve: 4.10.17-24
pve-kernel-4.10.17-2-pve: 4.10.17-20
pve-kernel-4.10.15-1-pve: 4.10.15-15
pve-kernel-4.4.67-1-pve: 4.4.67-92
pve-kernel-4.4.59-1-pve: 4.4.59-87
pve-kernel-4.4.49-1-pve: 4.4.49-86
pve-kernel-4.4.44-1-pve: 4.4.44-84
pve-kernel-4.4.40-1-pve: 4.4.40-82
pve-kernel-4.4.21-1-pve: 4.4.21-71
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-50
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-41
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
openvswitch-switch: 2.7.0-3
proxmox-widget-toolkit: 1.0-25
pve-cluster: 5.0-36
pve-container: 2.0-37
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-19
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 2.12.1-3
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-50
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
 
I have the same problem after upgrading Proxmox from 5.0 to 5.4-13 on one node of the cluster. Neither a reboot nor installing multipath-tools helped; multipath was already in use before the upgrade.

My investigation suggests there is a mess in pvs and vgs. I can see that PV /dev/sdc is mapped to VG vmfs-raid5-15k, but the multipath device /dev/mapper/3600000e00d110000001111ac00000000 should be used instead. The same goes for PV /dev/sdd (should be /dev/mapper/3600000e00d110000001111ac00020000) and /dev/sde (should be /dev/mapper/3600000e00d110000001111ac00010000).

root@pve1:~# pvs
WARNING: PV LCSzb7-ba0b-yLpH-wriG-KGLs-fMy9-e5YCsz on /dev/vmfs-raid5-15k/vm-211-disk-2 was already found on /dev/vmfs-raid10-15k/vm-211-disk-1.
WARNING: PV Xp8kxp-RKFp-ASar-SUOs-z30b-gX4b-nWBwW1 on /dev/mapper/3600000e00d110000001111ac00000000 was already found on /dev/sdc.
WARNING: PV wThchH-2IWs-QRyM-q2dt-flLV-PeA6-0A1qIQ on /dev/sdd was already found on /dev/mapper/3600000e00d110000001111ac00020000.
WARNING: PV d9hgJh-xa0b-Oyzz-3P70-SjQy-laVx-Q7BEbN on /dev/sde was already found on /dev/mapper/3600000e00d110000001111ac00010000.
WARNING: PV LCSzb7-ba0b-yLpH-wriG-KGLs-fMy9-e5YCsz prefers device /dev/vmfs-raid10-15k/vm-211-disk-1 because device was seen first.
WARNING: PV Xp8kxp-RKFp-ASar-SUOs-z30b-gX4b-nWBwW1 prefers device /dev/sdc because device is used by LV.
WARNING: PV wThchH-2IWs-QRyM-q2dt-flLV-PeA6-0A1qIQ prefers device /dev/sdd because device is used by LV.
WARNING: PV d9hgJh-xa0b-Oyzz-3P70-SjQy-laVx-Q7BEbN prefers device /dev/sde because device is used by LV.
PV VG Fmt Attr PSize PFree
/dev/mapper/3600000e00d110000001111ac00030000 backup2 lvm2 a-- 1000.00g 1000.00g
/dev/mapper/3600000e00d110000001111ac00050000 backups1 lvm2 a-- 1000.00g 1000.00g
/dev/sda3 pve lvm2 a-- 255.75g 15.83g

/dev/sdc vmfs-raid5-15k lvm2 a-- 1.60t 548.00g
/dev/sdd vmfs-raid5-7k2 lvm2 a-- 2.37t 464.00g
/dev/sde vmfs-raid10-15k lvm2 a-- 1.07t 862.00g

/dev/vmfs-raid10-15k/vm-211-disk-1 postgres_data-vg lvm2 a-- 10.00g 0
/dev/vmfs-raid5-7k2/vm-103-disk-1 pvevm26-pxvebackup-vg lvm2 a-- 200.00g 0
/dev/vmfs-raid5-7k2/vm-103-disk-2 dockershare lvm2 a-- 10.00g 96.00m
/dev/vmfs-raid5-7k2/vm-106-disk-2 pvevm27-vg-data lvm2 a-- 500.00g 0
/dev/vmfs-raid5-7k2/vm-213-disk-2 sqldata-vg lvm2 a-- 10.00g 508.00m
/dev/vmfs-raid5-7k2/vm-213-disk-3 sqllog-vg lvm2 a-- 5.00g 508.00m
/dev/vmfs-raid5-7k2/vm-219-disk-2 Docker-registry-data-vg lvm2 a-- 100.00g 96.00m


I can also see that all LVs are in the AVAILABLE state, but they shouldn't be, because there are no VMs on this Proxmox node.
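
For reference, a sketch of the lvm2 commands that show this state and how the stray LVs could be deactivated; the VG name is taken from the pvs output above, everything else is stock lvm2:

Code:
# which device each PV was taken from, and how much space is in use
pvs -o pv_name,vg_name,pv_size,pv_free

# activation state of every LV plus the devices it maps to
lvs -o lv_name,vg_name,lv_active,devices

# if the LVs really should not be active on this node, deactivate the VG
vgchange -an vmfs-raid5-15k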

Any idea how to fix it?
 
Normally, at least in the past, the multipath driver blocks access to its backing devices so that you cannot use them directly. Maybe this has changed?

What we normally do is set the LVM filter to restrict which devices are scanned for LVM signatures, so that you don't find additional LVMs on consecutive scans if you have, for example, an LVM on top of an LVM. We normally allow sda, which is always the boot device, and our named multipath devices. The link I provided has several examples, including local storage + SAN.
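
A sketch of what that can look like in /etc/lvm/lvm.conf, assuming the boot disk is sda and the SAN LUNs should only be touched through their /dev/mapper names; adjust the patterns to your own layout:

Code:
# /etc/lvm/lvm.conf, devices section
devices {
    # accept the local boot disk and multipath devices, reject everything else,
    # so the raw sdX paths behind multipath are never scanned for PVs
    filter = [ "a|^/dev/sda[0-9]*$|", "a|^/dev/mapper/|", "r|.*|" ]
}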
 
@LnxBil Thank you for the nice article about LVM and multipathing.

I have noticed that if I reboot the node, it sometimes works correctly (multipath is used) and sometimes it doesn't. It looks like my problem is not related to the Proxmox update at all.

I have the default LVM filter, which accepts everything. So I should accept only /dev/sda and the multipath devices, shouldn't I? The filter = [ "a|^/dev/sda[1-9]$|", "a|^/dev/mapper/*|", "r|^/dev/*|" ] from the example could be a good starting point.
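
If it helps anyone else, the sequence I would expect to use to apply and verify such a filter; the initramfs step matters because the root PV is also scanned from the initramfs (a sketch, not taken from the linked article):

Code:
# edit filter (and global_filter, if lvmetad is in use) in /etc/lvm/lvm.conf
nano /etc/lvm/lvm.conf

# rebuild the initramfs so early boot uses the same filter
update-initramfs -u -k all

# after a reboot, only /dev/mapper/... devices should show up as PVs
pvs
lvs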
 
That's because of time sync. If two nodes' clocks are offset too far from each other, this problem occurs.
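
If you want to rule that out, the offset is quick to check on each node; a sketch using the standard tools (whichever of systemd-timesyncd or chrony the node actually runs):

Code:
# shows the local time and whether the system clock is NTP-synchronized
timedatectl

# with chrony installed, reports the measured offset to the time sources
chronyc tracking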
 
