Proxmox VE and hardware RAID

mitol007

New Member
Mar 2, 2021
Hello,
first of all, I apologize for my English, but I hope it won't be too bad :)

I am a longtime VMware user and I have decided to move my environment to Proxmox VE, and I have run into a problem with storage.

Hardware:
1x Fujitsu PRIMERGY RX2540 M5
2x Fujitsu PRIMERGY RX200 S7
Array:
Hitachi VSP E590 with deduplication and compression

In Proxmox VE I created a cluster and added a physical server; that part is fine. Then I mapped a LUN from the array as LVM using multipathing.
The size on the array is 41 TB, but in Proxmox VE it is 45 TB. Where does the difference come from?

In addition, after installing test virtual machines and filling them with data, they did not come back up after a reboot; the error message was: Proxmox cannot activate LVs in VG while PVs appear on duplicate devices.

Where could I have gone wrong? Could the array's deduplication and compression be the problem?
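One way to cross-check what the host itself sees as the raw LUN size is to read the device size in bytes and convert it to both binary and decimal units; a small sketch (the multipath device path is an example and depends on the alias in use):

Code:
# raw size of the multipath device in bytes (device path is an example)
blockdev --getsize64 /dev/mapper/mpath0_E590
# the same value in binary TiB and in decimal TB, for comparison
echo "scale=2; $(blockdev --getsize64 /dev/mapper/mpath0_E590) / 1024^4" | bc -l   # TiB
echo "scale=2; $(blockdev --getsize64 /dev/mapper/mpath0_E590) / 1000^4" | bc -l   # TB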
 
Please provide the output of pveversion -v, pvs, vgs, lvs, multipath -ll, cat /etc/multipath.conf and cat /etc/multipath/wwids.
 
pveversion -v
Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-4 (running version: 7.1-4/ca457116)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.13.19-1-pve: 5.13.19-2
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.14-1
proxmox-backup-file-restore: 2.0.14-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.4-2
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-1
pve-qemu-kvm: 6.1.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-3
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3


pvs
Code:
PV                      VG              Fmt  Attr PSize    PFree
  /dev/mapper/mpath0_E590 vg_E590_volume0 lvm2 a--   <41.26t   41.16t
  /dev/sda3               pve             lvm2 a--  <892.75g       0
  /dev/sdc                                lvm2 ---  <630.64g <630.64g

vgs
Code:
VG              #PV #LV #SN Attr   VSize    VFree
  pve               1   2   0 wz--n- <892.75g     0
  vg_E590_volume0   1   1   0 wz--n-  <41.26t 41.16t

lvs
Code:
LV            VG              Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root          pve             -wi-ao---- <884.75g
  swap          pve             -wi-ao----    8.00g
  vm-100-disk-0 vg_E590_volume0 -wi-ao----  100.00g

multipath -ll
Code:
mpath0_E590 (360060e8021a039005060a03900000000) dm-2 HITACHI,OPEN-V
size=41T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 3:0:1:0  sdh 8:112 active ready running
  `- 12:0:1:0 sdo 8:224 active ready running

/etc/multipath.conf
Code:
blacklist {
        wwid .*
}

blacklist_exceptions {
        wwid "360060e8021a039005060a03900000000"
}

multipaths {
        multipath {
                wwid "360060e8021a039005060a03900000000"
                alias mpath0_E590
        }
}

defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        uid_attribute           ID_SERIAL
        rr_min_io               100
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}

/etc/multipath/wwids
Code:
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/360060e8021a039005060a03900000000/
 
Thank you for the output.

proxmox cannot activate lvs in vg while pvs appear on duplicate devices.

This message usually appears when multipath is not configured correctly.
How is the array connected? iSCSI or FC?

Please also provide the output of ls -l /dev/disk/by-id and lsblk --ascii
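In the meantime, one way to spot whether LVM is picking up the same physical volume on more than one block device is to list the PVs together with their UUIDs; a quick sketch (LVM also prints duplicate-PV warnings on its own when this happens):

Code:
# the same PV UUID showing up on two device nodes (e.g. a raw /dev/sdX path
# and its /dev/mapper/... multipath device) points to a duplicate
pvs -o pv_name,pv_uuid,vg_name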
 
The array is connected via FC.

ls -l /dev/disk/by-id
Code:
total 0
lrwxrwxrwx 1 root root 10 Dec 10 14:55 dm-name-mpath0_E590 -> ../../dm-2
lrwxrwxrwx 1 root root 10 Dec 10 14:26 dm-name-pve-root -> ../../dm-1
lrwxrwxrwx 1 root root 10 Dec 10 14:26 dm-name-pve-swap -> ../../dm-0
lrwxrwxrwx 1 root root 10 Dec 10 14:55 dm-name-vg_E590_volume0-vm--100--disk--0 -> ../../dm-3
lrwxrwxrwx 1 root root 10 Dec 10 14:55 dm-uuid-LVM-6TBh4DaTOShH4iW7rxOfgnGWdfJxfdr8BXLcbJA7h3nyOk17N7ucGDtcuFOaefmU -> ../../dm-3
lrwxrwxrwx 1 root root 10 Dec 10 14:26 dm-uuid-LVM-nc5oFEa9He7RK6xiOc4RpGjQd6LhgG9ZhmaQfb05uekuMDb10obVRU55Q2tcHJWO -> ../../dm-1
lrwxrwxrwx 1 root root 10 Dec 10 14:26 dm-uuid-LVM-nc5oFEa9He7RK6xiOc4RpGjQd6LhgG9ZlWXfwfORS8JxCMgGZfXip3HVumo1HSeQ -> ../../dm-0
lrwxrwxrwx 1 root root 10 Dec 10 14:55 dm-uuid-mpath-360060e8021a039005060a03900000000 -> ../../dm-2
lrwxrwxrwx 1 root root  9 Dec 10 14:26 lvm-pv-uuid-4CcZCc-jBHY-Jh9U-pDQU-mSWB-uEKG-9CUVT5 -> ../../sdj
lrwxrwxrwx 1 root root 10 Dec 10 14:26 lvm-pv-uuid-X83ScY-Yh5Y-Apvy-YwGd-5u3l-Vd1E-nyPnZh -> ../../sda3
lrwxrwxrwx 1 root root 10 Dec 10 14:55 lvm-pv-uuid-ybPmjc-trQ0-d28C-gpw6-hd3B-sRcJ-j09Tji -> ../../dm-2
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-1HITACHI_5060A0390000 -> ../../sdo
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-1HITACHI_912560920000 -> ../../sdi
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-1HITACHI_912560920000-part1 -> ../../sdi1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-1HITACHI_912560920001 -> ../../sdj
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-1HITACHI_912560920002 -> ../../sdk
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-1HITACHI_912560920002-part1 -> ../../sdk1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-1HITACHI_912560920003 -> ../../sdl
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-1HITACHI_912560920003-part1 -> ../../sdl1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-1HITACHI_912560920004 -> ../../sdm
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-1HITACHI_912560920004-part1 -> ../../sdm1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-1HITACHI_912560920008 -> ../../sdn
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-1HITACHI_912560920008-part1 -> ../../sdn1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-36003005711d8435029291de017892f49 -> ../../sda
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-36003005711d8435029291de017892f49-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-36003005711d8435029291de017892f49-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-36003005711d8435029291de017892f49-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-360060e80105ec8c00570751c00000000 -> ../../sdi
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-360060e80105ec8c00570751c00000000-part1 -> ../../sdi1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-360060e80105ec8c00570751c00000001 -> ../../sdj
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-360060e80105ec8c00570751c00000002 -> ../../sdk
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-360060e80105ec8c00570751c00000002-part1 -> ../../sdk1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-360060e80105ec8c00570751c00000003 -> ../../sdl
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-360060e80105ec8c00570751c00000003-part1 -> ../../sdl1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-360060e80105ec8c00570751c00000004 -> ../../sdm
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-360060e80105ec8c00570751c00000004-part1 -> ../../sdm1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-360060e80105ec8c00570751c00000008 -> ../../sdn
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-360060e80105ec8c00570751c00000008-part1 -> ../../sdn1
lrwxrwxrwx 1 root root 10 Dec 10 14:55 scsi-360060e8021a039005060a03900000000 -> ../../dm-2
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-SFTS_PRAID_EP420i_00492f8917e01d29295043d811570003 -> ../../sda
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-SFTS_PRAID_EP420i_00492f8917e01d29295043d811570003-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-SFTS_PRAID_EP420i_00492f8917e01d29295043d811570003-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-SFTS_PRAID_EP420i_00492f8917e01d29295043d811570003-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-SHITACHI_DF600F_912560920000 -> ../../sdi
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-SHITACHI_DF600F_912560920000-part1 -> ../../sdi1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-SHITACHI_DF600F_912560920001 -> ../../sdj
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-SHITACHI_DF600F_912560920002 -> ../../sdk
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-SHITACHI_DF600F_912560920002-part1 -> ../../sdk1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-SHITACHI_DF600F_912560920003 -> ../../sdl
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-SHITACHI_DF600F_912560920003-part1 -> ../../sdl1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-SHITACHI_DF600F_912560920004 -> ../../sdm
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-SHITACHI_DF600F_912560920004-part1 -> ../../sdm1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-SHITACHI_DF600F_912560920008 -> ../../sdn
lrwxrwxrwx 1 root root 10 Dec 10 14:26 scsi-SHITACHI_DF600F_912560920008-part1 -> ../../sdn1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 scsi-SHITACHI_OPEN-V_50641017 -> ../../sdo
lrwxrwxrwx 1 root root  9 Dec 10 14:26 wwn-0x6003005711d8435029291de017892f49 -> ../../sda
lrwxrwxrwx 1 root root 10 Dec 10 14:26 wwn-0x6003005711d8435029291de017892f49-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Dec 10 14:26 wwn-0x6003005711d8435029291de017892f49-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Dec 10 14:26 wwn-0x6003005711d8435029291de017892f49-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Dec 10 14:26 wwn-0x60060e80105ec8c00570751c00000000 -> ../../sdi
lrwxrwxrwx 1 root root 10 Dec 10 14:26 wwn-0x60060e80105ec8c00570751c00000000-part1 -> ../../sdi1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 wwn-0x60060e80105ec8c00570751c00000001 -> ../../sdj
lrwxrwxrwx 1 root root  9 Dec 10 14:26 wwn-0x60060e80105ec8c00570751c00000002 -> ../../sdk
lrwxrwxrwx 1 root root 10 Dec 10 14:26 wwn-0x60060e80105ec8c00570751c00000002-part1 -> ../../sdk1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 wwn-0x60060e80105ec8c00570751c00000003 -> ../../sdl
lrwxrwxrwx 1 root root 10 Dec 10 14:26 wwn-0x60060e80105ec8c00570751c00000003-part1 -> ../../sdl1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 wwn-0x60060e80105ec8c00570751c00000004 -> ../../sdm
lrwxrwxrwx 1 root root 10 Dec 10 14:26 wwn-0x60060e80105ec8c00570751c00000004-part1 -> ../../sdm1
lrwxrwxrwx 1 root root  9 Dec 10 14:26 wwn-0x60060e80105ec8c00570751c00000008 -> ../../sdn
lrwxrwxrwx 1 root root 10 Dec 10 14:26 wwn-0x60060e80105ec8c00570751c00000008-part1 -> ../../sdn1
lrwxrwxrwx 1 root root 10 Dec 10 14:55 wwn-0x60060e8021a039005060a03900000000 -> ../../dm-2

lsblk --ascii
Code:
NAME                                 MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                    8:0    0 893.3G  0 disk
|-sda1                                 8:1    0  1007K  0 part
|-sda2                                 8:2    0   512M  0 part  /boot/efi
`-sda3                                 8:3    0 892.7G  0 part
  |-pve-swap                         253:0    0     8G  0 lvm   [SWAP]
  `-pve-root                         253:1    0 884.7G  0 lvm   /
sdb                                    8:16   0     2T  0 disk
`-sdb1                                 8:17   0     2T  0 part
sdc                                    8:32   0 630.6G  0 disk
sdd                                    8:48   0    10T  0 disk
`-sdd1                                 8:49   0    10T  0 part
sde                                    8:64   0    10T  0 disk
`-sde1                                 8:65   0    10T  0 part
sdf                                    8:80   0   3.7T  0 disk
`-sdf1                                 8:81   0   3.7T  0 part
sdg                                    8:96   0   5.6T  0 disk
`-sdg1                                 8:97   0   5.6T  0 part
sdh                                    8:112  0  41.3T  0 disk
`-mpath0_E590                        253:2    0  41.3T  0 mpath
  `-vg_E590_volume0-vm--100--disk--0 253:3    0   100G  0 lvm
sdi                                    8:128  0     2T  0 disk
`-sdi1                                 8:129  0     2T  0 part
sdj                                    8:144  0 630.6G  0 disk
sdk                                    8:160  0    10T  0 disk
`-sdk1                                 8:161  0    10T  0 part
sdl                                    8:176  0    10T  0 disk
`-sdl1                                 8:177  0    10T  0 part
sdm                                    8:192  0   3.7T  0 disk
`-sdm1                                 8:193  0   3.7T  0 part
sdn                                    8:208  0   5.6T  0 disk
`-sdn1                                 8:209  0   5.6T  0 part
sdo                                    8:224  0  41.3T  0 disk
`-mpath0_E590                        253:2    0  41.3T  0 mpath
  `-vg_E590_volume0-vm--100--disk--0 253:3    0   100G  0 lvm
 
Can you run lvchange -ay vg_E590_volume0/vm-100-disk-0 and provide the complete output?
 
Hmm, is empty output a good sign?

Code:
root@pve-01:~# lvchange -ay vg_E590_volume0/vm-100-disk-0
  WARNING: Not using device /dev/sdj for PV 4CcZCc-jBHY-Jh9U-pDQU-mSWB-uEKG-9CUVT5.
  WARNING: PV 4CcZCc-jBHY-Jh9U-pDQU-mSWB-uEKG-9CUVT5 prefers device /dev/sdc because device was seen first.
root@pve-01:~#

The warnings refer to LUNs from the old array, which is connected to the same FC switch. I don't use those resources in the new environment.
 
It seems you've extended the VG vg_E590_volume0 with /dev/sdc (and sdj). But those don't have multipath configured.
And this would explain why you see a bigger size in PVE than the LUN you've created on the storage side.
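A quick way to double-check which physical devices the VG and its LVs actually sit on; a sketch using the VG name from above:

Code:
# PVs that make up the VG
pvs -o pv_name,pv_size,vg_name | grep vg_E590_volume0
# underlying devices each LV is allocated on
lvs -o lv_name,vg_name,devices vg_E590_volume0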
 
I don't think so.
sdj (and sdc) are 630 GB in size, so if they were part of the VG, vg_E590_volume0 would be around 41.89 TB (41.26 TB + 630 GB).
I don't use sdj and sdc; they come from the old array.

The output from pvs and vgs shows a size of 41.26:
Code:
root@pve-01:~# pvs
  WARNING: Not using device /dev/sdj for PV 4CcZCc-jBHY-Jh9U-pDQU-mSWB-uEKG-9CUVT5.
  WARNING: PV 4CcZCc-jBHY-Jh9U-pDQU-mSWB-uEKG-9CUVT5 prefers device /dev/sdc because device was seen first.
  PV                      VG              Fmt  Attr PSize    PFree
  /dev/mapper/mpath0_E590 vg_E590_volume0 lvm2 a--   <41.26t   41.06t
  /dev/sda3               pve             lvm2 a--  <892.75g       0
  /dev/sdc                                lvm2 ---  <630.64g <630.64g
Code:
root@pve-01:~# vgs
  WARNING: Not using device /dev/sdj for PV 4CcZCc-jBHY-Jh9U-pDQU-mSWB-uEKG-9CUVT5.
  WARNING: PV 4CcZCc-jBHY-Jh9U-pDQU-mSWB-uEKG-9CUVT5 prefers device /dev/sdc because device was seen first.
  VG              #PV #LV #SN Attr   VSize    VFree
  pve               1   2   0 wz--n- <892.75g     0
  vg_E590_volume0   1   2   0 wz--n-  <41.26t 41.06t
root@pve-01:~#

but in the GUI I see 45.37:
[Screenshot: vgs_gui.JPG]
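The two numbers would line up if the GUI figure is in decimal terabytes while pvs/vgs report binary tebibytes; a quick check under that assumption:

Code:
# 41.26 TiB expressed in decimal TB (assumes the GUI uses decimal units)
echo "scale=2; 41.26 * 1024^4 / 1000^4" | bc -l
# 45.36  -> roughly the 45.37 shown in the GUI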

How did I create the LVM?
Code:
pvcreate /dev/mapper/mpath0_E590
then
Code:
vgcreate vg_E590_volume0 /dev/mapper/mpath0_E590
and at the end, in the GUI:
Datacenter -> Storage -> Add - LVM
[Screenshot: Przechwytywanie.JPG]
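The same storage entry can also be added from the CLI; a rough sketch (the storage ID and content types are examples):

Code:
# add the VG as shared LVM storage for the whole cluster ("e590-lvm" is an example ID)
pvesm add lvm e590-lvm --vgname vg_E590_volume0 --content images,rootdir --shared 1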

Maybe I did something wrong?
 
Yes, those are the steps. You could probably create it in the GUI directly.

For some reason /dev/sdc and /dev/sdj are used by LVM. You could start by adding a multipath for those 2 disks.
This could solve the issue.

What's the output of vgdisplay?
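One possible way to bring those two disks under multipath control as well; just a sketch, the device path is an example and the wwid would first need to be added to the blacklist_exceptions in /etc/multipath.conf like the existing one:

Code:
# look up the wwid of the old array's LUN (device path is an example)
/lib/udev/scsi_id -g -u -d /dev/sdc
# after adding that wwid (and optionally an alias) to /etc/multipath.conf:
multipath -a /dev/sdc            # record the wwid in /etc/multipath/wwids
systemctl restart multipathd     # or: multipathd reconfigure
multipath -ll                    # a new mpath device covering sdc/sdj should appear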
 
OK, before the change:

Code:
root@pve-01:~# vgdisplay
  WARNING: Not using device /dev/sdj for PV 4CcZCc-jBHY-Jh9U-pDQU-mSWB-uEKG-9CUVT5.
  WARNING: PV 4CcZCc-jBHY-Jh9U-pDQU-mSWB-uEKG-9CUVT5 prefers device /dev/sdc because device was seen first.
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <892.75 GiB
  PE Size               4.00 MiB
  Total PE              228543
  Alloc PE / Size       228543 / <892.75 GiB
  Free  PE / Size       0 / 0
  VG UUID               nc5oFE-a9He-7RK6-xiOc-4RpG-jQd6-LhgG9Z

  --- Volume group ---
  VG Name               vg_E590_volume0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <41.26 TiB
  PE Size               4.00 MiB
  Total PE              10816061
  Alloc PE / Size       51200 / 200.00 GiB
  Free  PE / Size       10764861 / 41.06 TiB
  VG UUID               6TBh4D-aTOS-hH4i-W7rx-Ofgn-GWdf-Jxfdr8

And after the change:
Code:
root@pve-01:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <892.75 GiB
  PE Size               4.00 MiB
  Total PE              228543
  Alloc PE / Size       228543 / <892.75 GiB
  Free  PE / Size       0 / 0
  VG UUID               nc5oFE-a9He-7RK6-xiOc-4RpG-jQd6-LhgG9Z

  --- Volume group ---
  VG Name               vg_E590_volume0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <41.26 TiB
  PE Size               4.00 MiB
  Total PE              10816061
  Alloc PE / Size       51200 / 200.00 GiB
  Free  PE / Size       10764861 / 41.06 TiB
  VG UUID               6TBh4D-aTOS-hH4i-W7rx-Ofgn-GWdf-Jxfdr8

multipath -ll
Code:
root@pve-01:~# multipath -ll
mpath0_E590 (360060e8021a039005060a03900000000) dm-2 HITACHI,OPEN-V
size=41T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 3:0:1:0  sdh 8:112 active ready running
  `- 12:0:1:0 sdo 8:224 active ready running
mpath1_630GB (360060e80105ec8c00570751c00000001) dm-5 HITACHI,DF600F
size=631G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 3:0:0:1  sdc 8:32  active ready running
  `- 12:0:0:1 sdj 8:144 active ready running
root@pve-01:~#

/etc/multipath.conf
Code:
root@pve-01:~# cat /etc/multipath.conf
blacklist {
        wwid .*
}

blacklist_exceptions {
        wwid "360060e8021a039005060a03900000000"
        wwid "360060e80105ec8c00570751c00000001"
}

multipaths {
        multipath {
                wwid "360060e8021a039005060a03900000000"
                alias mpath0_E590
        }
        multipath {
                wwid "360060e80105ec8c00570751c00000001"
                alias mpath1_630GB
        }
}

defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        uid_attribute           ID_SERIAL
        rr_min_io               100
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}
root@pve-01:~#

/etc/multipath/wwids
Code:
root@pve-01:~# cat  /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/360060e8021a039005060a03900000000/
/360060e80105ec8c00570751c00000001/
root@pve-01:~#
 
Now I shut down the test virtual machine, powered off the physical server (pve-01), powered it back on, and when I wanted to start the test virtual machine, I got an error:

Code:
  WARNING: Not using device /dev/mapper/mpath0_E590 for PV ybPmjc-trQ0-d28C-gpw6-hd3B-sRcJ-j09Tji.
  WARNING: PV ybPmjc-trQ0-d28C-gpw6-hd3B-sRcJ-j09Tji prefers device /dev/sdh because device is used by LV.
  WARNING: Not using device /dev/mapper/mpath0_E590 for PV ybPmjc-trQ0-d28C-gpw6-hd3B-sRcJ-j09Tji.
  WARNING: PV ybPmjc-trQ0-d28C-gpw6-hd3B-sRcJ-j09Tji prefers device /dev/sdh because device is used by LV.
TASK ERROR: can't activate LV '/dev/vg_E590_volume0/vm-100-disk-0':   Cannot activate LVs in VG vg_E590_volume0 while PVs appear on duplicate devices.
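A quick way to see which underlying device the LV actually got activated on is to walk the device tree upwards from the LV node; a sketch:

Code:
# list the parents of the LV's device-mapper node
lsblk --inverse --ascii /dev/vg_E590_volume0/vm-100-disk-0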
 
Please provide the journal since boot (journalctl -b > journal.txt). The command creates a file called `journal.txt`; please attach it here.
 
Code:
Dec 16 12:02:19 pve-01 lvm[1590]:   pvscan[1590] PV /dev/mapper/mpath0_E590 is duplicate for PVID ybPmjctrQ0d28Cgpw6hd3BsRcJj09Tji on 253:5 and 8:112.
Dec 16 12:02:19 pve-01 lvm[1590]:   pvscan[1590] PV /dev/mapper/mpath0_E590 failed to create online file.

Could you provide the output of lsblk --ascii -o NAME,MAJ:MIN,RM,SIZE,RO,MOUNTPOINT,UUID again?
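Those two journal lines show LVM finding the same PVID both on the multipath device (253:5) and on one of its raw paths (8:112, i.e. /dev/sdh). A commonly used mitigation is to restrict LVM to the multipath devices plus the local boot disk with a global_filter; a minimal sketch, the patterns are examples and have to match the local setup:

Code:
# /etc/lvm/lvm.conf (sketch: accept mpath devices and the boot disk's PV, reject the rest)
devices {
    global_filter = [ "a|/dev/mapper/mpath.*|", "a|/dev/sda3|", "r|.*|" ]
}

Afterwards the initramfs should be rebuilt (update-initramfs -u) so that early boot applies the same filter.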
 
Of course:

Code:
root@pve-01:/tmp# lsblk --ascii -o NAME,MAJ:MIN,RM,SIZE,RO,MOUNTPOINT,UUID
NAME                               MAJ:MIN RM   SIZE RO MOUNTPOINT UUID
sda                                  8:0    0 893.3G  0
|-sda1                               8:1    0  1007K  0
|-sda2                               8:2    0   512M  0 /boot/efi  C57D-F32C
`-sda3                               8:3    0 892.7G  0            X83ScY-Yh5Y-Apvy-YwGd-5u3l-Vd1E-nyPnZh
  |-pve-swap                       253:0    0     8G  0 [SWAP]     bab4ec5c-d6a6-4a8e-9d87-c456ec935715
  `-pve-root                       253:1    0 884.7G  0 /          71b70d34-a6be-44cd-8baf-4df7a120d8bd
sdb                                  8:16   0     2T  0
`-sdb1                               8:17   0     2T  0
sdc                                  8:32   0 630.6G  0            4CcZCc-jBHY-Jh9U-pDQU-mSWB-uEKG-9CUVT5
`-mpath1_630GB                     253:4    0 630.6G  0            4CcZCc-jBHY-Jh9U-pDQU-mSWB-uEKG-9CUVT5
sdd                                  8:48   0    10T  0
`-sdd1                               8:49   0    10T  0            53fc76b3-2f8ab436-d356-001999975b19
sde                                  8:64   0    10T  0
`-sde1                               8:65   0    10T  0            53fc934d-7bca9898-5305-f80f41f90aa8
sdf                                  8:80   0   3.7T  0
`-sdf1                               8:81   0   3.7T  0            53fc932f-c9615ae8-bcc0-f80f41f90aa8
sdg                                  8:96   0   5.6T  0
`-sdg1                               8:97   0   5.6T  0            57e29ecc-aa04973d-fd99-a0369f8e829a
sdh                                  8:112  0  41.3T  0            ybPmjc-trQ0-d28C-gpw6-hd3B-sRcJ-j09Tji
|-vg_E590_volume0-vm--100--disk--0 253:2    0   100G  0
|-vg_E590_volume0-vm--100--disk--1 253:3    0   100G  0
`-mpath0_E590                      253:5    0  41.3T  0            ybPmjc-trQ0-d28C-gpw6-hd3B-sRcJ-j09Tji
sdi                                  8:128  0     2T  0
`-sdi1                               8:129  0     2T  0
sdj                                  8:144  0 630.6G  0            4CcZCc-jBHY-Jh9U-pDQU-mSWB-uEKG-9CUVT5
`-mpath1_630GB                     253:4    0 630.6G  0            4CcZCc-jBHY-Jh9U-pDQU-mSWB-uEKG-9CUVT5
sdk                                  8:160  0    10T  0
`-sdk1                               8:161  0    10T  0            53fc76b3-2f8ab436-d356-001999975b19
sdl                                  8:176  0    10T  0
`-sdl1                               8:177  0    10T  0            53fc934d-7bca9898-5305-f80f41f90aa8
sdm                                  8:192  0   3.7T  0
`-sdm1                               8:193  0   3.7T  0            53fc932f-c9615ae8-bcc0-f80f41f90aa8
sdn                                  8:208  0   5.6T  0
`-sdn1                               8:209  0   5.6T  0            57e29ecc-aa04973d-fd99-a0369f8e829a
sdo                                  8:224  0  41.3T  0            ybPmjc-trQ0-d28C-gpw6-hd3B-sRcJ-j09Tji
`-mpath0_E590                      253:5    0  41.3T  0            ybPmjc-trQ0-d28C-gpw6-hd3B-sRcJ-j09Tji
root@pve-01:/tmp#
 
