Hi, I have a PVE cluster where all nodes are connected to several SANs using multipath.
However, on node5 one LVM PV complains that it is using one of the raw SCSI paths instead of the multipath device:
Code:
[node5@15:35 ~]# pvdisplay
WARNING: PV 0tO1MK-jzPE-WKj4-RBi6-Ev9t-4Yvf-jXgNGF on /dev/sde was already found on /dev/mapper/san3.
WARNING: PV 0tO1MK-jzPE-WKj4-RBi6-Ev9t-4Yvf-jXgNGF prefers device /dev/sde because device is used by LV.
  --- Physical volume ---
  PV Name               /dev/sde
  VG Name               vg03
  PV Size               1,02 TiB / not usable 3,62 MiB
  Allocatable           yes
  PE Size               4,00 MiB
  Total PE              267097
  Free PE               93273
  Allocated PE          173824
  PV UUID               0tO1MK-jzPE-WKj4-RBi6-Ev9t-4Yvf-jXgNGF
  --- Physical volume ---
  PV Name               /dev/mapper/san1
  VG Name               vg01
  PV Size               1,00 TiB / not usable 16,00 MiB
  Allocatable           yes
  PE Size               4,00 MiB
  Total PE              262140
  Free PE               97276
  Allocated PE          164864
  PV UUID               bvQWLj-PhGs-FcOz-h9hf-a2vZ-MxGr-2YVU6x
  --- Physical volume ---
  PV Name               /dev/mapper/san2
  VG Name               vg02
  PV Size               931,32 GiB / not usable 4,00 MiB
  Allocatable           yes
  PE Size               4,00 MiB
  Total PE              238417
  Free PE               238417
  Allocated PE          0
  PV UUID               d1j1ny-QhSL-IeYr-C1bf-Pv0f-oBFg-GZI82X
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               279,11 GiB / not usable 4,28 MiB
  Allocatable           yes
  PE Size               4,00 MiB
  Total PE              71452
  Free PE               4040
  Allocated PE          67412
  PV UUID               21mnuf-cCf2-XMHG-h5K9-Qep8-Ao4Y-Z6aSdP
[node5@10:38 ~]#
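To double-check the multipath side, I assume something like the following should show whether both paths behind san3 are healthy and which devices the vg03 volumes currently sit on:
Code:
# List the paths that belong to the san3 multipath map
multipath -ll san3

# Show the underlying device of every LV in vg03
lvs -o lv_name,vg_name,devices vg03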
All VMs have already been moved off node5. All other nodes have the same LUNs from the same SANs configured identically.
Is it safe to deactivate vg03, and how do I force LVM to use the multipath device /dev/mapper/san3 so that redundancy is restored?
Please advise.
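From what I have found so far, the usual fix seems to be a device filter in /etc/lvm/lvm.conf, so that LVM only scans the multipath maps plus the local boot disk and never the raw /dev/sdX paths. This is just a sketch of what I am considering (the regexes are my guess for our setup, and any existing global_filter entries would of course have to be merged in):
Code:
# /etc/lvm/lvm.conf -- sketch, not yet applied
devices {
    # Accept multipath devices and the local boot partition,
    # reject everything else so e.g. /dev/sde is never scanned directly.
    global_filter = [ "a|^/dev/mapper/|", "a|^/dev/sda3$|", "r|.*|" ]
}
Would that be the right approach here?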
Here are some more details about the connected LUNs on node5:
Code:
[node5@10:50 ~]# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                            8:0    0 279,4G  0 disk
├─sda1                         8:1    0     1M  0 part
├─sda2                         8:2    0   256M  0 part
└─sda3                         8:3    0 279,1G  0 part
  ├─pve-swap                 253:24   0    32G  0 lvm   [SWAP]
  ├─pve-root                 253:25   0    20G  0 lvm   /
  ├─pve-data_tmeta           253:28   0   108M  0 lvm
  │ └─pve-data               253:30   0 211,1G  0 lvm
  └─pve-data_tdata           253:29   0 211,1G  0 lvm
    └─pve-data               253:30   0 211,1G  0 lvm
sdb                            8:16   0     1T  0 disk
└─san3                       253:26   0     1T  0 mpath
sdc                            8:32   0   500G  0 disk
└─san4                       253:27   0   500G  0 mpath
sdd                            8:48   0   500G  0 disk
└─san4                       253:27   0   500G  0 mpath
sde                            8:64   0     1T  0 disk
├─vg03-vm--108--disk--2      253:0    0    20G  0 lvm
├─and many other virtual disk volumes
└─san3                       253:26   0     1T  0 mpath
sdj                            8:144  0 931,3G  0 disk
└─san2                       253:31   0 931,3G  0 mpath
sdk                            8:160  0 931,3G  0 disk
└─san2                       253:31   0 931,3G  0 mpath
sdl                            8:176  0     1T  0 disk
└─san1                       253:32   0     1T  0 mpath
  ├─vg01-vm--7357--disk--1   253:33   0     5G  0 lvm
  └─and many other virtual disk volumes
sdm                            8:192  0     1T  0 disk
└─san1                       253:32   0     1T  0 mpath
  ├─vg01-vm--7357--disk--1   253:33   0     5G  0 lvm
  └─and many other virtual disk volumes
sdn                            8:208  0     1T  0 disk
└─san1                       253:32   0     1T  0 mpath
  ├─vg01-vm--7357--disk--1   253:33   0     5G  0 lvm
  └─and many other virtual disk volumes
sdo                            8:224  0     1T  0 disk
└─san1                       253:32   0     1T  0 mpath
  ├─vg01-vm--7357--disk--1   253:33   0     5G  0 lvm
  └─and many other virtual disk volumes
sr0                           11:0    1  1024M  0 rom
[node5@10:50 ~]#
Note that on node5 the vg03 volumes hang directly off /dev/sde, while the san3 map has no LVs on top of it. For reference, here is how it looks on node8, where vg03 sits on the multipath device as intended:
Code:
[node8@12:07 ~]# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                            8:0    0 279,4G  0 disk
├─sda1                         8:1    0     1M  0 part
├─sda2                         8:2    0   256M  0 part
└─sda3                         8:3    0 279,1G  0 part
  ├─pve-swap                 253:0    0    40G  0 lvm   [SWAP]
  ├─pve-root                 253:1    0    20G  0 lvm   /
  ├─pve-data_tmeta           253:4    0     2G  0 lvm
  │ └─pve-data               253:6    0 199,1G  0 lvm
  └─pve-data_tdata           253:5    0 199,1G  0 lvm
    └─pve-data               253:6    0 199,1G  0 lvm
sdb                            8:16   0     1T  0 disk
└─san3                       253:2    0     1T  0 mpath
  ├─vg03-vm--108--disk--2    253:7    0    20G  0 lvm
  └─and many other virtual disk volumes
sdc                            8:32   0     1T  0 disk
└─san3                       253:2    0     1T  0 mpath
  ├─vg03-vm--108--disk--2    253:7    0    20G  0 lvm
  └─and many other virtual disk volumes
sdd                            8:48   0   500G  0 disk
└─san4                       253:3    0   500G  0 mpath
sde                            8:64   0   500G  0 disk
└─san4                       253:3    0   500G  0 mpath
sdf                            8:80   0     1T  0 disk
└─san3                       253:2    0     1T  0 mpath
  ├─vg03-vm--108--disk--2    253:7    0    20G  0 lvm
  └─and many other virtual disk volumes
sdg                            8:96   0     1T  0 disk
└─san3                       253:2    0     1T  0 mpath
  ├─vg03-vm--108--disk--2    253:7    0    20G  0 lvm
  └─and many other virtual disk volumes
sdh                            8:112  0   500G  0 disk
└─san4                       253:3    0   500G  0 mpath
sdi                            8:128  0   500G  0 disk
└─san4                       253:3    0   500G  0 mpath
sdj                            8:144  0 931,3G  0 disk
└─san2                       253:20   0 931,3G  0 mpath
sdk                            8:160  0 931,3G  0 disk
└─san2                       253:20   0 931,3G  0 mpath
sdl                            8:176  0     1T  0 disk
└─san1                       253:21   0     1T  0 mpath
  ├─vg01-vm--7357--disk--1   253:22   0     5G  0 lvm
  └─and many other virtual disk volumes
sdm                            8:192  0     1T  0 disk
└─san1                       253:21   0     1T  0 mpath
  ├─vg01-vm--7357--disk--1   253:22   0     5G  0 lvm
  └─and many other virtual disk volumes
sdn                            8:208  0     1T  0 disk
└─san1                       253:21   0     1T  0 mpath
  ├─vg01-vm--7357--disk--1   253:22   0     5G  0 lvm
  └─and many other virtual disk volumes
sdo                            8:224  0     1T  0 disk
└─san1                       253:21   0     1T  0 mpath
  ├─vg01-vm--7357--disk--1   253:22   0     5G  0 lvm
  └─and many other virtual disk volumes
sr0                           11:0    1  1024M  0 rom
[node8@12:07 ~]#
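My tentative plan for node5 would be roughly the following (assuming a corrected LVM filter like the sketch above is in place first); please correct me if any step is wrong or unsafe:
Code:
# 1. Deactivate vg03 -- should be safe since all VMs were moved off node5
vgchange -an vg03

# 2. Re-scan; with the filter active, /dev/sde is ignored and the PV
#    should be found via /dev/mapper/san3 only
pvscan --cache
pvs -o pv_name,vg_name

# 3. Re-activate the VG, now on top of the multipath device
vgchange -ay vg03

# 4. Rebuild the initramfs so the filter also applies during early boot
update-initramfs -u -k all
Does that look right? Thank you beforehand.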