Volume group not found and cannot create it, but VMs working

rentel

New Member
Nov 13, 2021
This morning, around 6:00 a.m., an error occurred on our storage named mpath0 that we have not yet been able to diagnose. When I try to move a VM's disk image to another mpath storage or power on a machine, Proxmox throws the following error:

Bash:
Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
  Volume group "iscsi-mp-cluster" not found
TASK ERROR: can't activate LV '/dev/iscsi-mp-cluster/vm-134-disk-1':   Cannot process volume group iscsi-mp-cluster
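
In case it is relevant, this is roughly the rescan I was thinking of running next to get LVM to pick the group up again. I have not executed it yet, and I am only assuming the metadata on the disks is still intact; the VG name is simply the one from the error above.

Bash:
# Refresh LVM's view of the block devices (assuming the PV metadata is still readable)
pvscan --cache
vgscan
# Try to reactivate the volume group Proxmox says it cannot find
vgchange -ay iscsi-mp-cluster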

But when I try to create it, it says it already exists:

Bash:
root@px1:/dev/mapper# vgcreate iscsi-mp-cluster /dev/sdc /dev/sdf
  /dev/iscsi-mp-cluster: already exists in filesystem
  Run `vgcreate --help' for more information.
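
My assumption is that this message only refers to the leftover /dev/iscsi-mp-cluster directory with the old LV symlinks, not to a live volume group. This is how I planned to confirm that, plus a check that no lvm.conf filter is hiding the multipath device (both checks are just my guesses):

Bash:
# Check whether /dev/iscsi-mp-cluster is just stale device nodes left from before the failure
ls -l /dev/iscsi-mp-cluster
# Check whether an lvm.conf filter could be excluding the multipath device from the scan
grep -E '^\s*(global_)?filter' /etc/lvm/lvm.conf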

So I checked the storage status:

Bash:
root@px1:~# pvesm status
  Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
storage 'nas4-weekly' is not online
Name                        Type     Status           Total            Used       Available        %
local                        dir     active        14126104         8090916         5297908   57.28%
local-lvm                lvmthin     active        29515776        12978086        16537689   43.97%
mpath0                       lvm   inactive               0               0               0    0.00%
mpath1                       lvm     active     11706589184      2945449984      8761139200   25.16%
nas1-radius2                 nfs     active      5812636032       526407168      5236228864    9.06%

The storage is inactive, which makes sense, since its volume group no longer appears on the system:

Code:
root@px1:~# pvs
  PV                 VG     Fmt  Attr PSize  PFree
  /dev/mapper/mpath1 mpath1 lvm2 a--  10.90t <8.16t
  /dev/sda3          pve    lvm2 a--  55.64g <6.81g
  /dev/sdd           nas4n  lvm2 a--   5.37t <3.36t
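
For completeness, I also want to double-check how the mpath0 storage is defined on the Proxmox side. If I understand correctly, the definition lives in /etc/pve/storage.cfg, and I would expect its vgname to be iscsi-mp-cluster, matching the activation error above.

Bash:
# Show the storage definition for mpath0 (the vgname line is what I want to verify)
grep -A4 'mpath0' /etc/pve/storage.cfg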

When I try to recreate the group that has disappeared, the following error is returned:

Code:
root@px1:~# vgcreate mpath0 /dev/mapper/mpath0
  Can't open /dev/mapper/mpath0 exclusively.  Mounted filesystem?
  Can't open /dev/mapper/mpath0 exclusively.  Mounted filesystem?
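
My guess is that the device cannot be opened exclusively because the old LV mappings are still stacked on top of it in device-mapper (the running VMs would still be using them). This is how I intended to verify that before touching anything:

Bash:
# Show the device-mapper stack; the LVs of the missing VG should still sit on top of mpath0
dmsetup ls --tree
# Show the open count and state of the multipath device itself
dmsetup info /dev/mapper/mpath0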

Surprisingly, the machines that were already running at the time of the error continue to work normally, although disk migration fails and no machine on mpath0 can be powered on. Multipath seems to be working correctly:

Code:
root@px1:~# multipath -ll
Jun 09 12:09:28 | vpd pg80 overflow, 251/65 bytes required
mpath1 (3600140588d1e29cd31ebd4c46db396d5) dm-7 SYNOLOGY,iSCSI Storage
size=11T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 6:0:0:0  sdb 8:16 active ready running
  `- 9:0:0:0  sde 8:64 active ready running
mpath0 (36001405aff94691d54d1d446edb8f7dd) dm-48 SYNOLOGY,iSCSI Storage
size=11T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 10:0:0:0 sdf 8:80 active ready running
  `- 7:0:0:0  sdc 8:32 active ready running
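
To rule out a damaged PV label on the multipath device, I was also going to run a read-only check on it. I have not dared to yet; as far as I know, pvck only reads the headers and does not modify anything.

Bash:
# Read-only check of the LVM label/metadata on the multipath device
pvck /dev/mapper/mpath0
# List every block device LVM can see, including ones without a recognised PV label
pvs -a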

mpath0 is present in /dev/mapper, but none of the VM disk images it should contain are visible there:
Code:
root@px1:/dev/mapper# ls -lh mpath0
lrwxrwxrwx 1 root root 8 Jun  9 10:44 mpath0 -> ../dm-48

Using lsblk -f -m I can still see the LVs contained in mpath0:

Code:
sdc                                      mpath_member                                                                        10.9T             brw-rw----
└─mpath0                                                                                                                     10.9T             brw-rw----
  ├─iscsi--mp--cluster-vm--150--disk--0  ext4               7b098ce6-ba0f-4507-8071-961c7945de2f                               16G             brw-rw----
  ├─iscsi--mp--cluster-vm--903--disk--1                                                                                       332G             brw-rw----
  ├─iscsi--mp--cluster-vm--108--disk--1                                                                                        10G             brw-rw----
  ├─iscsi--mp--cluster-vm--112--disk--1                                                                                         5G             brw-rw----
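
Since the LV mappings are clearly still present, my hope is that the VG metadata can simply be restored from the automatic backups LVM keeps. This is only the check I have lined up; I have not restored anything yet, and the paths are the Debian defaults:

Bash:
# See whether LVM kept a metadata backup/archive for the missing VG
ls -l /etc/lvm/backup/iscsi-mp-cluster /etc/lvm/archive/
# List the restorable metadata versions for the VG
vgcfgrestore --list iscsi-mp-cluster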

I am currently running TestDisk to detect possible problems with the partition table, but I am quite lost.
 