[SOLVED] Why did Proxmox filter the LVM drives?

meowxiik

New Member
Dec 7, 2021
Hi,

For no discernible reason, my Proxmox host one day started refusing any LVM operation, failing with:

Code:
Volume group "pve" not found
  Cannot process volume group pve
TASK ERROR: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --config 'report/time_format="%s"' --options vg_name,lv_name,lv_size,lv_attr,pool_lv,data_percent,metadata_percent,snap_percent,uuid,tags,metadata_size,time pve' failed: exit code 5

This was despite lsblk very obviously seeing the volumes:

Code:
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 111.8G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part /boot/efi
└─sda3                         8:3    0 111.3G  0 part
  ├─pve-swap                 253:0    0     4G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0  27.8G  0 lvm  /
  ├─pve-data_tmeta           253:2    0     1G  0 lvm
  │ └─pve-data-tpool         253:4    0  63.7G  0 lvm
  │   ├─pve-data             253:5    0  63.7G  0 lvm
  │   ├─pve-vm--100--disk--0 253:6    0     8G  0 lvm
  │   ├─pve-vm--105--disk--0 253:7    0     8G  0 lvm
  │   ├─pve-vm--101--disk--0 253:9    0     8G  0 lvm
  │   ├─pve-vm--107--disk--0 253:10   0     8G  0 lvm
  │   ├─pve-vm--102--disk--0 253:11   0     8G  0 lvm
  │   ├─pve-vm--104--disk--0 253:12   0     8G  0 lvm
  │   ├─pve-vm--102--disk--1 253:13   0     8G  0 lvm
  │   ├─pve-vm--108--disk--0 253:14   0     8G  0 lvm
  │   ├─pve-vm--109--disk--0 253:15   0     8G  0 lvm
  │   └─pve-vm--110--disk--0 253:16   0    64G  0 lvm
  └─pve-data_tdata           253:3    0  63.7G  0 lvm
    └─pve-data-tpool         253:4    0  63.7G  0 lvm
      ├─pve-data             253:5    0  63.7G  0 lvm
      ├─pve-vm--100--disk--0 253:6    0     8G  0 lvm
      ├─pve-vm--105--disk--0 253:7    0     8G  0 lvm
      ├─pve-vm--101--disk--0 253:9    0     8G  0 lvm
      ├─pve-vm--107--disk--0 253:10   0     8G  0 lvm
      ├─pve-vm--102--disk--0 253:11   0     8G  0 lvm
      ├─pve-vm--104--disk--0 253:12   0     8G  0 lvm
      ├─pve-vm--102--disk--1 253:13   0     8G  0 lvm
      ├─pve-vm--108--disk--0 253:14   0     8G  0 lvm
      ├─pve-vm--109--disk--0 253:15   0     8G  0 lvm
      └─pve-vm--110--disk--0 253:16   0    64G  0 lvm
sdb                            8:16   0 931.5G  0 disk
└─sdb1                         8:17   0 931.5G  0 part
sdc                            8:32   0   1.4T  0 disk
└─sdc1                         8:33   0   1.4T  0 part

I tracked the issue down to the file /etc/lvm/lvm.conf, which had the following line:

Code:
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|", "r|/dev/sdc*|", "r|/dev/sdb*|"]

Now none of those patterns look like they should match /dev/sda, but when I removed the filter and replaced it with

Code:
global_filter = [ "a|/dev/*|" ]

it seems to have started working again. I can provide additional logs on request.
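
For anyone hitting something similar: after editing /etc/lvm/lvm.conf you can quickly sanity-check whether LVM sees the volume group again. These are just the standard LVM reporting commands, nothing Proxmox-specific:

Code:
# list the physical volumes that pass the filter
pvs
# the volume group should show up again
vgs pve
# ...along with its logical volumes
lvs pve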

My questions are:

- Is this a bug? (The original regex shouldn't reject /dev/sda)
- How can I check for myself that this is a bug? (Where is the bug tracker? I only found a mailing list.)
- Why do you reject these paths in the first place?
 
> Hi,
> For no discernible reason, my Proxmox host one day started refusing any LVM operation, failing with:

What Proxmox VE version do you use currently, and with what version was this set up?
> Now none of those patterns look like they should match /dev/sda, but when I removed the filter and replaced it with
>
> Code:
> global_filter = [ "a|/dev/*|" ]

That can cause you trouble if some VM uses LVM inside its block device too and scan_lvs is enabled.
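
With an accept-everything filter like that, the only thing keeping guest-internal PVs hidden from the host is scan_lvs. A minimal sketch of the relevant /etc/lvm/lvm.conf section, assuming a setup where you only need to exclude ZFS zvols (as on Proxmox VE 7.x):

Code:
devices {
    # don't scan LVs themselves for nested PV signatures,
    # so LVM set up inside guest disks isn't activated on the host
    scan_lvs = 0
    # still exclude ZFS zvols explicitly
    global_filter = [ "r|/dev/zd.*|" ]
}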

> …which had the following line:
>
> Code:
> global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|", "r|/dev/sdc*|", "r|/dev/sdb*|"]

Besides the sdc and sdb parts, those entries look OK; that's how pre-7.x Proxmox VE systems ensured that LVs from inside guests wouldn't be detected and activated on the host itself.

> - Is this a bug? (The original regex shouldn't reject /dev/sda)

Well, something like that was intended earlier, as mentioned above, but the sdb/sdc part looks off; we never write those out, from what I can tell (checking the LVM source we used pre-7.0).
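
It's worth spelling out why those manually added entries also reject /dev/sda: LVM filter patterns are regular expressions, not shell globs, and they match anywhere in the device path. So "sdb*" means "sd" followed by zero or more "b" characters, which matches /dev/sda too. You can convince yourself with grep's extended regexes as a stand-in for LVM's matching:

Code:
$ echo /dev/sda | grep -E '/dev/sdb*'
/dev/sda

Since the pattern matches, the "r|/dev/sdb*|" entry rejects /dev/sda as well.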

> - How can I check for myself that this is a bug? (Where is the bug tracker? I only found a mailing list.)

https://bugzilla.proxmox.com/
It's documented here: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#getting_help and also linked at the bottom of this forum.
> - Why do you reject these paths in the first place?

To avoid messing with the LVs of guests, a hypervisor like Proxmox VE has to take a bit of extra care.
But as hinted, starting with Proxmox VE 7.0 we can reduce the effort here, as upstream LVM finally decided that scanning LVs by default is not the best idea, so nowadays we only need to exclude ZFS volumes via global_filter=["r|/dev/zd.*|"].
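
If you want to see what your host actually applies, lvmconfig prints the effective settings; with --typeconfig full it also shows built-in defaults for values not set in lvm.conf:

Code:
lvmconfig --typeconfig full devices/global_filter
lvmconfig --typeconfig full devices/scan_lvs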
 
> What Proxmox VE version do you use currently, and with what version was this set up?

Currently my Proxmox is on "Virtual Environment 6.2-4". I don't remember the install version, but I suspect it was "5.4.34-1-pve" (from the bash MOTD).

I also faintly recall adding the /dev/sdb and /dev/sdc filters manually, and I can confirm that they did indeed filter out /dev/sda. I removed them and everything now works well with the original filters. Thanks!
 
