local-LVM not available after Kernel update on PVE 7

fiona

Proxmox Staff Member
Hi,
Hopefully there's no real downside to this.
The downside is that the check at start-up is not as thorough. Quoting from man thin_check:
Code:
--skip-mappings
              Skip checking of the block mappings which make up the bulk of the metadata.
So you might want to do the full check manually from time to time. Unfortunately, that can't be done while the pool is active IIRC.
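For reference, a manual full check could look roughly like this (just a sketch, assuming the pool can be taken offline and that your lvm2 version supports activating the _tmeta sub-LV on its own; the exact device path may differ):
Code:
# all thin LVs (VM/CT disks) in the pool must be inactive for this to work
lvchange -an pve/data
# activate only the metadata sub-LV (read-only component activation)
lvchange -ay pve/data_tmeta
# run the full metadata check, i.e. without --skip-mappings
thin_check /dev/mapper/pve-data_tmeta
# deactivate the metadata sub-LV again and bring the pool back up
lvchange -an pve/data_tmeta
lvchange -ay pve/data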
 

colemorgan

New Member
None of these solutions have worked for me. I just have a console full of:

pvestatd[1869]: activating LV 'pve/data' failed: Use --select vg_uuid=<uuid> in place of the VG name.
 

fiona

Proxmox Staff Member
None of these solutions have worked for me. I just have a console full of:

pvestatd[1869]: activating LV 'pve/data' failed: Use --select vg_uuid=<uuid> in place of the VG name.
That's most likely a different issue. What's the full output when you manually run lvchange -ay pve/data? What's the output of lvs?
 

colemorgan

New Member
Code:
# lvchange -ay pve/data
  WARNING: VG name pve is used by VGs JCjMnn-c2uP-tB6N-7tOq-4Nab-KNVz-fm8Ftm and Jo703t-z11X-sUen-P3r0-5VvR-D4sW-LXPRHK.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  Multiple VGs found with the same name: skipping pve
  Use --select vg_uuid=<uuid> in place of the VG name.


Code:
# lvs
  WARNING: VG name pve is used by VGs JCjMnn-c2uP-tB6N-7tOq-4Nab-KNVz-fm8Ftm and Jo703t-z11X-sUen-P3r0-5VvR-D4sW-LXPRHK.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- <338.36g             33.34  2.06                           
  data          pve twi---tz-- <794.79g                                                   
  root          pve -wi-ao----   96.00g                                                   
  root          pve -wi-------   96.00g                                                   
  swap          pve -wi-ao----    8.00g                                                   
  swap          pve -wi-------    8.00g                                                   
  vm-100-disk-0 pve Vwi-a-tz--   32.00g data        32.71                                 
  vm-101-disk-0 pve Vwi-a-tz--   16.00g data        53.78                                 
  vm-102-disk-0 pve Vwi-a-tz--   16.00g data        54.20                                 
  vm-103-disk-0 pve Vwi-a-tz--   32.00g data        46.58                                 
  vm-104-disk-0 pve Vwi-a-tz--   18.00g data        36.77                                 
  vm-105-disk-0 pve Vwi-a-tz--   16.00g data        92.23                                 
  vm-106-disk-0 pve Vwi-a-tz--   32.00g data        26.55                                 
  vm-107-disk-0 pve Vwi-a-tz--   32.00g data        80.63                                 
  vm-108-disk-0 pve Vwi-a-tz--    4.00m data        0.00                                   
  vm-108-disk-1 pve Vwi-a-tz--   32.00g data        45.24
 

colemorgan

New Member
Update: I removed an NVMe drive that I had installed recently and the system functions normally now. The system did boot ~2-3 times with this drive installed before it got into this bad state.

I would love to know how I can install an NVMe drive without it breaking the whole system. Is there something I can do to stop the PCIe identifiers from shifting? I assume it has something to do with that.
 

fiona

Proxmox Staff Member
Update: I removed an NVMe drive that I had installed recently and the system functions normally now. The system did boot ~2-3 times with this drive installed before it got into this bad state.

I would love to know how I can install an NVMe drive without it breaking the whole system. Is there something I can do to stop the PCIe identifiers from shifting? I assume it has something to do with that.
It sounds like there was another PVE installation on that drive (hence the duplicate VG names). Likely the easiest way is to copy any data you still need off the drive and then wipe it clean.
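A rough sketch of how that cleanup could look (the UUID and device name below are placeholders, so double-check with pvs/lsblk which VG actually lives on the NVMe drive before removing anything):
Code:
# see which physical device each of the two "pve" VGs lives on
pvs -o pv_name,vg_name,vg_uuid
# after copying off anything you still need, remove the VG on the NVMe drive,
# selected by UUID since the name "pve" is ambiguous
vgremove --select vg_uuid=<uuid-of-the-VG-on-the-NVMe-drive>
# optionally clear any leftover signatures from the drive itself
wipefs -a /dev/nvme0n1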
 

Maiko

New Member
Hi there,

Got the same issue last night after a reboot.
Is this not yet resolved?

I had to modify my lvm.conf to add --skip-mappings in order to fix the issue.
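For reference, the change in /etc/lvm/lvm.conf looks roughly like this (the default options may differ between versions, so adjust the existing line rather than copying it blindly):
Code:
# /etc/lvm/lvm.conf, "global" section
global {
    # default is typically [ "-q", "--clear-needs-check-flag" ];
    # --skip-mappings makes the boot-time check faster but less thorough
    thin_check_options = [ "-q", "--clear-needs-check-flag", "--skip-mappings" ]
}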

Code:
# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.53-1-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-kernel-helper: 7.2-12
pve-kernel-5.15: 7.2-10
pve-kernel-5.4: 6.4-17
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.4.189-1-pve: 5.4.189-1
pve-kernel-5.4.162-1-pve: 5.4.162-2
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksmtuned: 4.20150326
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-8
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.6-1
proxmox-backup-file-restore: 2.2.6-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-6
pve-firmware: 3.5-2
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-3
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1
 

fiona

Proxmox Staff Member
Hi,
Hi there,

Got the same issue last night after a reboot.
Is this not yet resolved?

I had to modify my lvm.conf to add --skip-mappings in order to fix the issue.
No, it's not yet resolved. There was no reaction from the Debian developers to the bug report, and if thin_check takes too long on a system, it will of course run into the timeout.

As suggested here, two ways to fix it would be:
  1. Add --skip-mappings in the udev rules for LVM. But since it works for most people without that, I'd argue the current "add it if you're affected" approach is better.
  2. Switch to using systemd for auto-activation during early boot. But Debian doesn't currently do this, and it would of course be a non-trivial change.
 
