NVMe LVM-thin not loading after upgrade


Nov 21, 2020
Hey all,
So last night I made the mistake of updating both my PVE host OS and my motherboard firmware (which, it seems, also reset all the BIOS settings, for which I hadn't kept a backup), and now the NVMe drive (VMstorage) with the LVM-thin pool won't load.
The host was updated to Linux 6.2.16-12-pve, up from Linux 6.2.16-10-pve I think.
The motherboard only has AHCI mode AFAIK, so that shouldn't be the issue, since the disk already shows up in parts of the PVE host, and the other passed-through NVMe drive is readable from a live Linux environment.

All VMs and LXCs show the following:
2023-09-13T14:02:18.653544+03:00 pve pvestatd[1185]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
2023-09-13T14:02:18.653608+03:00 pve pvestatd[1185]: no such logical volume VMstorage/VMstorage

The disk shows up under:
Datacenter > Storage as an lvm-thin type
pve > Disks, with SMART status and reports

VMstorage (pve):
Enabled: Yes
Active: No

Both nvme0 and nvme1 show up in the BIOS and under pve > Disks.
nvme0 is the one in question; it appears as /dev/nvme0, nvme0n1, and nvme0n1p1.
nvme1 is passed through to an Unraid VM, which works fine since it doesn't use that storage and just boots from a USB stick.
root@pve:~# pvs -a
PV              VG   Fmt   Attr  PSize     PFree
/dev/nvme0n1p1             ---          0       0
/dev/nvme1n1p1             ---          0       0
/dev/sda2                  ---          0       0
/dev/sda3       pve  lvm2  a--   <464.76g  16.00g
root@pve:~# lvs -a
LV               VG   Attr        LSize     Pool  Origin  Data%  Meta%  Move  Log  Cpy%Sync  Convert
data             pve  twi-aotz--  <337.86g                0.00   0.50
[data_tdata]     pve  Twi-ao----  <337.86g
[data_tmeta]     pve  ewi-ao----  <3.45g
[lvol0_pmspare]  pve  ewi-------  <3.45g
root             pve  -wi-ao----  96.00g
swap             pve  -wi-ao----  8.00g

root@pve:~# vgs -a
VG   #PV  #LV  #SN  Attr    VSize     VFree
pve  1    3    0    wz--n-  <464.76g  16.00g
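Reading the pvs output above, I think the /dev/nvme0n1p1 line means pvs no longer sees an LVM PV label on the partition at all: the VG and Fmt columns are empty, so the attr dashes slide into the second field. (nvme1n1p1 and sda2 showing the same is expected, since one is passed through whole to the VM and the other is the EFI partition.) A quick sanity check against the pasted text itself, not the live system:

```shell
# pvs -a output from above, pasted in for illustration
pvs_out='/dev/nvme0n1p1 --- 0 0
/dev/nvme1n1p1 --- 0 0
/dev/sda2 --- 0 0
/dev/sda3 pve lvm2 a-- <464.76g 16.00g'

# When VG and Fmt are empty, the attr string "---" lands in field 2:
# those devices carry no recognizable PV label
echo "$pvs_out" | awk '$2 == "---" { print $1 ": no PV label" }'
```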

Here is my /etc/pve/storage.cfg:

dir: local
path /var/lib/vz
content backup,vztmpl,iso

lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir

lvmthin: VMstorage
thinpool VMstorage
vgname VMstorage
content rootdir,images
nodes pve

cifs: DATAsrv-homelab-proxmox
path /mnt/pve/DATAsrv-homelab-proxmox
share homelab
+other cifs mounts

root@pve:~# lsblk
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 1G 0 part /boot/efi
└─sda3 8:3 0 464.8G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 96G 0 lvm /
├─pve-data_tmeta 253:2 0 3.4G 0 lvm
│ └─pve-data-tpool 253:4 0 337.9G 0 lvm
│ └─pve-data 253:5 0 337.9G 1 lvm
└─pve-data_tdata 253:3 0 337.9G 0 lvm
└─pve-data-tpool 253:4 0 337.9G 0 lvm
└─pve-data 253:5 0 337.9G 1 lvm
nvme0n1 259:2 0 1.8T 0 disk
└─nvme0n1p1 259:3 0 100M 0 part
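What worries me in the lsblk output: nvme0n1 is 1.8T, but its only partition is 100M, which doesn't look like a disk that held the whole VMstorage thin pool. These are the read-only checks I'm planning to run next (just a sketch, the device name is from my setup):

```shell
# Read-only checks on the suspect disk; nothing here writes to it
check_disk() {
    dev="$1"

    # partition layout as the kernel currently sees it
    lsblk -o NAME,SIZE,TYPE "$dev" 2>/dev/null || echo "lsblk: $dev not present"

    # does any partition still carry an LVM2_member signature?
    blkid "${dev}"p* 2>/dev/null || echo "blkid: no signatures reported"

    # check LVM metadata headers on the first partition, if they survive
    pvck "${dev}p1" 2>/dev/null || echo "pvck: no usable LVM headers"
}

check_disk /dev/nvme0n1
```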

root@pve:~# pveversion -v
proxmox-ve: 8.0.2 (running kernel: 6.2.16-12-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
proxmox-kernel-6.2.16-12-pve: 6.2.16-12
proxmox-kernel-6.2: 6.2.16-12
proxmox-kernel-6.2.16-10-pve: 6.2.16-10
proxmox-kernel-6.2.16-8-pve: 6.2.16-8
proxmox-kernel-6.2.16-6-pve: 6.2.16-7
pve-kernel-6.2.16-5-pve: 6.2.16-6
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.5
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.8
libpve-guest-common-perl: 5.0.4
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.2-1
proxmox-backup-file-restore: 3.0.2-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.3
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.8-2
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-5
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.7
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1
root@pve:~# pvesm lvmthinscan VMstorage
Volume group "VMstorage" not found
Cannot process volume group VMstorage
command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --config 'report/time_format="%s"' --options vg_name,lv_name,lv_size,lv_attr,pool_lv,data_percent,metadata_percent,snap_percent,uuid,tags,metadata_size,time VMstorage' failed: exit code 5
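If the PV label really is gone but the data is intact, my understanding from the LVM docs is that the metadata archives under /etc/lvm can be used with pvcreate --restorefile and vgcfgrestore. Before touching anything I'd check whether any archives for VMstorage survived (a sketch; I have NOT run the restore commands themselves):

```shell
# Look for archived/backed-up VG metadata that survived the upgrade
ls /etc/lvm/archive/ /etc/lvm/backup/ 2>/dev/null || echo "no LVM metadata archives found"

# If an archive like /etc/lvm/archive/VMstorage_*.vg exists, it records the
# PV UUID needed for the (not yet run) recovery steps:
#   pvcreate --uuid <pv-uuid-from-archive> --restorefile <archive.vg> /dev/nvme0n1p1
#   vgcfgrestore VMstorage
#   vgchange -ay VMstorage
```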

Attached is a syslog captured after a reboot (the shutdown part runs until line 210).

Hopefully I won't need to reinstall the host and wipe that disk. :/



  • syslog.txt (192.1 KB)

