No LVM-thin pool after update (8.0 => 8.2).

Anandir

Good evening,
today I decided, after one year, to update/upgrade my Proxmox installation.
I was running 8.0.x (8.0.3 or thereabouts), and since this is the only time of year I can touch the server, I decided to perform the update.
But after the upgrade, the LVM-thin pool holding the VM images is gone.
I've tried everything I've seen on the forum to fix it, but nothing worked.
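
For anyone hitting the same symptom, a minimal first-pass check (a sketch, assuming the pool's volume group is named vmstorage, as in the output below):

Code:
# Does LVM see the physical volume and volume group at all?
pvs
vgs
# Rescan and try to activate the pool's volume group
vgscan
vgchange -ay vmstorage
# If the backing disk (sdb here) is missing from lsblk entirely,
# the problem is below LVM, at the controller/driver level
lsblk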

Code:
Linux proxima 6.8.12-1-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-1 (2024-08-05T16:17Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sat Aug 10 16:40:47 2024
root@proxima:~# lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda            8:0    0 223.6G  0 disk
|-sda1         8:1    0  1007K  0 part
|-sda2         8:2    0     1G  0 part /boot/efi
`-sda3         8:3    0 222.6G  0 part
  |-pve-swap 252:0    0     8G  0 lvm  [SWAP]
  `-pve-root 252:1    0 148.9G  0 lvm  /
sdb            8:16   0   3.6T  0 disk
`-sdb1         8:17   0   3.6T  0 part /mnt/backup_disk
root@proxima:~# lvscan
  ACTIVE            '/dev/pve/swap' [8.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [<148.93 GiB] inherit
root@proxima:~# ls /dev/
autofs           cuse      full       i2c-0    log           loop7                    null   pve     sdb       stdout  tty13  tty21  tty3   tty38  tty46  tty54  tty62   ttyS12  ttyS20  ttyS29  ttyS9        vcs1   vcsa3  vcsu5        zero
block            disk      fuse       i2c-1    loop-control  mapper                   nvram  random  sdb1      tpm0    tty14  tty22  tty30  tty39  tty47  tty55  tty63   ttyS13  ttyS21  ttyS3   ttyprintk    vcs2   vcsa4  vcsu6        zfs
bsg              dm-0      gpiochip0  i2c-2    loop0         mcelog                   port   rfkill  sg0       tpmrm0  tty15  tty23  tty31  tty4   tty48  tty56  tty7    ttyS14  ttyS22  ttyS30  udmabuf      vcs3   vcsa5  vfio
btrfs-control    dm-1      hidraw0    i2c-3    loop1         megaraid_sas_ioctl_node  ppp    rtc     sg1       tty     tty16  tty24  tty32  tty40  tty49  tty57  tty8    ttyS15  ttyS23  ttyS31  uhid         vcs4   vcsa6  vga_arbiter
bus              dma_heap  hidraw1    initctl  loop2         mem                      psaux  rtc0    shm       tty0    tty17  tty25  tty33  tty41  tty5   tty58  tty9    ttyS16  ttyS24  ttyS4   uinput       vcs5   vcsu   vhci
char             dri       hidraw2    input    loop3         mqueue                   ptmx   sda     snapshot  tty1    tty18  tty26  tty34  tty42  tty50  tty59  ttyS0   ttyS17  ttyS25  ttyS5   urandom      vcs6   vcsu1  vhost-net
console          ecryptfs  hpet       ipmi0    loop4         mtd0                     ptp0   sda1    snd       tty10   tty19  tty27  tty35  tty43  tty51  tty6   ttyS1   ttyS18  ttyS26  ttyS6   userfaultfd  vcsa   vcsu2  vhost-vsock
core             fb0       hugepages  kmsg     loop5         mtd0ro                   ptp1   sda2    stderr    tty11   tty2   tty28  tty36  tty44  tty52  tty60  ttyS10  ttyS19  ttyS27  ttyS7   userio       vcsa1  vcsu3  watchdog
cpu_dma_latency  fd        hwrng      kvm      loop6         net                      pts    sda3    stdin     tty12   tty20  tty29  tty37  tty45  tty53  tty61  ttyS11  ttyS2   ttyS28  ttyS8   vcs          vcsa2  vcsu4  watchdog0
root@proxima:~#

But, luckily, if I boot using the old kernel, everything works just fine:

Code:
Linux proxima 6.2.16-20-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-20 (2023-12-01T13:17Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sat Aug 10 16:26:00 2024 from 192.168.1.23
root@proxima:~# lsblk
NAME                             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                                8:0    0 223.6G  0 disk
|-sda1                             8:1    0  1007K  0 part
|-sda2                             8:2    0     1G  0 part /boot/efi
`-sda3                             8:3    0 222.6G  0 part
  |-pve-swap                     253:0    0     8G  0 lvm  [SWAP]
  `-pve-root                     253:1    0 148.9G  0 lvm  /
sdb                                8:16   0 837.3G  0 disk
|-vmstorage-vmstorage_tmeta      253:2    0   8.4G  0 lvm 
| `-vmstorage-vmstorage-tpool    253:4    0 820.4G  0 lvm 
|   |-vmstorage-vmstorage        253:5    0 820.4G  1 lvm 
|   |-vmstorage-vm--100--disk--0 253:6    0   300G  0 lvm 
|   `-vmstorage-vm--101--disk--0 253:7    0    20G  0 lvm 
`-vmstorage-vmstorage_tdata      253:3    0 820.4G  0 lvm 
  `-vmstorage-vmstorage-tpool    253:4    0 820.4G  0 lvm 
    |-vmstorage-vmstorage        253:5    0 820.4G  1 lvm 
    |-vmstorage-vm--100--disk--0 253:6    0   300G  0 lvm 
    `-vmstorage-vm--101--disk--0 253:7    0    20G  0 lvm 
sdc                                8:32   0   3.6T  0 disk
`-sdc1                             8:33   0   3.6T  0 part /mnt/backup_disk
root@proxima:~# lvscan
  ACTIVE            '/dev/vmstorage/vmstorage' [820.39 GiB] inherit
  ACTIVE            '/dev/vmstorage/vm-100-disk-0' [300.00 GiB] inherit
  ACTIVE            '/dev/vmstorage/vm-101-disk-0' [20.00 GiB] inherit
  ACTIVE            '/dev/pve/swap' [8.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [<148.93 GiB] inherit
root@proxima:~# ls /dev/
autofs           cuse  dm-7       hidraw0    i2c-3         loop1   megaraid_sas_ioctl_node  ppp     rtc   sg0       stdout  tty13  tty21  tty3   tty38  tty46  tty54  tty62   ttyS12  ttyS20  ttyS29  ttyS9        vcs1   vcsa3  vcsu5        watchdog0
block            disk  dma_heap   hidraw1    initctl       loop2   mem                      psaux   rtc0  sg1       tpm0    tty14  tty22  tty30  tty39  tty47  tty55  tty63   ttyS13  ttyS21  ttyS3   ttyprintk    vcs2   vcsa4  vcsu6        zero
bsg              dm-0  dri        hidraw2    input         loop3   mqueue                   ptmx    sda   sg2       tpmrm0  tty15  tty23  tty31  tty4   tty48  tty56  tty7    ttyS14  ttyS22  ttyS30  udmabuf      vcs3   vcsa5  vfio         zfs
btrfs-control    dm-1  ecryptfs   hpet       ipmi0         loop4   mtd0                     ptp0    sda1  sg3       tty     tty16  tty24  tty32  tty40  tty49  tty57  tty8    ttyS15  ttyS23  ttyS31  uhid         vcs4   vcsa6  vga_arbiter
bus              dm-2  fb0        hugepages  kmsg          loop5   mtd0ro                   ptp1    sda2  shm       tty0    tty17  tty25  tty33  tty41  tty5   tty58  tty9    ttyS16  ttyS24  ttyS4   uinput       vcs5   vcsu   vhci
char             dm-3  fd         hwrng      kvm           loop6   net                      pts     sda3  snapshot  tty1    tty18  tty26  tty34  tty42  tty50  tty59  ttyS0   ttyS17  ttyS25  ttyS5   urandom      vcs6   vcsu1  vhost-net
console          dm-4  full       i2c-0      log           loop7   null                     pve     sdb   snd       tty10   tty19  tty27  tty35  tty43  tty51  tty6   ttyS1   ttyS18  ttyS26  ttyS6   userfaultfd  vcsa   vcsu2  vhost-vsock
core             dm-5  fuse       i2c-1      loop-control  mapper  nvram                    random  sdc   stderr    tty11   tty2   tty28  tty36  tty44  tty52  tty60  ttyS10  ttyS19  ttyS27  ttyS7   userio       vcsa1  vcsu3  vmstorage
cpu_dma_latency  dm-6  gpiochip0  i2c-2      loop0         mcelog  port                     rfkill  sdc1  stdin     tty12   tty20  tty29  tty37  tty45  tty53  tty61  ttyS11  ttyS2   ttyS28  ttyS8   vcs          vcsa2  vcsu4  watchdog
root@proxima:~#

For now, I've pinned the old kernel, so I can restore the VMs and everything is "good". But, honestly, I would like to know "why" this is happening...
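
For anyone needing the same workaround: pinning is done with proxmox-boot-tool (a sketch, using the 6.2 kernel version from the banner above):

Code:
# Pin the known-good kernel so it is selected on every boot
proxmox-boot-tool kernel pin 6.2.16-20-pve
# Confirm the pin
proxmox-boot-tool kernel list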

The machine is a Lenovo ThinkSystem ST250 (7Y45) and the vmstorage pool is mapped on a RAID 1 volume.
The RAID controller is a ThinkSystem RAID 530-8i PCIe 12Gb Adapter, which is recognized as:
Code:
04:00.0 RAID bus controller: Broadcom / LSI MegaRAID Tri-Mode SAS3408 (rev 01)
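
(That identification comes from lspci; something like the following reproduces it:)

Code:
lspci -nn | grep -i raid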

Thanks a lot in advance.

Best regards,
Giacomo
 
Hi, it appears we are also having the same problem: we updated from v8.2.2 to v8.2.4.
The host booted fine and all VMs started; then, when the daily scheduled backup ran at around 8-9pm, the local-lvm storage stopped functioning and we can no longer activate it.

Are there any workarounds? Is this a known bug with a patch/fix incoming? We are now afraid to shut down the host: the VMs seem to still be running fine, but we can't access the disk images, so we can't move them, back them up, or anything else. We're not even sure whether changes are being written to disk at the moment; they might revert to the state from 7 days ago, when local-lvm became unreachable.

Some commands we tried and their outputs, as well as version info, are below. A full debug trace of pvscan -vvv is attached as output.txt.
Here's more info for the above error we faced:
LVM-thin is no longer accessible: it is not active and won't reactivate after the upgrade from v8.2.2 to v8.2.4.

We upgraded to v8.2.4 on 15th August 2024 at 1pm; during the scheduled backup at 8pm the same evening, the LVM-thin pool became unusable.

Does anyone have any info that might help us? We have also opened a bug on the Proxmox Bugzilla.

Hardware:
Lenovo SR630 V2 with a 9350-8i RAID adapter

Current Proxmox Version Running:
root@pve01:/dev/pve# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.12-1-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-1
proxmox-kernel-6.8.12-1-pve-signed: 6.8.12-1
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2



Backup Error:

Code:
INFO: starting new backup job: vzdump 106 --mode snapshot --node pve01 --notes-template '{{guestname}}' --notification-mode auto --remove 0 --compress zstd --storage AUNFS1
INFO: Starting Backup of VM 106 (qemu)
INFO: Backup started at 2024-08-22 11:14:49
INFO: status = running
INFO: VM Name: canteen
INFO: include disk 'scsi0' 'local-lvm:vm-106-disk-1' 40G
INFO: include disk 'efidisk0' 'local-lvm:vm-106-disk-0' 4M
ERROR: Backup of VM 106 failed - storage 'local-lvm' does not exist
INFO: Failed at 2024-08-22 11:14:49
INFO: Backup job finished with errors
INFO: notified via target `mail-to-root`
TASK ERROR: job errors


Code:
TASK ERROR: no such logical volume pve/data

pvesm status
pvesm lvmthinscan pve

No matching physical volumes found
no such logical volume pve/data
Volume group "pve" not found
Cannot process volume group pve

An excerpt of the pvscan -vvv debug trace:


Code:
13:39:35.066172 pvscan[2286637] activate/dev_manager.c:636 /dev/mapper/pve-data_tmeta: Reserved uuid LVM-YtP2y9YVy5j00CJjzTFimjg3z0ys11JVMbx
  meta on internal LV device pve-data_tmeta not usable.
13:39:35.066187 pvscan[2286637] filters/filter-usable.c:95 /dev/mapper/pve-data_tmeta: Skipping unusable device.
13:39:35.066216 pvscan[2286637] device/dev-io.c:120 /dev/loop3: size is 0 sectors
13:39:35.066282 pvscan[2286637] device/dev-io.c:120 /dev/sda3: size is 1873220239 sectors
13:39:35.066425 pvscan[2286637] activate/dev_manager.c:636 /dev/mapper/pve-data_tdata: Reserved uuid LVM-YtP2y9YVy5j00CJjzTFimjg3z0ys11JVaLh
  data on internal LV device pve-data_tdata not usable.
13:39:35.066436 pvscan[2286637] filters/filter-usable.c:95 /dev/mapper/pve-data_tdata: Skipping unusable device.
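
For completeness, the usual manual reactivation attempt (a sketch; volume group and pool names taken from the errors above) would be:

Code:
# Rescan block devices for LVM physical volumes
pvscan
# Try to activate the volume group and the thin pool explicitly
vgchange -ay pve
lvchange -ay pve/data
# If the VG itself is not found, check whether the backing disk shows up at all
lsblk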
 

This seems to be the same issue that I and others had with kernel 6.8 and the megaraid driver.
Try using a previous kernel; some more details here: https://forum.proxmox.com/threads/pve-8-2-on-dell-r420.153223/post-696021
Thanks, I tried this, but sadly no luck: even with the old kernel running, the LVM-thin pool is not accessible.

root@pve01:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.4-2-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)


Using the old kernel, but LVM-thin is still not reachable.
 
Your
Code:
pveversion -v
output shows you are still running 6.8; you need to use 6.5.
I see, got you. Will 6.2 work? Those seem to be the only older ones showing up in the apt repository.

I had pinned an older installed kernel, as these two were the selectable options:
Automatically selected kernels:
6.8.12-1-pve

Pinned kernel:
6.8.4-2-pve
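
For reference, that listing is presumably the output of:

Code:
proxmox-boot-tool kernel list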



Code:
root@pve01:~# apt search pve-kernel
Sorting... Done
Full Text Search... Done
pve-firmware/stable,now 3.13-1 all [installed]
  Binary firmware code for the pve-kernel

pve-kernel-6.1/stable 7.3-4 all
  Latest Proxmox VE Kernel Image

pve-kernel-6.1.10-1-pve/stable 6.1.10-1 amd64
  Proxmox Kernel Image

pve-kernel-6.2/stable 8.0.5 all
  Proxmox Kernel Image for 6.2 series (transitional package)

pve-kernel-6.2.16-1-pve/stable 6.2.16-1 amd64
  Proxmox Kernel Image

pve-kernel-6.2.16-2-pve/stable 6.2.16-2 amd64
  Proxmox Kernel Image

pve-kernel-6.2.16-3-pve/stable 6.2.16-3 amd64
  Proxmox Kernel Image

pve-kernel-6.2.16-4-pve/stable 6.2.16-5 amd64
  Proxmox Kernel Image

pve-kernel-6.2.16-5-pve/stable 6.2.16-6 amd64
  Proxmox Kernel Image

pve-kernel-helper/stable 7.3-4 all
  Function for various kernel maintenance tasks.

pve-kernel-libc-dev/stable 6.2.16-3 amd64
  Linux support headers for userspace development
 
Installed this version, but no luck: the same error for the LVM-thin storage volume group.

Code:
apt install proxmox-kernel-6.5.13-5-pve-signed
apt install proxmox-headers-6.5.13-5-pve
proxmox-boot-tool kernel pin 6.5.13-5-pve
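
After pinning, a reboot is needed before the pinned kernel is actually running; verifying afterwards looks something like:

Code:
# Reboot into the pinned kernel, then confirm the running version
reboot
# ...after the reboot:
uname -r   # expect: 6.5.13-5-pve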


Code:
pveversion -v

root@pve01:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.5.13-5-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-1
proxmox-kernel-6.8.12-1-pve-signed: 6.8.12-1
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
 
It can be a different case depending on the hardware and software components used. I supposed it was the kernel 6.8 regression with the megaraid driver, based on your RAID controller (disks no longer found after the upgrade), judging from your message:
Code:
TASK ERROR: no such logical volume pve/data
pvesm status
pvesm lvmthinscan pve
No matching physical volumes found
no such logical volume pve/data

Volume group "pve" not found
Cannot process volume group pve

But I had looked very quickly and missed the following output, which suggests a different problem:
Code:
13:39:35.066172 pvscan[2286637] activate/dev_manager.c:636 /dev/mapper/pve-data_tmeta: Reserved uuid LVM-YtP2y9YVy5j00CJjzTFimjg3z0ys11JVMbx
  meta on internal LV device pve-data_tmeta not usable.
13:39:35.066187 pvscan[2286637] filters/filter-usable.c:95 /dev/mapper/pve-data_tmeta: Skipping unusable device.
13:39:35.066216 pvscan[2286637] device/dev-io.c:120 /dev/loop3: size is 0 sectors
13:39:35.066282 pvscan[2286637] device/dev-io.c:120 /dev/sda3: size is 1873220239 sectors
13:39:35.066425 pvscan[2286637] activate/dev_manager.c:636 /dev/mapper/pve-data_tdata: Reserved uuid LVM-YtP2y9YVy5j00CJjzTFimjg3z0ys11JVaLh
  data on internal LV device pve-data_tdata not usable.
13:39:35.066436 pvscan[2286637] filters/filter-usable.c:95 /dev/mapper/pve-data_tdata: Skipping unusable device.

I think you need to wait for someone more experienced in this regard. In the meantime, I suggest making sure the physical disks of the volume are visible and not damaged; check the SMART data and the kernel log, for example along the lines sketched below.
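
A rough sketch of those checks (device names are examples; disks behind a MegaRAID controller usually need the -d megaraid,N addressing):

Code:
# SMART health of a disk behind the MegaRAID controller (slot 0 shown)
smartctl -a -d megaraid,0 /dev/sda
# Controller and disk errors in the kernel log
dmesg | grep -i megaraid
journalctl -k -b | grep -i raid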
 
