TASK ERROR: activating LV 'pve/data' failed: Check of pool pve/data failed (status:64). Manual repair required!

zimm
New Member · Jun 29, 2025
I'm not sure how to go about resolving this. I believe it's a space constraint on my host drive, but both partitions show ample free space, so I'm not certain. Any suggestions on how to resolve it?

System:
Code:
CPU(s) 32 x AMD Ryzen 9 7950X 16-Core Processor (1 Socket)
Kernel Version Linux 6.14.8-2-pve (2025-07-22T10:04Z)
Boot Mode EFI
Manager Version pve-manager/9.0.3/025864202ebb6109

Code:
root@pve:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree 
  pve   1  15   0 wz--n- <930.51g 920.00m
root@pve:~#

Code:
root@pve:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0   9.1T  0 disk
└─sda1               8:1    0   9.1T  0 part
sdb                  8:16   0   9.1T  0 disk
└─sdb1               8:17   0   9.1T  0 part
sdc                  8:32   0   9.1T  0 disk
└─sdc1               8:33   0   9.1T  0 part
sdd                  8:48   0   9.1T  0 disk
└─sdd1               8:49   0   9.1T  0 part
sde                  8:64   0   9.1T  0 disk
└─sde1               8:65   0   9.1T  0 part
sdf                  8:80   0  10.9T  0 disk
└─sdf1               8:81   0  10.9T  0 part
sdg                  8:96   0  10.9T  0 disk
└─sdg1               8:97   0  10.9T  0 part
sdh                  8:112  0   9.1T  0 disk
└─sdh1               8:113  0   9.1T  0 part
sdi                  8:128  0  10.9T  0 disk
└─sdi1               8:129  0  10.9T  0 part
sdj                  8:144  0   9.1T  0 disk
└─sdj1               8:145  0   9.1T  0 part
sdk                  8:160  0  10.9T  0 disk
└─sdk1               8:161  0  10.9T  0 part
sdl                  8:176  0   9.1T  0 disk
└─sdl1               8:177  0   9.1T  0 part
sdm                  8:192  0   9.1T  0 disk
└─sdm1               8:193  0   9.1T  0 part
sdn                  8:208  0   9.1T  0 disk
└─sdn1               8:209  0   9.1T  0 part
sdo                  8:224  0   1.8T  0 disk
└─sdo1               8:225  0   1.8T  0 part
sdp                  8:240  0   1.8T  0 disk
└─sdp1               8:241  0   1.8T  0 part
sdq                 65:0    0   1.8T  0 disk
└─sdq1              65:1    0   1.8T  0 part
sdr                 65:16   0   1.8T  0 disk
└─sdr1              65:17   0   1.8T  0 part
sds                 65:32   0 223.6G  0 disk
├─sds1              65:33   0 223.6G  0 part
└─sds9              65:41   0     8M  0 part
sdt                 65:48   0 223.6G  0 disk
├─sdt1              65:49   0 223.6G  0 part
└─sdt9              65:57   0     8M  0 part
nvme0n1            259:0    0 931.5G  0 disk
├─nvme0n1p1        259:1    0  1007K  0 part
├─nvme0n1p2        259:2    0     1G  0 part /boot/efi
└─nvme0n1p3        259:3    0 930.5G  0 part
  ├─pve-swap       252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       252:1    0   103G  0 lvm  /
  ├─pve-data_meta0 252:2    0   8.1G  1 lvm 
  └─pve-data_meta1 252:3    0   8.1G  1 lvm 
root@pve:~#

Code:
root@pve:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  80
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                15
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <930.51 GiB
  PE Size               4.00 MiB
  Total PE              238210
  Alloc PE / Size       237980 / <929.61 GiB
  Free  PE / Size       230 / 920.00 MiB
  VG UUID               1zhiwe-HoHT-fazi-Cfll-Cs8L-Gspc-yxgz6B
  
root@pve:~#

Code:
root@pve:~# du -Shx / | sort -rh | head -15
9.1G    /root
793M    /usr/bin
671M    /var/lib/vz/template/iso
533M    /var/cache/apt/archives
441M    /usr/lib/x86_64-linux-gnu
416M    /boot
311M    /usr/share/kvm
309M    /var/log/journal/b0590098690d4430a931d19c1d22d668
220M    /var/lib/unifi/db/diagnostic.data
201M    /var/lib/unifi/db/journal
174M    /usr/lib
139M    /usr/lib/jvm/java-21-openjdk-amd64/lib
100M    /usr/lib/firmware/amdgpu
91M     /var/cache/apt
88M     /var/lib/apt/lists
root@pve:~#

Code:
root@pve:~# journalctl -xb
Aug 06 17:33:29 pve kernel: Linux version 6.14.8-2-pve (build@proxmox) (gcc (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC PMX 6>
Aug 06 17:33:29 pve kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.14.8-2-pve root=/dev/mapper/pve-root ro quiet
Aug 06 17:33:29 pve kernel: KERNEL supported cpus:
Aug 06 17:33:29 pve kernel:   Intel GenuineIntel
Aug 06 17:33:29 pve kernel:   AMD AuthenticAMD
Aug 06 17:33:29 pve kernel:   Hygon HygonGenuine
Aug 06 17:33:29 pve kernel:   Centaur CentaurHauls
Aug 06 17:33:29 pve kernel:   zhaoxin   Shanghai 
Aug 06 17:33:29 pve kernel: BIOS-provided physical RAM map:
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x00000000000a0000-0x00000000000fffff] reserved
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x0000000000100000-0x0000000009afefff] usable
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x0000000009aff000-0x0000000009ffffff] reserved
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x000000000a000000-0x000000000a1fffff] usable
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x000000000a200000-0x000000000a211fff] ACPI NVS
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x000000000a212000-0x000000000affffff] usable
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x000000000b000000-0x000000000b020fff] reserved
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x000000000b021000-0x000000008857efff] usable
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x000000008857f000-0x000000008e57efff] reserved
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x000000008e57f000-0x000000008e67efff] ACPI data
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x000000008e67f000-0x000000009067efff] ACPI NVS
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x000000009067f000-0x00000000987fefff] reserved
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x00000000987ff000-0x0000000099ff8fff] usable
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x0000000099ff9000-0x0000000099ffbfff] reserved
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x0000000099ffc000-0x0000000099ffffff] usable
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x000000009a000000-0x000000009bffffff] reserved
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x000000009d7f3000-0x000000009fffffff] reserved
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x00000000fd000000-0x00000000ffffffff] reserved
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x0000000100000000-0x000000203de7ffff] usable
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x000000203eec0000-0x00000020801fffff] reserved
Aug 06 17:33:29 pve kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 06 17:33:29 pve kernel: NX (Execute Disable) protection: active
Aug 06 17:33:29 pve kernel: APIC: Static calls initialized
Aug 06 17:33:29 pve kernel: e820: update [mem 0x81c32018-0x81c88c57] usable ==> usable
Aug 06 17:33:29 pve kernel: e820: update [mem 0x81bdb018-0x81c31c57] usable ==> usable
Aug 06 17:33:29 pve kernel: e820: update [mem 0x81d47018-0x81d51e57] usable ==> usable
Aug 06 17:33:29 pve kernel: extended physical RAM map:
Aug 06 17:33:29 pve kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 06 17:33:29 pve kernel: reserve setup_data: [mem 0x00000000000a0000-0x00000000000fffff] reserved
Aug 06 17:33:29 pve kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000009afefff] usable
Aug 06 17:33:29 pve kernel: reserve setup_data: [mem 0x0000000009aff000-0x0000000009ffffff] reserved
Aug 06 17:33:29 pve kernel: reserve setup_data: [mem 0x000000000a000000-0x000000000a1fffff] usable
Aug 06 17:33:29 pve kernel: reserve setup_data: [mem 0x000000000a200000-0x000000000a211fff] ACPI NVS
Aug 06 17:33:29 pve kernel: reserve setup_data: [mem 0x000000000a212000-0x000000000affffff] usable
Aug 06 17:33:29 pve kernel: reserve setup_data: [mem 0x000000000b000000-0x000000000b020fff] reserved
...skipping...
░░ Subject: User manager start-up is now complete
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The user manager instance for user 0 has been started. All services queued
░░ for starting have been started. Note that other services might still be starting
░░ up or be started at any later time.
░░
░░ Startup of the manager took 255140 microseconds.
Aug 06 18:20:15 pve systemd[1]: Started user@0.service - User Manager for UID 0.
░░ Subject: A start job for unit user@0.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit user@0.service has finished successfully.
░░
░░ The job identifier is 425.
Aug 06 18:20:15 pve systemd[1]: Started session-2.scope - Session 2 of User root.
░░ Subject: A start job for unit session-2.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit session-2.scope has finished successfully.
░░
░░ The job identifier is 549.
Aug 06 18:20:15 pve login[21742]: ROOT LOGIN ON pts/0
Aug 06 18:20:15 pve pvestatd[3087]: activating LV 'pve/data' failed:   Check of pool pve/data failed (status:64). Manual repair required!
Aug 06 18:20:25 pve pvestatd[3087]: activating LV 'pve/data' failed:   Check of pool pve/data failed (status:64). Manual repair required!
[...the same pvestatd message repeats roughly every 10 seconds through 18:23:24...]

Code:
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

root@pve:~#

Code:
root@pve:~# pvs
  PV             VG  Fmt  Attr PSize    PFree 
  /dev/nvme0n1p3 pve lvm2 a--  <930.51g 920.00m
root@pve:~#

Code:
root@pve:~# lvconvert --repair pve/data
  Volume group "pve" has insufficient free space (230 extents): 2074 required.
root@pve:~#

 
Hi,
could you provide the /var/log/apt/term.log from during the upgrade (please check whether it has already been rotated; the relevant log might then be in term.log.1.gz etc. instead), as well as the system journal from right before the upgrade until the pool was repaired? For example: journalctl --since "2025-08-05 17:00:00" --until "2025-08-07 11:00:00" > /tmp/journal.txt, with the timestamps adapted.
 
Code:
root@pve:~# pvs
  PV             VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1p3 pve lvm2 a--  <930.51g 920.00m
root@pve:~#

Code:
root@pve:~# lvconvert --repair pve/data
  Volume group "pve" has insufficient free space (230 extents): 2074 required.
root@pve:~#
You still have free space on the PV, so you could try extending the volume group, but it's not enough, as you need (2074-230) * 4 MiB = 7376 MiB.

Can you free up space in the VG some other way? What does the output of lvs show? For example, if you have a big enough swap LV, you could deactivate it on the host and remove it.
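
For reference, a rough sketch of that approach, assuming a default Proxmox VE install where the swap LV is named pve/swap and is about 8 GiB (adapt names and sizes to your layout; this is only an illustration, not an official procedure):
Code:
# temporarily drop the swap LV to free extents in the VG, then repair the pool
swapoff /dev/pve/swap           # stop using the swap LV
lvremove pve/swap               # returns its extents to the VG (confirm the prompt)
lvconvert --repair pve/data     # now there is room for the temporary repair metadata
# recreate and re-enable swap afterwards (same name, so /etc/fstab keeps working)
lvcreate -L 8G -n swap pve
mkswap /dev/pve/swap
swapon /dev/pve/swap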
 
Could you also post the output of grep -e thin_check_options -e auto-repair /etc/lvm/lvm.conf?
 
Hope that helps: files attached

root@pve1:~# grep -e thin_check_options -e auto-repair /etc/lvm/lvm.conf
# (Not recommended.) Also see thin_check_options.
# Configuration option global/thin_check_options.
# thin_check_options = [ "-q", "--clear-needs-check-flag" ]
 

In your journal I see
Code:
Aug 07 21:44:11 pve1 lvm[2198425]:   No input from event server.
Aug 07 21:44:11 pve1 lvm[2198425]:   WARNING: Failed to unmonitor pve/data.
Aug 07 21:44:11 pve1 lvm[2198425]:   8 logical volume(s) in volume group "pve" unmonitored
Aug 07 21:44:11 pve1 systemd[1]: lvm2-monitor.service: Control process exited, code=exited, status=5/NOTINSTALLED
Aug 07 21:44:11 pve1 systemd[1]: lvm2-monitor.service: Failed with result 'exit-code'.
Aug 07 21:44:11 pve1 systemd[1]: Stopped lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
which might be hints. But no smoking gun yet.

In your term.log I see the following, which might not be related to the issue, but it's recommended you follow the suggestion if you haven't already done so:
Code:
Removable bootloader found at '/boot/efi/EFI/BOOT/BOOTX64.efi', but GRUB packages not set up to update it!
Run the following command:

echo 'grub-efi-amd64 grub2/force_efi_extra_removable boolean true' | debconf-set-selections -v -u

Then reinstall GRUB with 'apt install --reinstall grub-efi-amd64'
 
This happened to one of my nodes, as well. Running lvconvert --repair pve/data as suggested above fixed it.

EDIT: Looks like this issue has been added to the wiki.
 
Hi,
could you provide the /var/log/apt/term.log from during the upgrade (please check whether it has already been rotated; the relevant log might then be in term.log.1.gz etc. instead), as well as the system journal from right before the upgrade until the pool was repaired? For example: journalctl --since "2025-08-05 17:00:00" --until "2025-08-07 11:00:00" > /tmp/journal.txt, with the timestamps adapted.
Hi. Attached are the requested logs. I updated to the beta version of 9 the day it was released. Please change the extension on the .zip files to .7z.
 


You still have free space on the PV, so you could try extending the volume group, but it's not enough, as you need (2074-230) * 4 MiB = 7376 MiB.

Can you free up space in the VG some other way? What does the output of lvs show? For example, if you have a big enough swap LV, you could deactivate it on the host and remove it.
Please see the attached screenshots. I'm not sure why I'm running out of space when both volumes have plenty free. The only reason pve/root has 50G instead of what I installed it with, 100G (the default), is that I used GParted to shrink it. That freed enough space to run the repair. After the repair completes, local-lvm comes back online, until it crashes again, usually a few hours later.

Code:
root@pve:~# lvconvert --repair pve/data
 

Attachments: local (pve).png, local-lvm (pve).png
Could you also post the output of grep -e thin_check_options -e auto-repair /etc/lvm/lvm.conf?
Code:
root@pve:/tmp# grep -e thin_check_options -e auto-repair /etc/lvm/lvm.conf
        # (Not recommended.) Also see thin_check_options.
        # Configuration option global/thin_check_options.
        # thin_check_options = [ "-q", "--clear-needs-check-flag" ]
root@pve:/tmp#

Edit: Attaching lvm.conf and lvmlocal.conf from /etc/lvm/
 


Hi,
thank you for all the logs! We identified the issue and it's a missing build flag for Debian's lvm2 package. For details, see: https://lore.proxmox.com/pve-devel/20250808200818.169456-1-f.ebner@proxmox.com/T/

In short, thin pools with certain minor issues won't get auto-repaired and won't auto-activate unless a certain option is specified. That option should be part of the default configuration, but the missing build flag means the compiled-in default doesn't include it.
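
A possible interim workaround (a sketch based on the commented-out default visible in the lvm.conf output above, not an official recommendation) would be to set the option explicitly, so thin_check clears the needs-check flag again during activation:
Code:
# /etc/lvm/lvm.conf, in the global { } section:
thin_check_options = [ "-q", "--clear-needs-check-flag" ]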

Our plan is to re-build and ship an updated version of the package until the bug is fixed in Debian Trixie itself.

EDIT: the rebuilt lvm2 package 2.03.31-2+pmx1 is available on the pve-test repository now.
 
Hello,

Command "lvconvert -v --repair data/data" doesn't work for my. Getting this :

root@pve-server-03:/var/lib/backup# lvconvert -v --repair data/data
activation/volume_list configuration setting not defined: Checking only host tags for data/lvol0_pmspare.
Creating data-lvol0_pmspare
Loading table for data-lvol0_pmspare (252:1).
Resuming data-lvol0_pmspare (252:1).
activation/volume_list configuration setting not defined: Checking only host tags for data/data_tmeta.
Creating data-data_tmeta
Loading table for data-data_tmeta (252:7).
Resuming data-data_tmeta (252:7).
Executing: /usr/sbin/thin_repair -i /dev/data/data_tmeta -o /dev/data/lvol0_pmspare
no compatible roots found
/usr/sbin/thin_repair failed: 64
Repair of thin metadata volume of thin pool data/data failed (status:64). Manual repair required!
Removing data-data_tmeta (252:7)
Removing data-lvol0_pmspare (252:1)

I would appreciate any help.

Cheers.
 
Hi,
Command "lvconvert -v --repair data/data" doesn't work for my. Getting this :
what does lvs -a data say? What about vgs and pvs? Is there a pmspare volume? How was the thin pool initially created? With Proxmox VE or manually on the CLI with specific settings?
 
Hi Fiona,

# lvs -a data
File descriptor 9 (pipe:[37162]) leaked on lvs invocation. Parent PID 6959: /bin/bash
File descriptor 11 (pipe:[37163]) leaked on lvs invocation. Parent PID 6959: /bin/bash
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data data twi---tz-- <3.64t
[data_tdata] data Twi------- <3.64t
[data_tmeta] data ewi------- 120.00m
[lvol0_pmspare] data ewi------- 120.00m
snap_vm-100-disk-0_vm_ad_server2022_clean_14012023 data Vri---tz-k 64.00g data vm-100-disk-0
snap_vm-100-disk-0_vm_ad_server2022_demote_14012023 data Vri---tz-k 64.00g data vm-100-disk-0
snap_vm-100-disk-1_vm_ad_server2022_clean_14012023 data Vri---tz-k 4.00m data vm-100-disk-1
snap_vm-100-disk-1_vm_ad_server2022_demote_14012023 data Vri---tz-k 4.00m data vm-100-disk-1
snap_vm-100-disk-2_vm_ad_server2022_clean_14012023 data Vri---tz-k 4.00m data vm-100-disk-2
snap_vm-100-disk-2_vm_ad_server2022_demote_14012023 data Vri---tz-k 4.00m data vm-100-disk-2
snap_vm-102-disk-0_ct_postgres3_debian11_05022023 data Vri---tz-k 8.00g data vm-102-disk-0
snap_vm-102-disk-1_ct_postgres3_debian11_05022023 data Vri---tz-k 64.00g data vm-102-disk-1
snap_vm-103-disk-0_ct_bareos3_debian11_05022023 data Vri---tz-k 8.00g data vm-103-disk-0
snap_vm-103-disk-1_ct_bareos3_debian11_05022023 data Vri---tz-k 64.00g data vm-103-disk-1
vm-100-disk-0 data Vwi---tz-- 64.00g data
vm-100-disk-1 data Vwi---tz-- 4.00m data
vm-100-disk-2 data Vwi---tz-- 4.00m data
vm-100-state-vm_ad_server2022_demote_14012023 data Vwi---tz-- <8.49g data
vm-102-disk-0 data Vwi---tz-- 8.00g data
vm-102-disk-1 data Vwi---tz-- 64.00g data
vm-103-disk-0 data Vwi---tz-- 8.00g data
vm-103-disk-1 data Vwi---tz-- 64.00g data
vm-104-disk-0 data Vwi---tz-- 8.00g data
vm-104-disk-1 data Vwi---tz-- 512.00g data
vm-105-disk-0 data Vwi---tz-- 64.00g data
vm-105-disk-1 data Vwi---tz-- 4.00m data
vm-107-disk-0 data Vwi---tz-- 4.00m data
vm-107-disk-1 data Vwi---tz-- 64.00g data
vm-107-disk-2 data Vwi---tz-- 4.00m data

# vgs
File descriptor 9 (pipe:[37162]) leaked on vgs invocation. Parent PID 6959: /bin/bash
File descriptor 11 (pipe:[37163]) leaked on vgs invocation. Parent PID 6959: /bin/bash
VG #PV #LV #SN Attr VSize VFree
data 2 26 0 wz--n- <3.64t 0
proxmox 1 6 0 wz--n- <1.82t 832.00m

# pvs
File descriptor 9 (pipe:[37162]) leaked on pvs invocation. Parent PID 6959: /bin/bash
File descriptor 11 (pipe:[37163]) leaked on pvs invocation. Parent PID 6959: /bin/bash
PV VG Fmt Attr PSize PFree
/dev/sda2 proxmox lvm2 a-- <1.82t 832.00m
/dev/sdb data lvm2 a-- <1.82t 0
/dev/sdc data lvm2 a-- <1.82t 0

# vgchange -v -ay data
File descriptor 9 (pipe:[37162]) leaked on vgchange invocation. Parent PID 6959: /bin/bash
File descriptor 11 (pipe:[37163]) leaked on vgchange invocation. Parent PID 6959: /bin/bash
Activating logical volume data/data.
activation/volume_list configuration setting not defined: Checking only host tags for data/data.
Creating data-data_tmeta
Loading table for data-data_tmeta (252:0).
Resuming data-data_tmeta (252:0).
Creating data-data_tdata
Loading table for data-data_tdata (252:1).
Resuming data-data_tdata (252:1).
Executing: /usr/sbin/thin_check -q /dev/mapper/data-data_tmeta
/usr/sbin/thin_check failed: 64
Check of pool data/data failed (status:64). Manual repair required!
Removing data-data_tmeta (252:0)
Removing data-data_tdata (252:1)
Activating logical volume data/vm-100-disk-0.
activation/volume_list configuration setting not defined: Checking only host tags for data/vm-100-disk-0.
Creating data-data_tmeta
Loading table for data-data_tmeta (252:0).
Resuming data-data_tmeta (252:0).
Creating data-data_tdata
Loading table for data-data_tdata (252:1).
Resuming data-data_tdata (252:1).
Executing: /usr/sbin/thin_check -q /dev/mapper/data-data_tmeta
/usr/sbin/thin_check failed: 64
Check of pool data/data failed (status:64). Manual repair required!
Removing data-data_tmeta (252:0)
Removing data-data_tdata (252:1)
Activating logical volume data/vm-100-disk-1.
activation/volume_list configuration setting not defined: Checking only host tags for data/vm-100-disk-1.
Creating data-data_tmeta
Loading table for data-data_tmeta (252:0).
Resuming data-data_tmeta (252:0).
Creating data-data_tdata
Loading table for data-data_tdata (252:1).
Resuming data-data_tdata (252:1).
Executing: /usr/sbin/thin_check -q /dev/mapper/data-data_tmeta
/usr/sbin/thin_check failed: 64
Check of pool data/data failed (status:64). Manual repair required!
Removing data-data_tmeta (252:0)
Removing data-data_tdata (252:1)
Activating logical volume data/vm-100-disk-2.
activation/volume_list configuration setting not defined: Checking only host tags for data/vm-100-disk-2.
Creating data-data_tmeta
Loading table for data-data_tmeta (252:0).
Resuming data-data_tmeta (252:0).
Creating data-data_tdata
Loading table for data-data_tdata (252:1).
Resuming data-data_tdata (252:1).
Executing: /usr/sbin/thin_check -q /dev/mapper/data-data_tmeta
/usr/sbin/thin_check failed: 64
Check of pool data/data failed (status:64). Manual repair required!
Removing data-data_tmeta (252:0)
Removing data-data_tdata (252:1)
ACTIVATION_SKIP flag set for LV data/snap_vm-100-disk-0_vm_ad_server2022_clean_14012023, skipping activation.
ACTIVATION_SKIP flag set for LV data/snap_vm-100-disk-1_vm_ad_server2022_clean_14012023, skipping activation.
ACTIVATION_SKIP flag set for LV data/snap_vm-100-disk-2_vm_ad_server2022_clean_14012023, skipping activation.
Activating logical volume data/vm-102-disk-0.
activation/volume_list configuration setting not defined: Checking only host tags for data/vm-102-disk-0.
Creating data-data_tmeta
Loading table for data-data_tmeta (252:0).
Resuming data-data_tmeta (252:0).
Creating data-data_tdata
Loading table for data-data_tdata (252:1).
Resuming data-data_tdata (252:1).
Executing: /usr/sbin/thin_check -q /dev/mapper/data-data_tmeta
/usr/sbin/thin_check failed: 64
Check of pool data/data failed (status:64). Manual repair required!
Removing data-data_tmeta (252:0)
Removing data-data_tdata (252:1)
Activating logical volume data/vm-102-disk-1.
activation/volume_list configuration setting not defined: Checking only host tags for data/vm-102-disk-1.
Creating data-data_tmeta
Loading table for data-data_tmeta (252:0).
Resuming data-data_tmeta (252:0).
Creating data-data_tdata
Loading table for data-data_tdata (252:1).
Resuming data-data_tdata (252:1).
Executing: /usr/sbin/thin_check -q /dev/mapper/data-data_tmeta
/usr/sbin/thin_check failed: 64
Check of pool data/data failed (status:64). Manual repair required!
Removing data-data_tmeta (252:0)
Removing data-data_tdata (252:1)
Activating logical volume data/vm-103-disk-0.
activation/volume_list configuration setting not defined: Checking only host tags for data/vm-103-disk-0.
Creating data-data_tmeta
Loading table for data-data_tmeta (252:0).
Resuming data-data_tmeta (252:0).
Creating data-data_tdata
Loading table for data-data_tdata (252:1).
Resuming data-data_tdata (252:1).
Executing: /usr/sbin/thin_check -q /dev/mapper/data-data_tmeta
/usr/sbin/thin_check failed: 64
Check of pool data/data failed (status:64). Manual repair required!
Removing data-data_tmeta (252:0)
Removing data-data_tdata (252:1)
Activating logical volume data/vm-103-disk-1.
activation/volume_list configuration setting not defined: Checking only host tags for data/vm-103-disk-1.
Creating data-data_tmeta
Loading table for data-data_tmeta (252:0).
Resuming data-data_tmeta (252:0).
Creating data-data_tdata
Loading table for data-data_tdata (252:1).
Resuming data-data_tdata (252:1).
Executing: /usr/sbin/thin_check -q /dev/mapper/data-data_tmeta
/usr/sbin/thin_check failed: 64
Check of pool data/data failed (status:64). Manual repair required!
Removing data-data_tmeta (252:0)
Removing data-data_tdata (252:1)
ACTIVATION_SKIP flag set for LV data/snap_vm-103-disk-0_ct_bareos3_debian11_05022023, skipping activation.
ACTIVATION_SKIP flag set for LV data/snap_vm-103-disk-1_ct_bareos3_debian11_05022023, skipping activation.
ACTIVATION_SKIP flag set for LV data/snap_vm-102-disk-0_ct_postgres3_debian11_05022023, skipping activation.
ACTIVATION_SKIP flag set for LV data/snap_vm-102-disk-1_ct_postgres3_debian11_05022023, skipping activation.
Activating logical volume data/vm-104-disk-0.
activation/volume_list configuration setting not defined: Checking only host tags for data/vm-104-disk-0.
Creating data-data_tmeta
Loading table for data-data_tmeta (252:0).
Resuming data-data_tmeta (252:0).
Creating data-data_tdata
Loading table for data-data_tdata (252:1).
Resuming data-data_tdata (252:1).
Executing: /usr/sbin/thin_check -q /dev/mapper/data-data_tmeta
/usr/sbin/thin_check failed: 64
Check of pool data/data failed (status:64). Manual repair required!
Removing data-data_tmeta (252:0)
Removing data-data_tdata (252:1)
Activating logical volume data/vm-104-disk-1.
activation/volume_list configuration setting not defined: Checking only host tags for data/vm-104-disk-1.
Creating data-data_tmeta
Loading table for data-data_tmeta (252:0).
Resuming data-data_tmeta (252:0).
Creating data-data_tdata
Loading table for data-data_tdata (252:1).
Resuming data-data_tdata (252:1).
Executing: /usr/sbin/thin_check -q /dev/mapper/data-data_tmeta
/usr/sbin/thin_check failed: 64
Check of pool data/data failed (status:64). Manual repair required!
Removing data-data_tmeta (252:0)
Removing data-data_tdata (252:1)
Activating logical volume data/vm-100-state-vm_ad_server2022_demote_14012023.
activation/volume_list configuration setting not defined: Checking only host tags for data/vm-100-state-vm_ad_server2022_demote_14012023.
Creating data-data_tmeta
Loading table for data-data_tmeta (252:0).
Resuming data-data_tmeta (252:0).
Creating data-data_tdata
Loading table for data-data_tdata (252:1).
Resuming data-data_tdata (252:1).
Executing: /usr/sbin/thin_check -q /dev/mapper/data-data_tmeta
/usr/sbin/thin_check failed: 64
Check of pool data/data failed (status:64). Manual repair required!
Removing data-data_tmeta (252:0)
Removing data-data_tdata (252:1)
ACTIVATION_SKIP flag set for LV data/snap_vm-100-disk-0_vm_ad_server2022_demote_14012023, skipping activation.
ACTIVATION_SKIP flag set for LV data/snap_vm-100-disk-1_vm_ad_server2022_demote_14012023, skipping activation.
ACTIVATION_SKIP flag set for LV data/snap_vm-100-disk-2_vm_ad_server2022_demote_14012023, skipping activation.
Activated 0 logical volumes in volume group data.
0 logical volume(s) in volume group "data" now active

The thin pool was created by hand based on the Proxmox documentation. This is an upgrade from Debian Bookworm to Trixie, on one node of a two-node Proxmox cluster. The other node upgraded without issues; this second one refuses to activate the LVM. Both of them use thin pools.

Also, as you can see, there is another VG called proxmox (no thin pool), and it activates fine too.

Thank you for your help.
 
Running into the same issue on my update from 8 to 9, but the proposed workaround of `lvconvert --repair` does not work for my setup:

Code:
root@goliath:~# lvconvert -v --repair NVME/NVME
Preparing pool metadata spare volume for Volume group NVME.
Volume group "NVME" has insufficient free space (0 extents): 4065 required.

root@goliath:~# vgs 
  VG   #PV #LV #SN Attr   VSize    VFree 
  NVME   1  14   0 wz--n-   <1.82t     0 
  pve    1   3   0 wz--n- <464.76g 16.00g
root@goliath:~# pvs
  PV           VG   Fmt  Attr PSize    PFree 
  /dev/nvme0n1 NVME lvm2 a--    <1.82t     0 
  /dev/sdb3    pve  lvm2 a--  <464.76g 16.00g
root@goliath:~# lvs -a NVME
  LV            VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  NVME          NVME twi---tz--  <1.79t                                                    
  NVME_meta0    NVME -wi-a----- <15.88g                                                    
  [NVME_tdata]  NVME Twi-------  <1.79t                                                    
  [NVME_tmeta]  NVME ewi------- <15.88g                                                    
  vm-100-disk-0 NVME Vwi---tz--  32.00g NVME                                               
  vm-101-disk-0 NVME Vwi---tz-- 128.00g NVME                                               
  vm-102-disk-0 NVME Vwi---tz--   8.00g NVME                                               
  vm-103-disk-0 NVME Vwi---tz-- 256.00g NVME                                               
  vm-104-disk-0 NVME Vwi---tz-- 128.00g NVME                                               
  vm-105-disk-0 NVME Vwi---tz--  64.00g NVME                                               
  vm-106-disk-1 NVME Vwi---tz--  32.00g NVME                                               
  vm-111-disk-0 NVME Vwi---tz--  32.00g NVME                                               
  vm-112-disk-0 NVME Vwi---tz--  16.00g NVME                                               
  vm-113-disk-0 NVME Vwi---tz--  32.00g NVME                                               
  vm-129-disk-0 NVME Vwi---tz--  64.00g NVME                                               
  vm-129-disk-1 NVME Vwi---tz--   4.00m NVME

Summing up all the LSize values for my disks yields a number way lower than my disk's total capacity of 2 TB. I'm not quite sure why VFree/PFree are all 0. Even if all my VMs were magically using 200% of their allocated space, I should still have non-zero free space. Maybe there's something about LVM I don't know.
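
As far as I understand, the pool's hidden [NVME_tdata] and [NVME_tmeta] volumes are allocated from the VG at their full size regardless of how much the thin volumes inside actually use, which would explain VFree/PFree being 0. Something like this (inspection only, nothing is changed) shows pool allocation vs. actual usage:
Code:
# how much of the VG the pool occupies (Data%/Meta% only show once the pool is active)
vgs NVME -o vg_size,vg_free,lv_count
lvs -a NVME -o lv_name,lv_size,data_percent,metadata_percent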

Tried shrinking the main tdata volume, but "Thin pool volumes [...] cannot be reduced in size yet."

Anyway, I noticed that there were two metadata volumes (NVME_meta0, NVME_tmeta). Feeling like living on the edge, I deleted NVME_meta0 (lvremove NVME/NVME_meta0) and reran the repair. After the repair:
Code:
root@goliath:~# lvconvert --repair NVME/NVME
  Volume group "NVME" has insufficient free space (0 extents): 4065 required.
  WARNING: LV NVME/NVME_meta0 holds a backup of the unrepaired metadata. Use lvremove when no longer required.

Well, I think I know where that meta0 came from. I probably had to repair this in the past and never cleaned up the backed-up meta0. The repair did take its time, and despite showing the "insufficient free space" error again, it succeeded. A reboot later, and all my CTs and VMs were back up and running.
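
Condensed, the sequence that worked here, assuming (as in my case) that a stale metadata backup LV such as NVME_meta0 from an earlier repair is what is eating the free space; check with lvs -a first and don't remove anything you're not sure about:
Code:
lvs -a NVME                     # look for a leftover *_meta0 backup from an earlier repair
lvremove NVME/NVME_meta0        # free its extents (confirm the prompt)
lvconvert --repair NVME/NVME    # repair the thin pool
lvs -a NVME                     # the repair leaves a new NVME_meta0 backup; lvremove it when no longer needed
reboot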
 