TASK ERROR: activating LV 'pve/data' failed: Check of pool pve/data failed (status:64). Manual repair required!

Hi Derock,

It didn't work for me. Still getting:

Code:
# lvconvert -v --repair data/data
  Preparing pool metadata spare volume for Volume group data.
  Creating logical volume lvol0
  Archiving volume group "data" metadata (seqno 189).
  Activating logical volume data/lvol0.
  activation/volume_list configuration setting not defined: Checking only host tags for data/lvol0.
  Creating data-lvol0
  Loading table for data-lvol0 (252:0).
  Resuming data-lvol0 (252:0).
  Initializing 120.00 MiB of logical volume data/lvol0 with value 0.
  Temporary logical volume "lvol0" created.
  Removing data-lvol0 (252:0)
  Renaming lvol0 as pool metadata spare volume lvol0_pmspare.
  Archiving volume group "data" metadata (seqno 190).
  activation/volume_list configuration setting not defined: Checking only host tags for data/lvol0_pmspare.
  Creating data-lvol0_pmspare
  Loading table for data-lvol0_pmspare (252:0).
  Resuming data-lvol0_pmspare (252:0).
  activation/volume_list configuration setting not defined: Checking only host tags for data/data_tmeta.
  Creating data-data_tmeta
  Loading table for data-data_tmeta (252:1).
  Resuming data-data_tmeta (252:1).
  Executing: /usr/sbin/thin_repair -i /dev/data/data_tmeta -o /dev/data/lvol0_pmspare
no compatible roots found
  /usr/sbin/thin_repair failed: 64
  Repair of thin metadata volume of thin pool data/data failed (status:64). Manual repair required!
  Removing data-data_tmeta (252:1)
  Removing data-lvol0_pmspare (252:0)
  Creating volume group backup "/etc/lvm/backup/data" (seqno 191).

Code:
# lvchange -v -ay data/data
  Activating logical volume data/data.
  activation/volume_list configuration setting not defined: Checking only host tags for data/data.
  Creating data-data_tmeta
  Loading table for data-data_tmeta (252:0).
  Resuming data-data_tmeta (252:0).
  Creating data-data_tdata
  Loading table for data-data_tdata (252:1).
  Resuming data-data_tdata (252:1).
  Executing: /usr/sbin/thin_check -q /dev/mapper/data-data_tmeta
  /usr/sbin/thin_check failed: 64
  Check of pool data/data failed (status:64). Manual repair required!
  Removing data-data_tmeta (252:0)
  Removing data-data_tdata (252:1)

Code:
# lvscan -v
  ACTIVE            '/dev/proxmox/swap' [8.00 GiB] inherit
  ACTIVE            '/dev/proxmox/root' [46.56 GiB] inherit
  ACTIVE            '/dev/proxmox/home' [<27.94 GiB] inherit
  ACTIVE            '/dev/proxmox/storage' [<149.02 GiB] inherit
  ACTIVE            '/dev/proxmox/backup' [1.59 TiB] inherit
  ACTIVE            '/dev/proxmox/repair_spare' [512.00 MiB] inherit
  inactive          '/dev/data/data' [<3.64 TiB] inherit
  inactive          '/dev/data/vm-100-disk-0' [64.00 GiB] inherit
  inactive          '/dev/data/vm-100-disk-1' [4.00 MiB] inherit
  inactive          '/dev/data/vm-100-disk-2' [4.00 MiB] inherit
  inactive          '/dev/data/snap_vm-100-disk-0_vm_ad_server2022_clean_14012023' [64.00 GiB] inherit
  inactive          '/dev/data/snap_vm-100-disk-1_vm_ad_server2022_clean_14012023' [4.00 MiB] inherit
  inactive          '/dev/data/snap_vm-100-disk-2_vm_ad_server2022_clean_14012023' [4.00 MiB] inherit
  inactive          '/dev/data/vm-102-disk-0' [8.00 GiB] inherit
  inactive          '/dev/data/vm-102-disk-1' [64.00 GiB] inherit
  inactive          '/dev/data/vm-103-disk-0' [8.00 GiB] inherit
  inactive          '/dev/data/vm-103-disk-1' [64.00 GiB] inherit
  inactive          '/dev/data/snap_vm-102-disk-0_ct_postgres3_debian11_05022023' [8.00 GiB] inherit
  inactive          '/dev/data/snap_vm-102-disk-1_ct_postgres3_debian11_05022023' [64.00 GiB] inherit
  inactive          '/dev/data/vm-104-disk-0' [8.00 GiB] inherit
  inactive          '/dev/data/vm-104-disk-1' [512.00 GiB] inherit
  inactive          '/dev/data/vm-100-state-vm_ad_server2022_demote_14012023' [<8.49 GiB] inherit
  inactive          '/dev/data/snap_vm-100-disk-0_vm_ad_server2022_demote_14012023' [64.00 GiB] inherit
  inactive          '/dev/data/snap_vm-100-disk-1_vm_ad_server2022_demote_14012023' [4.00 MiB] inherit
  inactive          '/dev/data/snap_vm-100-disk-2_vm_ad_server2022_demote_14012023' [4.00 MiB] inherit

Code:
# lvs -a
  LV                                                  VG      Attr       LSize    Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  data                                                data    twi---tz--   <3.64t
  [data_tdata]                                        data    Twi-------   <3.64t
  [data_tmeta]                                        data    ewi-------  120.00m
  [lvol0_pmspare]                                     data    ewi-------  120.00m
  snap_vm-100-disk-0_vm_ad_server2022_clean_14012023  data    Vri---tz-k   64.00g data vm-100-disk-0
  snap_vm-100-disk-0_vm_ad_server2022_demote_14012023 data    Vri---tz-k   64.00g data vm-100-disk-0
  snap_vm-100-disk-1_vm_ad_server2022_clean_14012023  data    Vri---tz-k    4.00m data vm-100-disk-1
  snap_vm-100-disk-1_vm_ad_server2022_demote_14012023 data    Vri---tz-k    4.00m data vm-100-disk-1
  snap_vm-100-disk-2_vm_ad_server2022_clean_14012023  data    Vri---tz-k    4.00m data vm-100-disk-2
  snap_vm-100-disk-2_vm_ad_server2022_demote_14012023 data    Vri---tz-k    4.00m data vm-100-disk-2
  snap_vm-102-disk-0_ct_postgres3_debian11_05022023   data    Vri---tz-k    8.00g data vm-102-disk-0
  snap_vm-102-disk-1_ct_postgres3_debian11_05022023   data    Vri---tz-k   64.00g data vm-102-disk-1
  vm-100-disk-0                                       data    Vwi---tz--   64.00g data
  vm-100-disk-1                                       data    Vwi---tz--    4.00m data
  vm-100-disk-2                                       data    Vwi---tz--    4.00m data
  vm-100-state-vm_ad_server2022_demote_14012023       data    Vwi---tz--   <8.49g data
  vm-102-disk-0                                       data    Vwi---tz--    8.00g data
  vm-102-disk-1                                       data    Vwi---tz--   64.00g data
  vm-103-disk-0                                       data    Vwi---tz--    8.00g data
  vm-103-disk-1                                       data    Vwi---tz--   64.00g data
  vm-104-disk-0                                       data    Vwi---tz--    8.00g data
  vm-104-disk-1                                       data    Vwi---tz--  512.00g data
 
This seems to be a regression in Proxmox VE 9/Debian Trixie.

On Proxmox VE 8/Bookworm, repairing a fresh thin pool works:
Code:
[I] root@pve8a1 ~# vgcreate newvg /dev/sdg
  Volume group "newvg" successfully created
[I] root@pve8a1 ~# lvcreate -L 1G --type thin-pool --thinpool newvg/thin
  Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
  Logical volume "thin" created.
[I] root@pve8a1 ~# lvchange -an newvg/thin
[I] root@pve8a1 ~# lvconvert --repair newvg/thin
  WARNING: LV newvg/thin_meta0 holds a backup of the unrepaired metadata. Use lvremove when no longer required.

On Proxmox VE 9/Trixie it doesn't:
Code:
[I] root@pve9a1 ~# vgcreate newvg /dev/sdg
  Volume group "newvg" successfully created
[I] root@pve9a1 ~# lvcreate -L 1G --type thin-pool --thinpool newvg/thin
  Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
  Logical volume "thin" created.
[I] root@pve9a1 ~# lvchange -an newvg/thin
[I] root@pve9a1 ~# lvconvert --repair newvg/thin
no compatible roots found
  Repair of thin metadata volume of thin pool newvg/thin failed (status:64). Manual repair required!
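
Since the failure comes from thin_repair itself, one way to narrow this down might be comparing the thin-provisioning-tools versions (that package ships thin_repair and thin_check) on a working Bookworm host and a failing Trixie host, for example with something like:
Code:
dpkg -s thin-provisioning-tools | grep Version
thin_repair --version
thin_check --version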
 
Hi Fiona!

Thank you very much for your attention to this.

Regarding that regression on Proxmox 9: is it going to be fixed? Are you still investigating? Do you have any suggestions, or is there anything I could do to fix it?

I have tried several things without success. Maybe I should just wait for a fix and hope it will be ready soon.

Anyway, I will keep trying to solve this here. If anyone manages to solve it, I would appreciate any piece of advice.

Have a good one and thank you a lot!

P.S.: Not sure if this is useful, but timewise: this is a cluster of two nodes with the same configuration and the same VG name, pretty much identical apart from the hardware (one server is newer). One server was upgraded on Sunday 10/08/2025; it worked well and is actually running. The second one was upgraded on Monday 11/08/2025, and here we are. Maybe some package changed in between.
 
Regarding that regression on Proxmox 9: is it going to be fixed? Are you still investigating? Do you have any suggestions, or is there anything I could do to fix it?
I'm still investigating, but I'm not sure I'll have time to look into it in detail until my vacation next week. The post-release time is very busy unfortunately.

There has to be a missing puzzle piece somewhere, because other users can run the command just fine apparently.
 
I'm still investigating, but I'm not sure I'll have time to look into it in detail until my vacation next week. The post-release time is very busy unfortunately.

There has to be a missing puzzle piece somewhere, because other users can run the command just fine apparently.
It's OK, thank you for the update.

Yes, you are right. For me it worked on one server and didn't on the other. Do you have any quick-and-dirty suggestion? Maybe downgrading lvm2 or device-mapper, any hint or clue until it is fixed? At the moment I don't know what else to do.

Thank you again!

P.S.: Just in case I don't hear from you: enjoy your holidays and have a really fun and good time!
 
Hi,

Today there were some package updates from the Proxmox repo related to LVM and device-mapper:

Code:
dmeventd
dmsetup
libdevmapper-event1.02.1
libdevmapper1.02.1
liblvm2cmd2.03
lvm2

Despite that, the updates didn't solve the repair issue with:
Code:
lvconvert -v --repair data/data

The main thin pool activation problem was indeed solved after a reboot. My only concern now is that, if something happens, I won't be able to repair the thin pool's metadata. However, the system is up and running as expected, and I'm sure you will fix it at some point.
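
A quick way to double-check that the pool really came up after the reboot (VG and pool are both named data here, as above; just a sketch, not copied from my terminal):
Code:
lvs -o lv_name,lv_attr,data_percent data   # the pool LV should show an 'a' (active) in the attr field
dmsetup ls | grep ^data-                   # the data-data_tmeta / data-data_tdata / data-data-tpool mappings should be listed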

I hope it helps.

Thank you all very much!
 
Today there were some package updates from the Proxmox repo related to LVM and device-mapper:
[...]
Despite that, the updates didn't solve the repair issue with:
Code:
lvconvert -v --repair data/data
Yes, those packages only fix the defaults required for autoactivation. The lvconvert regression with "no compatible roots found" is different. You could try booting into a Proxmox VE 8.4 (or another non-Debian-Trixie-based distro) live CD and repairing the pool from there.
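
Roughly, and assuming the live system detects the disks and the VG/pool are named data/data as in this thread, the repair from the live environment would look something like this (untested sketch):
Code:
vgscan                        # let LVM discover the volume group on the attached disks
vgchange -an data             # make sure nothing in the VG is active before repairing
lvconvert --repair data/data  # uses the thin_repair shipped with the older, Bookworm-based live system
lvchange -ay data/data        # test activation; this runs thin_check against the repaired metadata
lvchange -an data/data        # deactivate again before rebooting back into Proxmox VE 9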
 
This seems to be a regression in Proxmox VE 9/Debian Trixie.

I've hit this one myself today, after setting up a fresh thin pool on 9.

Code:
root@monolith:~# lvchange -ay -v VMs/thin-pool
  Activating logical volume VMs/thin-pool.
  activation/volume_list configuration setting not defined: Checking only host tags for VMs/thin-pool.
  Creating VMs-thin--pool_tmeta
  Loading table for VMs-thin--pool_tmeta (252:3).
  Resuming VMs-thin--pool_tmeta (252:3).
  Creating VMs-thin--pool_tdata
  Loading table for VMs-thin--pool_tdata (252:4).
  Resuming VMs-thin--pool_tdata (252:4).
  Executing: /usr/sbin/thin_check -q --clear-needs-check-flag /dev/mapper/VMs-thin--pool_tmeta
  /usr/sbin/thin_check failed: 64
  Check of pool VMs/thin-pool failed (status:64). Manual repair required!
  Removing VMs-thin--pool_tmeta (252:3)
  Removing VMs-thin--pool_tdata (252:4)

Code:
root@monolith:~# lvconvert -v --repair VMs/thin-pool
  activation/volume_list configuration setting not defined: Checking only host tags for VMs/lvol0_pmspare.
  Creating VMs-lvol0_pmspare
  Loading table for VMs-lvol0_pmspare (252:3).
  Resuming VMs-lvol0_pmspare (252:3).
  activation/volume_list configuration setting not defined: Checking only host tags for VMs/thin-pool_tmeta.
  Creating VMs-thin--pool_tmeta
  Loading table for VMs-thin--pool_tmeta (252:4).
  Resuming VMs-thin--pool_tmeta (252:4).
  Executing: /usr/sbin/thin_repair -i /dev/VMs/thin-pool_tmeta -o /dev/VMs/lvol0_pmspare
no compatible roots found
  /usr/sbin/thin_repair failed: 64
  Repair of thin metadata volume of thin pool VMs/thin-pool failed (status:64). Manual repair required!
  Removing VMs-thin--pool_tmeta (252:4)
  Removing VMs-lvol0_pmspare (252:3)
I can't seem to use any of the manual metadata repair tools on it, though, as the device nodes don't exist anymore after lvchange exits.
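
What I'm considering trying next, based on the metadata swap procedure described in lvmthin(7) (untested on my side; the meta_dump name and the 1G size are just placeholders), is swapping the damaged metadata out of the pool so it becomes a regular, activatable LV that the thin_* tools can actually read:
Code:
lvcreate -an -Zn -L 1G -n meta_dump VMs                          # temporary LV, at least as large as thin-pool_tmeta
lvconvert --thinpool VMs/thin-pool --poolmetadata VMs/meta_dump  # swap: the damaged metadata becomes readable as VMs/meta_dump
lvchange -ay VMs/meta_dump                                       # now there is a device node for the old metadata
thin_dump /dev/VMs/meta_dump > /root/thin-pool-metadata.xml      # back it up / inspect it as XML
# then run a working thin_repair against it (e.g. from a PVE 8 live CD) and swap the result back in the same way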
 