Thin pool pve-data-tpool (253:4) transaction_id is 32, while expected 29.

flaviu88

Hello,

I hope you are all well!

I have an issue with my Proxmox server and I need your help.
When the server boots, it shows this message: Thin pool pve-data-tpool (253:4) transaction_id is 32, while expected 29.

Checks:
root@srv01:/# lvscan
ACTIVE '/dev/pve/swap' [8.00 GiB] inherit
ACTIVE '/dev/pve/root' [96.00 GiB] inherit
inactive '/dev/pve/data' [<794.29 GiB] inherit
inactive '/dev/pve/vm-101-state-fst1' [<32.49 GiB] inherit
inactive '/dev/pve/snap_vm-101-disk-0_fst1' [250.00 GiB] inherit
inactive '/dev/pve/vm-101-state-dnp2' [<32.49 GiB] inherit
inactive '/dev/pve/snap_vm-101-disk-0_dnp2' [250.00 GiB] inherit
inactive '/dev/pve/snap_vm-101-disk-1_dnp2' [200.00 GiB] inherit
inactive '/dev/pve/vm-100-disk-0' [500.00 GiB] inherit
inactive '/dev/pve/snap_vm-100-disk-0_SNP_08112020' [500.00 GiB] inherit
inactive '/dev/pve/vm-101-state-SNP_08112020' [<32.49 GiB] inherit
inactive '/dev/pve/snap_vm-101-disk-0_SNP_08112020' [250.00 GiB] inherit
inactive '/dev/pve/snap_vm-101-disk-1_SNP_08112020' [200.00 GiB] inherit
inactive '/dev/pve/vm-101-state-SNP_15112020' [<32.49 GiB] inherit
inactive '/dev/pve/snap_vm-101-disk-0_SNP_15112020' [250.00 GiB] inherit
inactive '/dev/pve/snap_vm-101-disk-1_SNP_15112020' [200.00 GiB] inherit
inactive '/dev/pve/vm-101-state-SNP_05122020' [<32.49 GiB] inherit
inactive '/dev/pve/snap_vm-101-disk-0_SNP_05122020' [250.00 GiB] inherit
inactive '/dev/pve/snap_vm-101-disk-1_SNP_05122020' [200.00 GiB] inherit
inactive '/dev/pve/vm-101-disk-0' [250.00 GiB] inherit
inactive '/dev/pve/vm-101-disk-1' [200.00 GiB] inherit
inactive '/dev/pve/vm-101-state-SNP_13012021' [<32.49 GiB] inherit
inactive '/dev/pve/snap_vm-101-disk-0_SNP_13012021' [250.00 GiB] inherit
inactive '/dev/pve/snap_vm-101-disk-1_SNP_13012021' [200.00 GiB] inherit

root@srv01:/# vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 78
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 25
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <930.50 GiB
PE Size 4.00 MiB
Total PE 238207
Alloc PE / Size 236186 / 922.60 GiB
Free PE / Size 2021 / 7.89 GiB
VG UUID y9AtDL-2m1z-pTlz-c4wm-VEYh-PA2E-MkjTpV


Many thanks
 
additional info:

root@srv01:/# pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 pve lvm2 a-- <930.50g 7.89g
root@srv01:/# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 25 0 wz--n- <930.50g 7.89g
root@srv01:/# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi---tz-- <794.29g
data_meta0 pve -wi-a----- <8.11g
root pve -wi-ao---- 96.00g
snap_vm-100-disk-0_SNP_08112020 pve Vri---tz-k 500.00g data vm-100-disk-0
snap_vm-101-disk-0_SNP_05122020 pve Vri---tz-k 250.00g data
snap_vm-101-disk-0_SNP_08112020 pve Vri---tz-k 250.00g data
snap_vm-101-disk-0_SNP_13012021 pve Vri---tz-k 250.00g data vm-101-disk-0
snap_vm-101-disk-0_SNP_15112020 pve Vri---tz-k 250.00g data
snap_vm-101-disk-0_dnp2 pve Vri---tz-k 250.00g data
snap_vm-101-disk-0_fst1 pve Vri---tz-k 250.00g data
snap_vm-101-disk-1_SNP_05122020 pve Vri---tz-k 200.00g data
snap_vm-101-disk-1_SNP_08112020 pve Vri---tz-k 200.00g data
snap_vm-101-disk-1_SNP_13012021 pve Vri---tz-k 200.00g data vm-101-disk-1
snap_vm-101-disk-1_SNP_15112020 pve Vri---tz-k 200.00g data
snap_vm-101-disk-1_dnp2 pve Vri---tz-k 200.00g data
swap pve -wi-ao---- 8.00g
vm-100-disk-0 pve Vwi---tz-- 500.00g data
vm-101-disk-0 pve Vwi---tz-- 250.00g data snap_vm-101-disk-0_SNP_08112020
vm-101-disk-1 pve Vwi---tz-- 200.00g data snap_vm-101-disk-1_SNP_08112020
vm-101-state-SNP_05122020 pve Vwi---tz-- <32.49g data
vm-101-state-SNP_08112020 pve Vwi---tz-- <32.49g data
vm-101-state-SNP_13012021 pve Vwi---tz-- <32.49g data
vm-101-state-SNP_15112020 pve Vwi---tz-- <32.49g data
vm-101-state-dnp2 pve Vwi---tz-- <32.49g data
vm-101-state-fst1 pve Vwi---tz-- <32.49g data
root@srv01:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
└─sda3 8:3 0 930.5G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 96G 0 lvm /
└─pve-data_meta0 253:2 0 8.1G 0 lvm
root@srv01:/# mount | grep /dev/pve/data
root@srv01:/#
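
For anyone comparing the two numbers in this error, a rough way to check them side by side (a sketch, assuming your lvm2 version reports the transaction_id field, that thin-provisioning-tools is installed with --skip-mappings support, and that data_meta0 really holds a copy of the pool metadata left behind by an earlier repair):

Code:
# transaction id recorded in the LVM/VG metadata (the "expected" value)
lvs -o lv_name,transaction_id pve/data
# transaction id stored inside the thin metadata itself; look for
# transaction="..." in the superblock line of the dump
thin_dump --skip-mappings /dev/pve/data_meta0 | head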
 
Hi Daniel,

Yes, I tried the repair and this is the message; then I tried to increase the space:

Bash:
root@srv01:/# lvconvert --repair /dev/pve/data
  Transaction id 29 from pool "pve/data" does not match repaired transaction id 32 from /dev/mapper/pve-lvol1_pmspare.
  Volume group "pve" has insufficient free space (2021 extents): 2075 required.
  WARNING: LV pve/data_meta1 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
root@srv01:/# lvresize --poolmetadatasize +3G pve/data
  Thin pool pve-data-tpool (253:5) transaction_id is 32, while expected 29.
  Failed to activate pve/data.
root@srv01:/# lvresize --poolmetadatasize +3G /dev/pve/data
  Thin pool pve-data-tpool (253:5) transaction_id is 32, while expected 29.
  Failed to activate pve/data.
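
To put numbers on the "insufficient free space" message: with a 4 MiB physical extent size, the 2075 extents the repair asks for are 2075 × 4 MiB ≈ 8.11 GiB, i.e. room for a second metadata LV the same size as data_meta0, while only 2021 × 4 MiB ≈ 7.89 GiB is free in the VG. So roughly 220 MiB of additional free space in the VG should be enough for the repair to get past this error.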
 
It's your volume group that doesn't have enough free space. You can either add another disk to the VG, or reduce the swap LV, for example.
Code:
swapoff -a
lvreduce -L4G pve/swap
mkswap /dev/pve/swap
# Check in /etc/fstab if the swap is referenced by UUID or by path, update UUID if needed
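
For the other option (adding a disk to the VG), a minimal sketch, assuming the new disk shows up as /dev/sdb (the device name here is just a placeholder for this example):
Code:
pvcreate /dev/sdb        # initialise the new disk as an LVM physical volume
vgextend pve /dev/sdb    # add it to the pve VG so the repair has extents to allocate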
 
This is the /etc/fstab

Code:
root@srv01:/# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=A173-CDEF /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
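
Since the swap here is referenced by path (/dev/pve/swap) rather than by UUID, fstab should not need any change after shrinking the LV and re-running mkswap.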

I managed to create the data_meta volume:

Code:
root@srv01:/# ls /dev/pve/
data_meta0  root        swap
 
additional info
Code:
root@srv01:/# lvconvert --repair /dev/pve/data
  WARNING: Sum of all thin volume sizes (<4.05 TiB) exceeds the size of thin pools and the size of whole volume group (<930.50 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Transaction id 29 from pool "pve/data" does not match repaired transaction id 32 from /dev/mapper/pve-lvol0_pmspare.
  Volume group "pve" has insufficient free space (2021 extents): 2075 required.
  WARNING: LV pve/data_meta2 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
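
As an aside on the thin_pool_autoextend_threshold warning in that output (unrelated to the transaction_id problem itself): it refers to the activation section of /etc/lvm/lvm.conf. A minimal sketch with example values, not Proxmox defaults:

Code:
# /etc/lvm/lvm.conf, inside the activation { ... } section
thin_pool_autoextend_threshold = 80   # start autoextending once a thin pool is 80% full
thin_pool_autoextend_percent = 20     # grow the pool by 20% of its size each time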
 
Done, but it is still showing this message:

Transaction id 29 from pool "pve/data" does not match repaired transaction id 32 from /dev/mapper/pve-lvol0_pmspare.
Volume group "pve" has insufficient free space (2021 extents): 2075 required.
 
Yes, I know; I just deleted the previous data_meta and recreated it, and there was enough space for it.
 
In my case the repair didn't help. Also the metadata didn't seem to be corrupted at all.

Many hours later I found this:
https://blog.monotok.org/lvm-transaction-id-mismatch-and-metadata-resize-error/

Changing the transaction_id for pve/data fixed the issue for me.
WARNING: This is a pretty dangerous solution; please make sure you have a backup standing by, just in case.
Hi mbosa,

I have this transaction id mismatch problem in Proxmox 8 after a restore from backup, with plenty of free space. The link you mention above does not work anymore, and I could not find anywhere on the net how to change the transaction id. Do you remember what the solution was?

Thank you!
 
As a matter of fact I do; I had to apply this solution once more a while back.
These are the steps:

1. Backup the VG data:
Code:
vgcfgbackup pve -f lvbackup

2. Edit the ID to match what it's expecting:
Code:
vim lvbackup
(screenshot attachment: the transaction_id line in lvbackup)
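
A rough sketch of that edit, based on the error messages earlier in this thread (the VG metadata said transaction_id 29 while the thin pool metadata reported 32): the backup written by vgcfgbackup contains the thin pool's segment definition with a transaction_id line, and the point is to make that value agree with what the pool metadata actually reports:

Code:
grep -n transaction_id lvbackup
# inside the "data" LV's segment block you should find something like:
#     transaction_id = 29
# change it to the value the pool metadata reports (32 in this thread),
# save, and continue with the vgcfgrestore --force in step 3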

3. restore the backup you made:
Code:
vgcfgrestore pve -f lvbackup --force
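
(The --force is needed because vgcfgrestore normally refuses to restore metadata for a volume group that contains thin pool volumes.)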

You're not done here yet, though.
It still needs a repair, but that depends on the initial problem.

I hope this will at least get you through this part, and I wish you luck on your journey!

These are the commands I used afterwards. NOTE: I'm not sure which of these are necessary, and they might not help in your case.
Code:
vgcfgbackup pve -f lvbackup
vim lvbackup
vgcfgrestore pve -f lvbackup
lvd
lvs
vgdisplay
vgcfgrestore pve -f lvbackup
lvs
lvchange -an /dev/mapper/pve-data_tdata
vgcfgrestore pve -f lvbackup
vgcfgrestore pve -f lvbackup --force
lvchange -ay pve/data
lvs
lvremove /dev/pve/data_meta0
lvchange -ay pve/data
lvs
lvconvert --repair /dev/pve/data
lvchange -ay pve/data
less /var/log/syslog
service pvestatd stop
service pvedaemon stop
lvchange -an /dev/mapper/pve-data_tdata
vgmknodes -vvv pve
vim /etc/lvm/lvm.conf
update-initramfs -u
/sbin/lvm pvscan --cache --activate ay 8:3

  Another host:
vgcfgrestore pve -f lvbackup2 --force
lvchange -an /dev/mapper/pve-data_tdata
lvchange -an /dev/mapper/pve-data_tmeta
swapoff -a
lvchange -an /dev/mapper/pve-swap
lvchange -an /dev/mapper/pve-data_meta1
lvchange -an /dev/mapper/pve-data_meta0
vgcfgrestore pve -f lvbackup2 --force
/sbin/lvm pvscan --cache --activate ay 8:3
 
Thank you! I had this same issue when backing up my node with Clonezilla and restoring to a larger drive. The error did not seem to cause any problems, but it's gone now after following steps 1-3. Thanks again!
 
