[SOLVED] lvconvert --repair pve/data and insufficient free space

gno · New Member · Sep 4, 2023
I get the following error in /var/log/syslog


Code:
Sep  4 08:44:30 pve pvedaemon[264889]: activating LV 'pve/data' failed:   Check of pool pve/data failed (status:1). Manual repair required!

When I try to repair it, I get:

Code:
lvconvert --repair pve/data
  Volume group "pve" has insufficient free space (144 extents): 4048 required.
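As a side note, the two numbers in that message can be decoded using the VG's extent size (4 MiB by default on a Proxmox install; verify with vgs -o +vg_extent_size). A minimal sketch of the arithmetic:

```shell
# Decode the lvconvert error, assuming the default 4 MiB extent size
# (verify with: vgs -o +vg_extent_size pve).
req_mib=$((4048 * 4))   # extents required for the repair
free_mib=$((144 * 4))   # extents actually free in the VG
echo "required: ${req_mib} MiB, free: ${free_mib} MiB"
```

16192 MiB is almost exactly the 15.81 GiB size of the existing metadata LV: lvconvert --repair wants room for a second, full-size copy of the pool metadata, and the VG only has 576 MiB left.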

Here is some system information:

Code:
lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0   3.6T  0 disk
└─sda1               8:1    0   3.6T  0 part /mnt/backup
sdb                  8:16   0   3.6T  0 disk
├─sdb1               8:17   0  1007K  0 part
├─sdb2               8:18   0   512M  0 part /boot/efi
└─sdb3               8:19   0   3.6T  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0    96G  0 lvm  /
  └─pve-data_meta0 253:2    0  15.8G  0 lvm 
sdc                  8:32   0 447.1G  0 disk
├─sdc1               8:33   0 447.1G  0 part
└─sdc9               8:41   0     8M  0 part
sdd                  8:48   0 447.1G  0 disk
├─sdd1               8:49   0 447.1G  0 part
└─sdd9               8:57   0     8M  0 part


Code:
pvs
  PV         VG  Fmt  Attr PSize  PFree 
  /dev/sdb3  pve lvm2 a--  <3.64t 576.00m


Code:
vgs
  VG  #PV #LV #SN Attr   VSize  VFree 
  pve   1  18   0 wz--n- <3.64t 576.00m


Code:
lvs
  LV                                          VG  Attr       LSize    Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  data                                        pve twi---tz--   <3.49t                                                           
  data_meta0                                  pve -wi-a-----   15.81g                                                           
  data_meta1                                  pve -wi-------   15.81g                                                           
  root                                        pve -wi-ao----   96.00g                                                           
  snap_vm-101-disk-0_vor_kea_installation     pve Vri---tz-k    8.00g data vm-101-disk-0                                       
  snap_vm-106-disk-0_update                   pve Vri---tz-k    8.00g data vm-106-disk-0                                       
  snap_vm-118-disk-0_Installation_Keycloak_17 pve Vri---tz-k    8.00g data vm-118-disk-0                                       
  swap                                        pve -wi-ao----    8.00g                                                           
  vm-100-disk-0                               pve Vwi---tz--    8.00g data                                                     
  vm-101-disk-0                               pve Vwi---tz--    8.00g data                                                     
  vm-102-disk-0                               pve Vwi---tz--    8.00g data                                                     
  vm-103-disk-0                               pve Vwi---tz-- 1000.00g data                                                     
  vm-104-disk-0                               pve Vwi---tz--    8.00g data                                                     
  vm-105-disk-0                               pve Vwi---tz--  500.00g data                                                     
  vm-106-disk-0                               pve Vwi---tz--    8.00g data                                                     
  vm-118-disk-0                               pve Vwi---tz--    8.00g data                                                     
  vm-123-disk-0                               pve Vwi---tz--   50.00g data                                                     
  vm-999-disk-0                               pve Vwi---tz--    4.00g data

data_meta0 and data_meta1 were created by 'lvconvert --repair'
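Worth noting: each lvconvert --repair attempt allocates a fresh metadata LV and keeps the previous metadata as a data_metaN backup, so these two leftovers alone pin down roughly 2 × 15.81 GiB of the VG:

```shell
# Rough accounting of the space held by the repair leftovers, using the
# 15.81g (~16192 MiB) size shown by lvs above.
backup_mib=$((2 * 16192))
echo "data_meta0 + data_meta1 hold about ${backup_mib} MiB"   # ~31.6 GiB
```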

I tried to free up space for the repair by deleting vm-123-disk-0 and vm-999-disk-0:

Code:
lvremove -ff /dev/pve/vm-123-disk-0
lvremove -ff /dev/pve/vm-999-disk-0

But that didn't help: they are no longer listed by lvs, yet the free space in the VG did not change.

How can I solve "Volume group "pve" has insufficient free space"?
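One likely reason the lvremove calls did not change VFree: thin volumes are carved out of the pool, so removing one returns space to the pool (its Data% drops) but not to the volume group; only removing a regular LV, such as the data_metaN backups, frees VG extents. Read-only queries along these lines, assuming the lvm2 tools, make the distinction visible:

```shell
# Read-only LVM queries (nothing is modified). Falls back to a notice
# when the lvm2 tools are not installed.
if command -v vgs >/dev/null 2>&1; then
  vgs -o vg_name,vg_free,vg_free_count pve || echo "VG pve not found"
  lvs -a -o lv_name,lv_size,data_percent pve || echo "no LVs listed"
else
  echo "lvm2 tools not installed"
fi
```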
 
Code:
df -h
Filesystem                Size  Used Avail Use% Mounted on
udev                       12G     0   12G   0% /dev
tmpfs                     2.4G  1.3M  2.4G   1% /run
/dev/mapper/pve-root       94G   87G  2.3G  98% /
tmpfs                      12G   58M   12G   1% /dev/shm
tmpfs                     5.0M     0  5.0M   0% /run/lock
/dev/sda1                 3.6T 1003G  2.5T  29% /mnt/backup
/dev/sdb2                 511M  328K  511M   1% /boot/efi
tank01                    345G  128K  345G   1% /tank01
tank01/subvol-108-disk-0  8.0G  3.7G  4.4G  46% /tank01/subvol-108-disk-0
tank01/subvol-109-disk-0   50G  1.9G   49G   4% /tank01/subvol-109-disk-0
tank01/subvol-113-disk-0  8.0G  2.1G  6.0G  26% /tank01/subvol-113-disk-0
tank01/subvol-114-disk-0  8.0G  4.9G  3.2G  61% /tank01/subvol-114-disk-0
tank01/subvol-110-disk-0  8.0G  1.9G  6.2G  23% /tank01/subvol-110-disk-0
tank01/subvol-107-disk-0   50G   20G   31G  40% /tank01/subvol-107-disk-0
tank01/subvol-121-disk-0  8.0G  5.1G  3.0G  64% /tank01/subvol-121-disk-0
tank01/subvol-122-disk-0  8.0G  3.6G  4.5G  45% /tank01/subvol-122-disk-0
tank01/subvol-115-disk-0  8.0G  1.2G  6.9G  15% /tank01/subvol-115-disk-0
tank01/subvol-120-disk-0  8.0G  5.3G  2.8G  67% /tank01/subvol-120-disk-0
tank01/subvol-119-disk-0  8.0G  1.4G  6.7G  17% /tank01/subvol-119-disk-0
tank01/subvol-112-disk-0  8.0G  2.0G  6.1G  25% /tank01/subvol-112-disk-0
tank01/subvol-116-disk-0  8.0G  3.1G  5.0G  39% /tank01/subvol-116-disk-0
tank01/subvol-117-disk-0  8.0G  7.2G  886M  90% /tank01/subvol-117-disk-0
tank01/subvol-111-disk-0  8.0G  4.6G  3.5G  58% /tank01/subvol-111-disk-0
/dev/fuse                 128M   36K  128M   1% /etc/pve
tmpfs                     2.4G     0  2.4G   0% /run/user/0
tank01/subvol-124-disk-0  8.0G  3.5G  4.6G  43% /tank01/subvol-124-disk-0
 
Well, it appears your local root filesystem is 98% full. You can run something like du -Shx / | sort -rh | head -15 to see what uses the most storage and then move data away.
 
I deleted 2 container backups from /var/lib/vz/dump. Now df -h shows 27G available instead of the 2.3G before:
Code:
df -h
...
/dev/mapper/pve-root       94G   63G   27G  70% /
...

But still no luck
Code:
lvconvert --repair pve/data
  Volume group "pve" has insufficient free space (144 extents): 4048 required.

I think that
144 extents × 4 MiB = 576 MiB = VFree of the VG
4048 extents × 4 MiB = 16192 MiB ≈ 15.8 GiB (the size of the metadata LV)

I don't understand why only 576.00 MiB of 3.64 TB are free

I have no idea how to solve this problem
 
I'm not 100% sure, but I think your /dev/mapper/pve-data is full. You can do a similar thing with du -Shx /dev/mapper/pve-data | sort -rh | head -15.
 
/dev/mapper/pve-data does not exist
Code:
ls -la /dev/mapper/
total 0
drwxr-xr-x  2 root root     120 Sep  4 16:20 .
drwxr-xr-x 20 root root    4520 Sep  4 16:20 ..
crw-------  1 root root 10, 236 Sep  3 17:18 control
lrwxrwxrwx  1 root root       7 Sep  3 17:58 pve-data_meta0 -> ../dm-2
lrwxrwxrwx  1 root root       7 Sep  3 17:18 pve-root -> ../dm-1
lrwxrwxrwx  1 root root       7 Sep  3 17:18 pve-swap -> ../dm-0
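That is expected while the pool is inactive: the -tmeta/-tdata device-mapper nodes (and the thin LVs' own nodes) only appear once pve/data activates. A read-only way to see what device-mapper currently knows about, assuming dmsetup is available:

```shell
# List the current device-mapper targets without changing anything.
if command -v dmsetup >/dev/null 2>&1; then
  dmsetup ls || echo "no device-mapper targets"
else
  echo "dmsetup not installed"
fi
```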
 
Ok, I think I understand now. Your volume group is so completely filled that the pve-data volume cannot be repaired and activated. The only way to recover from this situation is to make some space available in the volume group. You can do this by:

  • adding another hard disk to the volume group, or
  • deleting the pve-root partition, after backing up the data (IMPORTANT).
Alternatively, it might be easier to restore from backup.
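For the first option, the usual sequence looks roughly like this (a sketch with a hypothetical spare disk /dev/sdX; the commands are printed rather than executed, since pvcreate and vgextend are destructive on the wrong device):

```shell
# Sketch of extending the VG with a spare disk (hypothetical /dev/sdX --
# substitute the real device). Printed for reference, not executed.
cat <<'EOF'
pvcreate /dev/sdX            # initialize the new disk as a physical volume
vgextend pve /dev/sdX        # add it to the pve volume group
vgs pve                      # VFree should now exceed the ~16 GiB required
lvconvert --repair pve/data  # retry the repair with enough free extents
EOF
```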
 
Thank you very much for your help.

I have backups of all LXC containers.

Could you please explain in a little more detail what I have to do (and how to prevent this error in the future)?
 
I tried this to free up space:

Code:
lvremove /dev/pve/data_meta1
  Logical volume "data_meta1" successfully removed

lvremove /dev/pve/data_meta0
Do you really want to remove active logical volume pve/data_meta0? [y/n]: y
  Logical volume "data_meta0" successfully removed

lvconvert --repair pve/data
  Transaction id 1543 from pool "pve/data" does not match repaired transaction id 1542 from /dev/mapper/pve-lvol0_pmspare.
  WARNING: LV pve/data_meta0 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
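The WARNING in that output also names the cleanup step: once the repaired pool activates and the data looks intact, the backup metadata LV can be removed to reclaim its space (printed as a sketch rather than executed):

```shell
# Post-repair cleanup, per the WARNING above. Printed for reference:
# only remove the backup after the repaired pool has been verified.
cat <<'EOF'
lvs pve                  # confirm the pool and the thin LVs activate cleanly
lvremove pve/data_meta0  # drop the backup of the unrepaired metadata
EOF
```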

It seems the repair was successful, but I get the error Manual repair required! again:
Code:
lvchange -an pve/data_tdata

lvchange -an pve/data_tmeta

lvchange -ay pve/data
cannot perform fix without a full examination
Usage: thin_check [options] {device|file}
Options:
  {-q|--quiet}
  {-h|--help}
  {-V|--version}
  {-m|--metadata-snap}
  {--auto-repair}
  {--override-mapping-root}
  {--clear-needs-check-flag}
  {--ignore-non-fatal-errors}
  {--skip-mappings}
  {--super-block-only}
  Check of pool pve/data failed (status:1). Manual repair required!

And I still have the question: what is using 3.5 TB in the LV data?
All LVs together are less than 2 TB.
 
Now I found the cause of my last problem.

Because I was affected by this bug https://forum.proxmox.com/threads/l...rnel-update-on-pve-7.97406/page-2#post-430860 I had added --skip-mappings to thin_check_options in my /etc/lvm/lvm.conf

This caused lvchange -ay pve/data to exit with cannot perform fix without a full examination

I realized this when I executed lvchange -ay -v pve/data with the -v option:
Code:
lvchange -ay -v pve/data
  Activating logical volume pve/data.
  activation/volume_list configuration setting not defined: Checking only host tags for pve/data.
  Creating pve-data_tmeta
  Loading table for pve-data_tmeta (253:3).
  Resuming pve-data_tmeta (253:3).
  Creating pve-data_tdata
  Loading table for pve-data_tdata (253:4).
  Resuming pve-data_tdata (253:4).
  Executing: /usr/sbin/thin_check -q --clear-needs-check-flag --skip-mappings /dev/mapper/pve-data_tmeta
cannot perform fix without a full examination
...
Check of pool pve/data failed (status:1). Manual repair required!
  Removing pve-data_tdata (253:4)
  Removing pve-data_tmeta (253:3)

After removing the --skip-mappings option from /etc/lvm/lvm.conf, it worked.
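For reference, the two variants of the setting in /etc/lvm/lvm.conf look roughly like this (a sketch; the exact option list may differ per system):

```shell
# The lvm.conf setting in question (printed for reference). --skip-mappings
# speeds up thin_check but rules out a fixing run, which is why activation
# failed with "cannot perform fix without a full examination".
cat <<'EOF'
global {
    # workaround that breaks repair-on-activation:
    thin_check_options = [ "-q", "--clear-needs-check-flag", "--skip-mappings" ]
    # default that allows the full examination:
    thin_check_options = [ "-q", "--clear-needs-check-flag" ]
}
EOF
```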

Code:
lvchange -ay -v pve/data
  Activating logical volume pve/data.
  activation/volume_list configuration setting not defined: Checking only host tags for pve/data.

And then
Code:
lvs
  LV                                          VG  Attr       LSize    Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  data                                        pve twi-aotz--   <3.49t                    21.73  2.67                         
  data_meta0                                  pve -wi-a-----   15.81g                                                        
  root                                        pve -wi-ao----   96.00g                                                        
  snap_vm-101-disk-0_vor_kea_installation     pve Vri---tz-k    8.00g data vm-101-disk-0                                     
  snap_vm-106-disk-0_update                   pve Vri---tz-k    8.00g data vm-106-disk-0                                     
  snap_vm-118-disk-0_Installation_Keycloak_17 pve Vri---tz-k    8.00g data vm-118-disk-0                                     
  swap                                        pve -wi-ao----    8.00g                                                        
  vm-100-disk-0                               pve Vwi---tz--    8.00g data                                                   
  vm-101-disk-0                               pve Vwi---tz--    8.00g data                                                   
  vm-102-disk-0                               pve Vwi---tz--    8.00g data                                                   
  vm-103-disk-0                               pve Vwi-aotz-- 1000.00g data               43.13                               
  vm-104-disk-0                               pve Vwi---tz--    8.00g data                                                   
  vm-105-disk-0                               pve Vwi-aotz--  500.00g data               58.55                               
  vm-106-disk-0                               pve Vwi---tz--    8.00g data                                                   
  vm-118-disk-0                               pve Vwi---tz--    8.00g data

PS: see also this post for the options of thin_check: https://bugzilla.redhat.com/show_bug.cgi?id=2028905#c2
 
