activating pve/data failed

vuss
New Member · May 3, 2025
After a power outage, Proxmox is failing to start the one VM I have on it. The error received is
TASK ERROR: activating LV 'pve/data' failed: Check of pool pve/data failed (status:1). Manual repair required!

It's not the first time I've gotten this error; normally I can resolve it by running this in the Proxmox shell:
lvconvert --repair pve/data

But now it returns this error, and I still can't start the VM:
Volume group "pve" has insufficient free space (1965 extents): 2132 required.
I'm not familiar enough with Proxmox to understand exactly what free space it's complaining about.
Proxmox is installed on a 1TB SSD, with 100GB allocated to Proxmox itself, and the remainder used for data.
The VM is allocated 300GB.
Two 4TB SATA disks are passed through to the VM and are not used by Proxmox.
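The numbers in the error can be translated into bytes. LVM allocates volume group space in fixed-size extents, and `lvconvert --repair` writes the repaired thin-pool metadata into a *new* LV, so it needs unallocated extents roughly equal to the metadata LV's size. Assuming the LVM default extent size of 4 MiB (an assumption; `vgdisplay pve` would confirm it for this VG), the arithmetic lines up with the outputs below:

```shell
# Assuming the LVM default extent size of 4 MiB (vgdisplay pve would confirm):
echo "$(( 2132 * 4 )) MiB required"   # 2132 extents -> 8528 MiB, ~8.33 GiB
echo "$(( 1965 * 4 )) MiB free"       # 1965 extents -> 7860 MiB, ~7.68 GiB
```

8528 MiB matches the <8.33g size of the data_meta0/data_meta1 LVs, and 7860 MiB matches the <7.68g PFree shown by pvs, so the repair is about half a gigabyte short.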

Some info below, let me know if anything else is required.
Code:
df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   32G     0   32G   0% /dev
tmpfs                 6.3G  2.6M  6.3G   1% /run
/dev/mapper/pve-root   94G  5.7G   84G   7% /
tmpfs                  32G   46M   32G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/fuse             128M   20K  128M   1% /etc/pve
tmpfs                 6.3G     0  6.3G   0% /run/user/0

Code:
lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0   3.6T  0 disk
├─sda1               8:1    0   3.6T  0 part
└─sda9               8:9    0     8M  0 part
sdb                  8:16   0   3.6T  0 disk
├─sdb1               8:17   0   3.6T  0 part
└─sdb9               8:25   0     8M  0 part
nvme0n1            259:0    0 953.9G  0 disk
├─nvme0n1p1        259:1    0  1007K  0 part
├─nvme0n1p2        259:2    0     1G  0 part
└─nvme0n1p3        259:3    0 952.9G  0 part
  ├─pve-swap       252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       252:1    0    96G  0 lvm  /
  ├─pve-data_meta0 252:2    0   8.3G  0 lvm
  └─pve-data_meta1 252:3    0   8.3G  0 lvm

Code:
pvs
  PV             VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1p3 pve lvm2 a--  <952.87g <7.68g

Code:
vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   6   0 wz--n- <952.87g <7.68g

Code:
lvs
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi---tz-- <816.21g                                                 
  data_meta0    pve -wi-a-----   <8.33g                                                 
  data_meta1    pve -wi-a-----   <8.33g                                                 
  root          pve -wi-ao----   96.00g                                                 
  swap          pve -wi-ao----    8.00g                                                 
  vm-101-disk-0 pve Vwi---tz--  300.00g data

Code:
du -Shx / | sort -rh | head -15
2.9G    /var/lib/vz/template/iso
262M    /usr/bin
215M    /usr/share/kvm
184M    /usr/lib/x86_64-linux-gnu
168M    /var/log/journal/3cde4dab4dab4453bdbf6ada13e94c8e
85M     /var/lib/apt/lists
80M     /boot
78M     /var/cache/apt
59M     /usr/lib/firmware/amdgpu
56M     /usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore
55M     /usr/lib/ceph/denc
50M     /var/cache/proxmox-backup
43M     /usr/lib
38M     /usr/sbin
33M     /usr/lib/firmware
 
Been researching the issue for a couple of days now without much progress. It seems the repair is looking for free space in the pve volume group but can't find the required 2132 extents, because the group has been almost entirely allocated to logical volumes, even though the volumes themselves are well under-used.
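One thing that may be worth checking before shrinking anything or buying hardware: the `data_meta0` and `data_meta1` LVs in the `lvs` output look like the metadata backups that earlier `lvconvert --repair` runs leave behind (a successful repair renames the old metadata LV to `<pool>_metaN` and keeps it). If that's what they are, and the pool worked fine after those repairs, each one is ~8.33 GiB of reclaimable space, which alone would cover the 2132 extents. That is an assumption to verify, not a certainty; a hedged sketch:

```shell
# Read-only first: confirm free extent count and extent size for the VG
vgs -o vg_name,vg_free_count,vg_extent_size pve

# data_meta0/data_meta1 are typically backups left by earlier
# "lvconvert --repair" runs. Only if you're confident the pool has been
# healthy since those repairs, removing one frees ~8.33 GiB:
#lvremove pve/data_meta0    # commented out deliberately: destructive, verify first
```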

From what I can gather, that would mean I need to free up space in the pve volume group, which I can achieve either by shrinking an existing logical volume, or adding more storage to the volume group.

Shrinking seems viable, since both pve/data and pve/root have plenty of room. But it would probably be safer to shrink pve/root rather than pve/data, since the "broken" state of the latter could lead to data loss if it's shrunk.
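If shrinking pve/root turns out to be the route, note that the root filesystem (ext4 here, per the pve-root mount) can't be shrunk while mounted, so this would have to be done from a live/rescue environment. A rough sketch, assuming ext4 and that a target of 80G leaves enough headroom (both assumptions to verify first):

```shell
# From a rescue environment, with /dev/pve/root unmounted:
e2fsck -f /dev/pve/root        # filesystem must be checked clean before resizing
resize2fs /dev/pve/root 80G    # shrink the filesystem FIRST...
lvreduce -L 80G pve/root       # ...then the LV, to the same size or larger
```

Shrinking the LV below the filesystem size destroys data; `lvreduce --resizefs -L 80G pve/root` performs both steps together and is harder to get wrong.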

The alternative is to buy another SSD and add it to the pve volume group. I would have thought this would be safer than shrinking, but then I found this post on the forum where a user tried that and, after a successful repair, got another error, which it seems was never resolved.
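For completeness, the add-a-disk route would look roughly like this. The device name /dev/sdX is a placeholder for the new SSD, which is assumed to be blank:

```shell
pvcreate /dev/sdX              # hypothetical device: initialize the new SSD as a PV
vgextend pve /dev/sdX          # add it to the pve volume group
vgs pve                        # VFree should now exceed the 2132 required extents
lvconvert --repair pve/data    # retry the repair with enough free space
```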

Not sure how to proceed, which of these options is safer, or whether I'm even on the right track. If anyone has any insights, that would be much appreciated.