LVM nearly full, can't free space

dmbaty

Member
Jan 4, 2018
My LVM seems to be filling up and not freeing space, even when some of the containers/VMs stored there are removed or trimmed (fstrim -v / in the containers/VMs). I have ensured that discard=on is set for the VMs using this storage pool. This appears to be preventing me from running backups. What am I missing to actually free up space here? Whatever I do, it shows as 15.80g free...
Code:
root@pve:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 199.59g             42.50  20.13
  root          pve -wi-ao----  74.25g
  swap          pve -wi-ao----   8.00g
  vm-101-disk-1 pve Vwi-aotz--  32.00g data        44.46
  vm-106-disk-1 pve Vwi-aotz--  32.00g data        15.95
  vm-107-disk-1 pve Vwi-aotz--  32.00g data        22.66
  vm-110-disk-1 pve Vwi-a-tz--  16.00g data        20.71
  vm-111-disk-1 pve Vwi-a-tz--  32.00g data        16.15
  vm-112-disk-2 pve Vwi-aotz--  32.00g data        21.52
  vm-123-disk-1 pve Vwi-aotz--  64.00g data        19.98
  vm-150-disk-1 pve Vwi-a-tz--  37.00g data        81.35
root@pve:~# pvs
  PV         VG  Fmt  Attr PSize   PFree
  /dev/sdb3  pve lvm2 a--  297.84g 15.80g
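For reference, the trim setup I'm using looks roughly like this (VM 101 is just one example, the disk names differ per guest):
Code:
# on the host: make sure the virtual disk passes discards through to the thin pool
qm set 101 --scsi0 local-lvm:vm-101-disk-1,discard=on
# inside the guest: return the freed blocks
fstrim -v /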
Thanks for the help, I'm pulling my hair out here and I'm hoping I'm missing something simple.
 
Trimming VM/CT disks only frees up space in the thin pool (pve-data), not in the volume group (pve) - AFAIK shrinking a thinpool does not work.
If you want to back up VMs, you could create a thin volume in the pool, create a filesystem on it, and add it as directory storage in PVE.
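A rough sketch of that approach (the size, volume name and mount point below are only examples):
Code:
# create a thin volume inside the existing pool and put a filesystem on it
lvcreate -V 50G -n backup pve/data
mkfs.ext4 /dev/pve/backup
mkdir -p /mnt/thin-backup
mount /dev/pve/backup /mnt/thin-backup   # add an /etc/fstab entry to make it permanent
# register it in PVE as directory storage that holds backups
pvesm add dir thin-backup --path /mnt/thin-backup --content backup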
 
Thanks. To clarify - I'm backing up to a different drive/storage; the only issue is that I can't complete the backup of VMs/containers on the local-lvm. Looking around at that error, it seems to be related to running out of space in the LVM. Does that make sense?
 
please post the error message - but at least for container backups you need some space in the temporary directory (see `man vzdump`; the whole CT gets backed up there and then sent to the remote storage) - so I guess this is where your problem originates
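If the temporary space is the problem, vzdump can be pointed at a directory with more room, e.g. (paths are just examples):
Code:
# one-off on the command line
vzdump 102 --storage backup --tmpdir /mnt/backup/tmp
# or permanently in /etc/vzdump.conf
#   tmpdir: /mnt/backup/tmp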
 
After digging in more, I was able to back up other VMs/containers in the LVM. Not sure yet if this is related to the size of the disk in this VM, or if something else is going on. I'll investigate later today.

Code:
INFO: starting new backup job: vzdump 102 --storage backup --compress lzo --mode snapshot --node pve --remove 0
INFO: Starting Backup of VM 102 (qemu)
INFO: status = stopped
INFO: update VM 102: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: piaf
INFO: include disk 'scsi0' 'ssd:102/vm-102-disk-1.qcow2' 32G
INFO: creating archive '/mnt/backup/dump/vzdump-qemu-102-2018_12_20-09_45_25.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task 'ec95bc4f-95f1-4391-ae0e-8058170743bd'
INFO: status: 0% (155058176/34359738368), sparse 0% (136982528), duration 3, read/write 51/6 MB/s
INFO: status: 1% (368443392/34359738368), sparse 0% (174387200), duration 12, read/write 23/19 MB/s
INFO: status: 2% (699400192/34359738368), sparse 0% (230244352), duration 15, read/write 110/91 MB/s
INFO: status: 3% (1117519872/34359738368), sparse 0% (258949120), duration 19, read/write 104/97 MB/s
INFO: status: 4% (1444413440/34359738368), sparse 0% (302407680), duration 22, read/write 108/94 MB/s
INFO: status: 31% (10928128000/34359738368), sparse 28% (9623470080), duration 25, read/write 3161/54 MB/s
INFO: status: 32% (11227955200/34359738368), sparse 28% (9626976256), duration 28, read/write 99/98 MB/s
INFO: status: 33% (11391860736/34359738368), sparse 28% (9633206272), duration 31, read/write 54/52 MB/s
INFO: status: 34% (11749163008/34359738368), sparse 28% (9679650816), duration 34, read/write 119/103 MB/s
INFO: status: 42% (14673313792/34359738368), sparse 35% (12333211648), duration 37, read/write 974/90 MB/s
INFO: status: 56% (19498074112/34359738368), sparse 49% (16922177536), duration 40, read/write 1608/78 MB/s
ERROR: job failed with err -5 - Input/output error
INFO: aborting backup job
INFO: stopping kvm after backup task
ERROR: Backup of VM 102 failed - job failed with err -5 - Input/output error
INFO: Backup job finished with errors
TASK ERROR: job errors
 
hmm - the Input/output error could very well be related to a broken disk.
* what is mounted on /mnt/backup/dump? (the `mount` command should provide the information)
* do you see anything relevant in `dmesg`?
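Something along these lines should answer both:
Code:
mount | grep /mnt/backup              # which device actually backs the backup target
dmesg -T | grep -iE 'error|ata|i/o'   # recent kernel messages hinting at disk problems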
 
I have another drive with a single ext4 partition mounted on /mnt/backup.

dmesg shows a checksum error... so maybe I missed something simple by not checking that more closely. I'll check SMART and dig into that more.
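For completeness, the SMART check I intend to run (the drive letter is a placeholder for the backup disk):
Code:
smartctl -a /dev/sdX   # needs the smartmontools package; watch for reallocated/pending sectors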
 
