VM running out of space - io error

empty555

New Member
Jun 13, 2023
Hi guys, proxmox beginner here.

I searched through the forum but couldn't solve my problem. My VM ran out of space today; it looks like the "data" thin pool is full.
I can't find how to free up space in it.

I've tried to give you some info with the commands below. Let me know if I can provide anything else useful. Thanks!

Code:
root@pve1:/dev# df -h
Filesystem                      Size  Used Avail Use% Mounted on
udev                             16G     0   16G   0% /dev
tmpfs                           3.2G  1.3M  3.2G   1% /run
/dev/mapper/pve-root             68G   14G   51G  22% /
tmpfs                            16G   46M   16G   1% /dev/shm
tmpfs                           5.0M     0  5.0M   0% /run/lock
/dev/nvme0n1p2                  511M  336K  511M   1% /boot/efi
/dev/fuse                       128M   20K  128M   1% /etc/pve
192.168.0.100:/volume1/proxmox   11T  3.7T  6.9T  35% /mnt/pve/SynoProxmox
tmpfs                           3.2G     0  3.2G   0% /run/user/0
root@pve1:/dev# lvs
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  base-500-disk-0 pve Vri-a-tz-k   8.00g data        29.90
  data            pve twi-aotzD- 141.91g             100.00 4.30
  root            pve -wi-ao---- <69.61g
  swap            pve -wi-ao----   7.56g
  vm-100-disk-0   pve Vwi-aotz--  64.00g data        79.08
  vm-100-disk-1   pve Vwi-aotz--  64.00g data        85.94
  vm-110-disk-0   pve Vwi-aotz--   4.00g data        75.65
  vm-120-disk-0   pve Vwi-a-tz--   8.00g data        41.69
  vm-130-disk-0   pve Vwi-a-tz--   4.00m data        14.06
  vm-130-disk-1   pve Vwi-a-tz--  32.00g data        30.42
  vm-200-disk-0   pve Vwi-a-tz--  64.00g data        27.82
root@pve1:/dev# dmsetup ls --tree
pve-base--500--disk--0 (253:13)
 └─pve-data-tpool (253:4)
    ├─pve-data_tdata (253:3)
    │  └─ (259:3)
    └─pve-data_tmeta (253:2)
       └─ (259:3)
pve-data (253:5)
 └─pve-data-tpool (253:4)
    ├─pve-data_tdata (253:3)
    │  └─ (259:3)
    └─pve-data_tmeta (253:2)
       └─ (259:3)
pve-root (253:1)
 └─ (259:3)
pve-swap (253:0)
 └─ (259:3)
pve-vm--100--disk--0 (253:11)
 └─pve-data-tpool (253:4)
    ├─pve-data_tdata (253:3)
    │  └─ (259:3)
    └─pve-data_tmeta (253:2)
       └─ (259:3)
pve-vm--100--disk--1 (253:12)
 └─pve-data-tpool (253:4)
    ├─pve-data_tdata (253:3)
    │  └─ (259:3)
    └─pve-data_tmeta (253:2)
       └─ (259:3)
pve-vm--110--disk--0 (253:7)
 └─pve-data-tpool (253:4)
    ├─pve-data_tdata (253:3)
    │  └─ (259:3)
    └─pve-data_tmeta (253:2)
       └─ (259:3)
pve-vm--120--disk--0 (253:8)
 └─pve-data-tpool (253:4)
    ├─pve-data_tdata (253:3)
    │  └─ (259:3)
    └─pve-data_tmeta (253:2)
       └─ (259:3)
pve-vm--130--disk--0 (253:9)
 └─pve-data-tpool (253:4)
    ├─pve-data_tdata (253:3)
    │  └─ (259:3)
    └─pve-data_tmeta (253:2)
       └─ (259:3)
pve-vm--130--disk--1 (253:10)
 └─pve-data-tpool (253:4)
    ├─pve-data_tdata (253:3)
    │  └─ (259:3)
    └─pve-data_tmeta (253:2)
       └─ (259:3)
pve-vm--200--disk--0 (253:6)
 └─pve-data-tpool (253:4)
    ├─pve-data_tdata (253:3)
    │  └─ (259:3)
    └─pve-data_tmeta (253:2)
       └─ (259:3)
 
Also, I see the size of my weekly backup has gone up a lot. This VM is mainly used to run a couple of Docker containers.
I ran ''docker system prune'' inside the VM and freed up 3-4 GB, but nothing crazy. I'm trying to figure out where the 79% and 86% usage that the lvs command reports for the vm-100 disks is coming from...
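For anyone wanting to keep an eye on this: the `Data%` column of `lvs` is the number to watch (100.00 on the `data` pool above is exactly the failure state). A minimal sketch of a threshold check, assuming you feed it the LV name and the `data_percent` value from `lvs --noheadings -o lv_name,data_percent` (the function name and threshold are my own, not a Proxmox tool):

```shell
# Hypothetical helper: warn when a thin pool's Data% crosses a threshold.
check_pool() {
  name="$1"; pct="$2"; threshold="${3:-90}"
  # shell arithmetic is integer-only, so strip the decimal part of Data%
  if [ "${pct%.*}" -ge "$threshold" ]; then
    echo "WARNING: $name is ${pct}% full"
  else
    echo "OK: $name at ${pct}%"
  fi
}

# Values taken from the lvs output above:
check_pool data 100.00
check_pool root 22.00
```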

 
Why not simply delete the VM to free some space and restore the VM from backup (from when it was smaller)? Since the VM could not write all data, there is probably already some file corruption or incompleteness. I can't comment on the space usage of Docker, sorry. Make sure trim/discard is enabled and working all the way (from inside the VM to the thin provisioned LVM) otherwise your backups/used space will never decrease.
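To make the "all the way" part concrete, this is roughly the checklist, sketched so the grep logic is visible; the `sample` line is a stand-in I wrote for real `qm config` output (VMID 100 is from this thread), and on the actual host you would pipe `qm config 100` into the grep instead:

```shell
# Host side: the virtual disk must have discard=on in the VM config.
sample='scsi0: local-lvm:vm-100-disk-0,discard=on,size=64G'
if echo "$sample" | grep -q 'discard=on'; then
  echo "discard enabled on the virtual disk"
fi

# Guest side (run these inside the VM, as root):
#   lsblk --discard    # non-zero DISC-GRAN/DISC-MAX means discard reaches the disk
#   fstrim -av         # trim all mounted filesystems and report freed space
```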
 
Thank you for your help. I deleted a VM I was not using anymore and freed up some space to get things going, but the problem will just happen again soon if I don't find the source of it.
I still don't know how to figure out why the data volume is full.
 
Ok, I learned something :)
I had discard checked on but had never run fstrim.
Ran fstrim -a in the guest and got the space back.
 
