[SOLVED] Proxmox <local> disk space

molnart

I have noticed that the free space on my local partition is decreasing significantly, even though in theory there should be basically nothing on this partition except the Proxmox system itself and a few ISO images or LXC templates.

I was trying to check with ncdu what's on the disk, but it shows only 8.4 GB used by the system

So the big question is: where is the rest of the data hiding?
 
hi,

what do you get from du -ha /var/lib/vz | sort -h ?
 
Code:
# du -h /var/lib/vz | sort -h
4.0K    /var/lib/vz/dump
4.0K    /var/lib/vz/images
4.0K    /var/lib/vz/template/qemu
1.3G    /var/lib/vz/template/cache
2.0G    /var/lib/vz/template/iso
3.2G    /var/lib/vz
3.2G    /var/lib/vz/template
 
can you also post outputs from:

* cat /etc/pve/storage.cfg
* df -h
* lsblk -f
 
sure
Code:
# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,images,iso
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

nfs: omv-backup
        export /export/Backups
        path /mnt/pve/omv-backup
        server 192.168.50.8
        content backup,images,rootdir
        maxfiles 5

nfs: omv_data
        export /export/OMV-data
        path /mnt/pve/omv_data
        server 192.168.50.8
        content snippets

dir: backup-ext
        path /mnt/backup
        content backup
        prune-backups keep-last=2,keep-monthly=1
        shared 0

Code:
df -h
Filesystem                     Size  Used Avail Use% Mounted on
udev                           7.8G     0  7.8G   0% /dev
tmpfs                          1.6G  156M  1.4G  10% /run
/dev/mapper/pve-root            16G   14G  1.7G  89% /
tmpfs                          7.8G   31M  7.8G   1% /dev/shm
tmpfs                          5.0M     0  5.0M   0% /run/lock
tmpfs                          7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/fuse                       30M   28K   30M   1% /etc/pve
192.168.50.8:/export/OMV-data   43G  802M   42G   2% /mnt/pve/omv_data
192.168.50.8:/export/Backups   3.6T  3.5T  157G  96% /mnt/pve/omv-backup
tmpfs                          1.6G     0  1.6G   0% /run/user/0

Code:
# lsblk -f
NAME                                FSTYPE    LABEL     UUID                                   FSAVAIL FSUSE% MOUNTPOINT
sda
└─sda1                              btrfs     Data1     296f7598-8d50-4ab2-83ec-d8c4e00a4458
sdb
└─sdb1                              btrfs     Data2     d442c777-f428-4fdf-b852-3438d50faf1b
sdc
└─sdc1                              btrfs     Data3     3598073d-4e12-4467-b4cd-a699e452d4cd
sdd
└─sdd1                              ext4      Vertex3   d6505444-e522-417e-adc9-654b169e5b88
sde
└─sde1                              ext4      snapraid2 4ca1dab4-dabe-459f-9ced-bdb6f0d4fcb2
sdf
└─sdf1
sdg
├─sdg1
├─sdg2                              vfat                BF6E-6C30
└─sdg3                              LVM2_memb           lMyste-QxI3-Ckni-Q37t-DbNg-Frr9-QlDBBG
  ├─pve-root                        ext4                731d604b-c480-4a30-bec1-ebe7b173c369      1.7G    84% /
  ├─pve-swap                        swap                4edfa5a7-978e-4bce-8ee6-3f8f904670d4                  [SWAP]
  ├─pve-data_tmeta
  │ └─pve-data-tpool
  │   ├─pve-data
  │   ├─pve-vm--102--disk--0
  │   ├─pve-vm--103--disk--0        ext4                01c6957c-9ea6-42c3-99ea-d5fd18075546
  │   ├─pve-vm--104--disk--0
  │   ├─pve-vm--105--disk--0        ext4                9e97c5de-509b-404c-bb5b-65edd95740af
  │   ├─pve-vm--100--disk--0
  │   ├─pve-vm--109--disk--0        ext4                cbef9794-ae1f-4bc0-9f66-fe06cf92ef42
  │   ├─pve-vm--110--disk--0        ext4                7d8bc69c-ec1c-40ae-93cc-9bcbb319696a
  │   ├─pve-vm--111--disk--0
  │   ├─pve-vm--101--disk--0
  │   ├─pve-vm--101--state--v20_7_8
  │   └─pve-vm--106--disk--1        ext4                5efd511c-c5a6-4018-be12-994a908f727a
  └─pve-data_tdata
    └─pve-data-tpool
      ├─pve-data
      ├─pve-vm--102--disk--0
      ├─pve-vm--103--disk--0        ext4                01c6957c-9ea6-42c3-99ea-d5fd18075546
      ├─pve-vm--104--disk--0
      ├─pve-vm--105--disk--0        ext4                9e97c5de-509b-404c-bb5b-65edd95740af
      ├─pve-vm--100--disk--0
      ├─pve-vm--109--disk--0        ext4                cbef9794-ae1f-4bc0-9f66-fe06cf92ef42
      ├─pve-vm--110--disk--0        ext4                7d8bc69c-ec1c-40ae-93cc-9bcbb319696a
      ├─pve-vm--111--disk--0
      ├─pve-vm--101--disk--0
      ├─pve-vm--101--state--v20_7_8
      └─pve-vm--106--disk--1        ext4                5efd511c-c5a6-4018-be12-994a908f727a
 
Is /mnt/backup a share? It doesn't seem to be mounted, so that might be the problem.
And we have the culprit. It's an external drive used for redundant backups of selected VMs that apparently got unmounted a few months ago, and instead of throwing an error PVE kept backing up to local... The remaining question is: where are these phantom backups on local, and how do I get rid of them?
 
From Proxmox's view it's just a folder; it doesn't know whether an external disk is supposed to be mounted there.
If you unmount the drive, the backups will still be there. You can move them to another directory and even move them back onto the external disk once it is mounted again.
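A rough sketch of that, assuming the backups ended up in the usual dump/ subfolder under /mnt/backup (adjust the paths to whatever du actually shows on your system):
Code:
# with the external drive NOT mounted, check what vzdump left behind on pve-root
du -h --max-depth=2 /mnt/backup
# move the stranded backups out of the mountpoint; a mv within the same
# filesystem is only a rename, so it needs no extra space on the root disk
mkdir -p /root/stranded-backups
mv /mnt/backup/dump/vzdump-* /root/stranded-backups/
# remount the external drive (via fstab or mount /dev/sdX1 /mnt/backup),
# then move the backups back onto it if you want to keep them
mount /mnt/backup
mv /root/stranded-backups/vzdump-* /mnt/backup/dump/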
 
Hi,
I'm having a very similar issue: pve-root free space is gradually decreasing.
Code:
root@pve1:~# du -ha /var/lib/vz | sort -h
4.0K    /var/lib/vz/dump
4.0K    /var/lib/vz/images
4.0K    /var/lib/vz/template/cache
4.0K    /var/lib/vz/template/iso
12K     /var/lib/vz/template
24K     /var/lib/vz
root@pve1:~#
Code:
root@pve1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,backup,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

zfspool: NVME
        pool NVME
        content images,rootdir
        mountpoint /NVME
        nodes pve1

zfspool: pool1
        pool pool1
        content images,rootdir
        mountpoint /pool1
        nodes pve1

dir: NVME_mp
        path /mnt/nvme
        content vztmpl,backup,images,iso,snippets,rootdir
        prune-backups keep-all=1
        shared 0

dir: pool1_mp
        path /mnt/pool1
        content backup,vztmpl,rootdir,snippets,iso,images
        prune-backups keep-all=1
        shared 1
Code:
root@pve1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   32G     0   32G   0% /dev
tmpfs                 6.3G  3.1M  6.3G   1% /run
/dev/mapper/pve-root   94G   92G     0 100% /
tmpfs                  32G   40M   32G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/sda2             511M  336K  511M   1% /boot/efi
NVME                  900G  128K  900G   1% /NVME
NVME/nvme             900G  128K  900G   1% /mnt/NVME
pool1                 9.0T  128K  9.0T   1% /pool1
pool1/storage         9.0T  128K  9.0T   1% /mnt/storage
pool1/pool1            11T  1.9T  9.0T  17% /mnt/pool1
/dev/fuse             128M   20K  128M   1% /etc/pve
tmpfs                 6.3G     0  6.3G   0% /run/user/0
Code:
root@pve1:~# lsblk -f
NAME FSTYPE FSVER LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINT
sda
├─sda1
│
├─sda2
│    vfat   FAT32       3701-D3DD                               510.7M     0% /boot/efi
└─sda3
     LVM2_m LVM2        N3EJj0-sAGD-gjrw-RFBT-pnsL-Iwf9-6YUc8K
  ├─pve-swap
  │  swap   1           338091f9-de81-4306-b395-291a650f9966                  [SWAP]
  ├─pve-root
  │  ext4   1.0         f9543ea1-9d63-408d-b6d3-cd007acc3a00         0    98% /
  ├─pve-data_tmeta
  │
  │ └─pve-data-tpool
  │
  │   ├─pve-data
  │   │                                                                         
  │   ├─pve-vm--200--disk--0
  │   │                                                                         
  │   └─pve-vm--200--disk--1
  │                                                                             
  └─pve-data_tdata

    └─pve-data-tpool

      ├─pve-data
      │                                                                         
      ├─pve-vm--200--disk--0
      │                                                                         
      └─pve-vm--200--disk--1
                                                                                
sdb
├─sdb1
│    zfs_me 5000  pool1 798891316861960
└─sdb9

sdc
├─sdc1
│    zfs_me 5000  pool1 798891316861960
└─sdc9

sdd
├─sdd1
│    zfs_me 5000  pool1 798891316861960
└─sdd9

nvme0n1
│
├─nvme0n1p1
│    zfs_me 5000  NVME  12504286443149755293
└─nvme0n1p9



Best regards
 
Another thing that can take up a lot of space on older installations is old Linux kernels. For some reason apt autoremove does not take care of them, so they have to be uninstalled manually. I gained 3 GB of free space just by running dpkg --list | grep pve-kernel and then apt purge pve-kernel-5.15.3*.
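Roughly the sequence I used (the 5.15.3 series is just what happened to be obsolete on my machine; check the running kernel first and never purge that one):
Code:
# show the running kernel - never purge this one
uname -r
# list all installed pve-kernel packages
dpkg --list | grep pve-kernel
# purge an obsolete series, in my case:
apt purge pve-kernel-5.15.3*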
 
It's so weird because my syslog looks like it's stuck in a loop. Every few minutes another 0.01 GB is added to my pve-root usage.
Code:
Apr 08 00:32:08 pve1 pveproxy[4215]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1791.
Apr 08 00:32:08 pve1 pveproxy[4215]: error writing access log
Apr 08 00:32:08 pve1 pveproxy[2143]: worker 4216 finished
Apr 08 00:32:08 pve1 pveproxy[2143]: starting 1 worker(s)
Apr 08 00:32:08 pve1 pveproxy[2143]: worker 4218 started
Apr 08 00:32:09 pve1 pveproxy[4215]: worker exit
Apr 08 00:32:09 pve1 pveproxy[2143]: worker 4215 finished
Apr 08 00:32:09 pve1 pveproxy[2143]: starting 1 worker(s)
Apr 08 00:32:09 pve1 pveproxy[2143]: worker 4220 started
Apr 08 00:32:09 pve1 pveproxy[4217]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1791.
Apr 08 00:32:09 pve1 pveproxy[4217]: error writing access log
Apr 08 00:32:10 pve1 pveproxy[4217]: worker exit
Apr 08 00:32:10 pve1 pveproxy[2143]: worker 4217 finished
Apr 08 00:32:10 pve1 pveproxy[2143]: starting 1 worker(s)
Apr 08 00:32:10 pve1 pveproxy[2143]: worker 4226 started
Apr 08 00:32:10 pve1 pveproxy[4218]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1791.
Apr 08 00:32:10 pve1 pveproxy[4218]: error writing access log
Apr 08 00:32:11 pve1 pveproxy[4218]: worker exit
Apr 08 00:32:11 pve1 pveproxy[2143]: worker 4218 finished
Apr 08 00:32:11 pve1 pveproxy[2143]: starting 1 worker(s)
Apr 08 00:32:11 pve1 pveproxy[2143]: worker 4227 started
Apr 08 00:32:11 pve1 pveproxy[4220]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1791.
Apr 08 00:32:11 pve1 pveproxy[4220]: error writing access log
Apr 08 00:32:11 pve1 pveproxy[4220]: worker exit
Apr 08 00:32:11 pve1 pveproxy[2143]: worker 4220 finished
Apr 08 00:32:11 pve1 pveproxy[2143]: starting 1 worker(s)
Apr 08 00:32:11 pve1 pveproxy[2143]: worker 4229 started
Apr 08 00:32:11 pve1 pveproxy[4226]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1791.
Apr 08 00:32:11 pve1 pveproxy[4226]: error writing access log
Apr 08 00:32:11 pve1 pveproxy[4226]: worker exit
Apr 08 00:32:11 pve1 pveproxy[2143]: worker 4226 finished
Apr 08 00:32:11 pve1 pveproxy[2143]: starting 1 worker(s)
Apr 08 00:32:11 pve1 pveproxy[2143]: worker 4232 started
Apr 08 00:32:11 pve1 pveproxy[4227]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1791.
Apr 08 00:32:11 pve1 pveproxy[4227]: error writing access log
Apr 08 00:32:11 pve1 pveproxy[4227]: worker exit
Apr 08 00:32:11 pve1 pveproxy[4229]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1791.
Apr 08 00:32:11 pve1 pveproxy[4229]: error writing access log
Apr 08 00:32:11 pve1 pveproxy[2143]: worker 4227 finished
Apr 08 00:32:11 pve1 pveproxy[2143]: starting 1 worker(s)
Apr 08 00:32:11 pve1 pveproxy[2143]: worker 4233 started
Apr 08 00:32:12 pve1 pveproxy[4229]: worker exit
Apr 08 00:32:12 pve1 pveproxy[2143]: worker 4229 finished
Apr 08 00:32:12 pve1 pveproxy[2143]: starting 1 worker(s)
Apr 08 00:32:12 pve1 pveproxy[2143]: worker 4234 started
Apr 08 00:32:12 pve1 pveproxy[4232]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1791.
Apr 08 00:32:12 pve1 pveproxy[4232]: error writing access log
Apr 08 00:32:12 pve1 pveproxy[4232]: worker exit
Apr 08 00:32:12 pve1 pveproxy[2143]: worker 4232 finished
Apr 08 00:32:12 pve1 pveproxy[2143]: starting 1 worker(s)
Apr 08 00:32:12 pve1 pveproxy[2143]: worker 4236 started
Apr 08 00:32:12 pve1 pveproxy[4233]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1791.
Apr 08 00:32:12 pve1 pveproxy[4233]: error writing access log
Apr 08 00:32:12 pve1 pveproxy[4233]: worker exit
Apr 08 00:32:12 pve1 pveproxy[4236]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1791.
Apr 08 00:32:12 pve1 pveproxy[4236]: error writing access log
Apr 08 00:32:12 pve1 pveproxy[2143]: worker 4233 finished
Apr 08 00:32:12 pve1 pveproxy[2143]: starting 1 worker(s)
Apr 08 00:32:12 pve1 pveproxy[2143]: worker 4239 started
Apr 08 00:32:13 pve1 pveproxy[4236]: worker exit
Apr 08 00:32:13 pve1 pveproxy[2143]: worker 4236 finished
Apr 08 00:32:13 pve1 pveproxy[2143]: starting 1 worker(s)
Apr 08 00:32:13 pve1 pveproxy[2143]: worker 4243 started
Apr 08 00:32:13 pve1 pveproxy[4234]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1791.
Apr 08 00:32:13 pve1 pveproxy[4234]: error writing access log
Apr 08 00:32:14 pve1 pveproxy[4234]: worker exit
Apr 08 00:32:14 pve1 pveproxy[4239]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1791.
Apr 08 00:32:14 pve1 pveproxy[4239]: error writing access log
Apr 08 00:32:14 pve1 pveproxy[2143]
 
Wooohooo! Solved my problem too :) I had a typo in the mount path of a USB drive and the backups were piling up in "/medai" instead of /media, without showing up in the GUI.

Thanks a lot
 
This could have been easily avoided by setting the "is_mountpoint" option on your directory storage. Then the storage would fail instead of filling up your root filesystem until PVE stops working: pvesm set YourStorageID --is_mountpoint /path/to/your/mountpoint
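For example, with the backup-ext storage from the first post that would be something like the following (storage ID and mountpoint taken from the storage.cfg posted above; see man pvesm for the exact option syntax):
Code:
# mark /mnt/backup as a required mountpoint for the backup-ext storage;
# if nothing is mounted there, PVE disables the storage instead of
# silently writing backups onto the root filesystem
pvesm set backup-ext --is_mountpoint /mnt/backup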
 
Thanks, I'll look into it. I've learned a lot in the past few weeks.
 
