Help clean up free space on the server

user_a

New Member
Dec 2, 2023
I have an SSD on which Proxmox VE is installed. The configuration is as follows:
Code:
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

nfs: Backup_drive
        export /export/PVE
        path /mnt/pve/Backup_drive
        server 192.168.10.12
        content backup
        prune-backups keep-all=1

ncdu /
Screenshot 2023-12-02 132215.png
Don't pay attention to the /mnt folder, as it is a mounted network drive.

But at the same time I see 53 GB of disk space in use!
How do I find these files?
Screenshot 2023-12-02 132444.png
 
I am having the same issue and question. My "local" storage on "pve" shows 71.14 GB of 100.86 GB used. But I can only account for about 3 GB of that.
Screenshot 2023-12-08 at 2.04.17 PM.png

I see 67GB used on /
Screenshot 2023-12-08 at 2.04.01 PM.png

But...where is it used???
Screenshot 2023-12-08 at 2.10.30 PM.png
 
For me it turned out to be very simple... At some point the network drive was unavailable, and some data got written into the /mnt folder directly on the local disk. I had assumed everything under that folder lived on the network share. After unmounting the network drive and scanning the space again, I found the leftover files sitting in that directory.

Try it: ncdu /
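
In case PVE keeps remounting the share while you check, here is a rough sketch of the same procedure (storage name and mount path taken from my storage.cfg above; pvesm set --disable should stop PVE from re-mounting it during the scan):

Code:
# pvesm set Backup_drive --disable 1   # stop PVE from remounting the NFS share during the check
# umount /mnt/pve/Backup_drive
# ncdu -x /                            # -x: stay on the root filesystem, ignore other mounts
# pvesm set Backup_drive --disable 0   # re-enable the storage afterwards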
 
Tried that and it didn't tell me anything I don't already know. There's nothing on / taking up nearly 70GB. The summary showing 71GB used makes no sense to me at all.
 
How the used storage is calculated depends on what kind of storage it is; it is sometimes possible that e.g. df -h doesn't report disk usage accurately, because there is additional context that the command cannot take into account.

For example, if your storage is managed via an LVM thin pool, df might report misleading numbers because it isn't aware of LVM's thin-provisioned logical volumes. Or if you have - let's say - a storage of type "directory" that sits on top of ext4, which in turn sits on top of an LVM LV, the storage will (or rather, can) usually only see what the ext4 filesystem reports to it.

So, it's not as straightforward or simple as it might seem.
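
For instance, on a default LVM-thin setup like the local-lvm storage from the original storage.cfg, a rough sketch of comparing the different views (assuming the default pve volume group and data thin pool):

Code:
# lvs pve    # the Data% of the "data" thin pool is the actual thin pool usage
# vgs pve    # VFree is space in the volume group not allocated to any LV
# df -h /    # only sees the root filesystem and knows nothing about the thin pool

Each tool only reports the layer it knows about, which is exactly where the numbers start to diverge.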

To show a more concrete example, let me demonstrate a similar thing using the ZFS root pool of my workstation:

Code:
# zfs list -r rpool
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool              118G   603G   104K  /rpool
rpool/ROOT         118G   603G    96K  /rpool/ROOT
rpool/ROOT/pve-1   118G   603G   118G  /
rpool/data          96K   603G    96K  /rpool/data
Code:
# df -h | head -n 1; df -h | grep rpool
Filesystem                          Size  Used Avail Use% Mounted on
rpool/ROOT/pve-1                    721G  118G  604G  17% /
rpool                               604G  128K  604G   1% /rpool
rpool/ROOT                          604G  128K  604G   1% /rpool/ROOT
rpool/data                          604G  128K  604G   1% /rpool/data

As you can see, my pool has approximately 603G of space available. But comparing the outputs of the two commands, you can see that df thinks rpool itself isn't actually using any data - zfs list and df simply differ in how they report used and available storage.
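
As a side note, if you want ZFS's own breakdown of where that USED figure goes, there's a convenience column set for it:

Code:
# zfs list -o space -r rpool

The USEDSNAP, USEDDS and USEDCHILD columns show how much of the used space sits in snapshots, in the dataset itself, and in child datasets respectively.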

I think you get the point - many technologies intermingling can lead to discrepancies.

Now, in regards to @ssspinball's case there could be a couple different things going on:
  1. Mountpoint shadowing. You already checked this from what I understand, but still posting this for completeness's sake:
    Maybe something was written to a directory where a drive or network share was later mounted. You can bind-mount your / to a second location like this:
    Code:
    # mkdir -p /root-alt
    # mount -o bind / /root-alt
    You can then check if there's something in /root-alt/macmini_backups that's been hidden by your network share.

  2. Inode exhaustion. Your filesystem usually has a limited number of inodes; running out is unlikely unless you have lots of small files. You can check this via df -i (see the sketch after this list).

  3. As @alexskysilk mentioned, you might share the space of your root partition with something else. If you're on LVM, you can check via lvs, lvdisplay, vgs, vgdisplay, pvs and pvdisplay. On ZFS you can check your datasets / pools via zfs list. Really depends on what you're using.

  4. Also unlikely: Something on your system is keeping files open that have already been deleted from disk (e.g. large log files), so their space hasn't been freed yet. You can check this via lsof:
    Code:
    # lsof -n | grep deleted &> lsof.txt
    # less lsof.txt
    (Redirecting to a file here because the output may be large.) The SIZE/OFF column (usually the 7th or 8th) contains the file size in bytes. These deleted-but-still-open files are still allocated on disk and (IIRC) also still count towards the inode usage mentioned in point 2; the space is only actually freed once whatever program keeps them open stops using them. There's a shortcut for this check in the sketch right after this list.
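
To make checks 2 and 4 a bit quicker, here is the sketch referenced above; +L1 is lsof's filter for open files with a link count below 1, i.e. files that have been deleted but are still held open:

Code:
# df -i /                  # an IUse% close to 100% would point to inode exhaustion (point 2)
# lsof -nP +L1 | less      # deleted-but-still-open files, with their sizes (point 4)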
That's all I can think of off the top of my head - if anybody has any other ideas, please let me know. @ssspinball, if you're willing to provide more details on your storage configuration, I could perhaps help you figure out what's going on: cat /etc/pve/storage.cfg

Also, for the curious: Here's a great thread on SO that goes in depth about measuring disk usage.
 
That was it - mountpoint shadowing (point 1)! After bind-mounting /, I found backups hidden under that mount point that I wasn't aware were on the local disk at all, from times when the network mount was connected.

Thanks @Max Carrara ! :)
 
You're welcome! Glad that fixed it!