Deleted many GBs from datastore but only freed a few MBs!?!?!?!?!

proxwolfe

Well-Known Member
Jun 20, 2020
499
51
48
49
Hi,

I have a PBS instance whose datastore became full. Pruning didn't help because garbage collection couldn't complete successfully due to a lack of free disk space.

So I decided to manually remove backups that I no longer need (I removed the entire backup group for each obsolete VM, not just single backup snapshots). Altogether I deleted a couple of hundred GBs' worth of data (okay, probably part of that was compressed zeros, but still). But in the dashboard the available space only went from 28 MB to 110 MB.

So where is my space?

Is this still a matter of running a successful garbage collection? The data is gone from the datastore when I look at it in the terminal. If it still requires a garbage collection, what can I do to make one run through to completion?

Thanks!
 
And yes, GC needs to run.
Thanks. That then leaves me with this problem:
Pruning didn't help because garbage collection couldn't complete successfully due to a lack of free disk space.
When I manually start the GC, it stops at about 7%, complaining about insufficient disk space. I'm assuming the same will happen when GC runs automatically.

Any idea how I can give PBS the disk space it needs to delete stuff?

The datastore sits on a zpool. So I guess I could add another vdev for the operation and then remove it again after the space has been freed. The problem is that this system is remote, so I can't just walk over and put in another disk...

Thanks!
 
Depends. If your pool uses any raidz1/2/3 vdev, you won't be able to remove the new disk afterwards. See the limitations of the "zpool remove" command:
https://openzfs.github.io/openzfs-docs/man/8/zpool-remove.8.html said:
Top-level vdevs can only be removed if the primary pool storage does not contain a top-level raidz vdev, all top-level vdevs have the same sector size, and the keys for all encrypted datasets are loaded
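If the pool only consists of mirrors or single-disk vdevs, the rough flow would look something like this (pool name "tank" and device "/dev/sdX" are just placeholders, and the device name for the removal should match what "zpool status" shows):

Code:
zpool status tank              # check the layout: no raidz top-level vdevs allowed for removal
zpool add tank /dev/sdX        # temporarily add the extra disk as a new top-level vdev
# ... run prune + GC until enough space is free ...
zpool remove tank /dev/sdX     # evacuate the data and detach the disk again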

And you probably only deleted the index files and logs of those GBs of backups, which amount to just a few MBs. The chunk files (which contain the actual GBs of data) referenced by the deleted index files are still there and can only be removed by a GC.
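You can see this on the PBS host: the bulk of the data lives in the hidden ".chunks" directory of the datastore, while the per-guest directories only hold small index files and logs (the datastore path below is just an example, adjust it to yours):

Code:
du -sh /mnt/datastore/backup/.chunks   # the deduplicated chunk data - only GC shrinks this
du -sh /mnt/datastore/backup/vm        # index files and logs - what your manual deletion removed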

For the future you really should set a quota so you can't end up in such a situation in the first place. A ZFS pool becomes slow once it's more than 80-90% full, so it's a good idea to set a quota so it can't be filled beyond 90% by accident. And you should monitor the pool with email alerts so you get notified when it exceeds 80% and can delete backups or add more disks.
If you then accidentally fill it to 90% so that nothing works anymore because no space is left, you can temporarily raise the quota from 90% to 95% to get some usable space so your GC can run again. Once the pool is back below 80% after deleting backups, lower the quota to 90% again.
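As a sketch (the dataset name is made up, use whatever "zfs list" shows for your datastore):

Code:
zfs set quota=900G tank/pbs-datastore   # normal cap, roughly 90% of a 1 TB pool
zfs set quota=950G tank/pbs-datastore   # temporarily raised so prune + GC can run again
# ... prune and run GC until the pool is below ~80% again ...
zfs set quota=900G tank/pbs-datastore   # back to the normal cap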

A datastore is just a folder with tons of files and subdirectories. You could move that datastore folder to any external disk, SAN, NAS or whatever has enough space, run the GC there, and then move the trimmed datastore folder back to your ZFS pool.
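A rough sketch of that idea (the paths are hypothetical, you would want to pause backup jobs first and keep the original folder until the copy is verified):

Code:
rsync -a /mnt/datastore/backup/ /mnt/external/backup/   # copy the datastore to the roomier storage
mv /mnt/datastore/backup /mnt/datastore/backup.old      # keep the original around for now
ln -s /mnt/external/backup /mnt/datastore/backup        # PBS keeps using the configured path
# ... run the GC, then copy the trimmed datastore back and remove the symlink ...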
 
Actually, I am not sure which path needs to have "enough" free space during GC. My blind guess would be "/tmp", which sits on the root filesystem "/" on my PBS.

Any chance you can delete other files? Sometimes there are very old logfiles in "/var/log/", and a large journal can be shrunk with "journalctl --vacuum-size=100M". And "apt-get clean" should remove cached packages from "/var/cache/apt/archives/".
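For example (all standard commands; the 100M value is just a suggestion):

Code:
journalctl --vacuum-size=100M          # shrink the systemd journal to about 100 MB
apt-get clean                          # drop cached .deb files from /var/cache/apt/archives/
du -sh /var/log/* | sort -h | tail     # find unusually large old logfiles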

Are you on ZFS? What does "zfs list" show?

Not on ZFS? What does "df -h" show? Perhaps you can _move_ some folders to another filesystem and set a symlink so nothing breaks. Be careful though, I am just brainstorming...
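Roughly like this (the folder is only a placeholder, pick something large and non-critical):

Code:
df -h                                                  # see which filesystem is actually full
mv /path/to/big-folder /mnt/other-disk/big-folder      # move it to a filesystem with space
ln -s /mnt/other-disk/big-folder /path/to/big-folder   # leave a symlink so paths keep working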


Good luck!
 
