How to reserve space for one LVM thin volume when thin pool is full?

shodan
Hi,


I have an LVM2 thin pool on my Proxmox system that contains multiple thin volumes. Occasionally, one of the volumes unexpectedly grows very large, causing the thin pool to run out of space. This usually isn't a big problem: I just delete some data, maybe reboot the system, and I've never experienced any corruption from it.


However, there's one specific thin volume (root) that I'd like to always be able to write to, even when the thin pool is full. It doesn't need much; reserving just 1 GB of space would be enough.


Is there a way to reserve space for a specific thin volume like root, so that it still has 1 GB of writable space available when the rest of the thin pool is full and goes read-only?

I am very space-constrained on a 256 GB SSD, and given SSD prices it does not look like I will be able to upgrade any time soon. I also can't reserve a large amount of empty space for this volume as a thick volume.

One potential solution was to create 5 dummy volumes of 1 GB each, and set up a script that, when space is running out, deletes these dummy volumes and sends me a warning email. Maybe it could also shut down VMs that have recently grown into the buffer space of my thin pool.
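Roughly what I had in mind - an untested sketch, and the VG/pool names pve/data are just the Proxmox defaults, adjust to your layout:

```bash
#!/bin/bash
# Create five 1 GB thin volumes that act as a space reserve.
# VG "pve" and thin pool "data" are assumed names - adjust to your layout.
VG=pve
POOL=data

for i in 1 2 3 4 5; do
    # -V sets the virtual size of the thin volume
    lvcreate -V 1G --thin -n "reserve$i" "$VG/$POOL"
    # Fill it so the space is actually allocated in the pool;
    # an untouched thin volume reserves nothing until it is written to.
    dd if=/dev/zero of="/dev/$VG/reserve$i" bs=1M count=1024 oflag=direct
done
```

Freeing one of them later would then be a single `lvremove -y pve/reserve1`, which should immediately return its blocks to the pool.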

I also HAVE to use LVM; I can't use ZFS or Ceph.


Thanks!
 
Would it be possible to "exile" this one specific volume into a small second thin pool (or even just a plain LV of the maximum size required) on the same disk?
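Just to sketch what I mean - completely untested, and the names and sizes are only examples, assuming the stock pve VG still has unallocated extents:

```bash
# Option A: a small dedicated thin pool just for root
lvcreate --type thin-pool -L 12G -n rootpool pve
lvcreate -V 10G --thin -n root2 pve/rootpool

# Option B: a plain (thick) LV of the maximum size root should ever need
lvcreate -L 12G -n root2 pve
```

Either way the data would have to be copied across afterwards (dd, or restore from a backup), and whatever size you give it is no longer available to the main pool.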
 
Would it then be possible for this second thin pool to automatically grow, taking space away from the main thin pool?

Right now this root volume only needs 10G. I don't know how much temporary space it might need in the future, possibly all of what is left in the main thin pool.

Do there exist ready-made scripts to perform this task of resizing the allocated space of multiple thin pools in a volume group?
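From what I can tell, a thin pool can at least be grown online with lvextend as long as the volume group still has free extents, but as far as I know a thin pool cannot be shrunk, so space could only ever flow in one direction. Something like this (names taken from the example above, untested):

```bash
# Grow the second pool by 5G out of free space in the VG (if any is left).
lvextend -L +5G pve/rootpool

# See how full each pool and thin volume actually is.
lvs -a -o lv_name,lv_size,data_percent,metadata_percent pve
```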
 
I don't know of any way that one volume could "push" another one out of the way, sorry.
So my suggestion would be either/or, i.e. all data from that one volume would have to go into its own LV, which would have to be configured to be big enough. Any overhead would be lost to the combined system as the price for this safety margin.
But perhaps somebody with more experience will come up with a better suggestion.
 
Look around Amazon for "Hynix Beetle" external SSD and "SSK SSD" - you can get USB3/USB-C for under $100, and they offer credit / payments.

Unknown TBW rating, but it will give you pretty much immediately more free space for a separate dedicated ZFS (lz4-compressed) or LVM-thin pool.

https://github.com/kneutron/ansitest/blob/master/proxmox/proxmox-create-additional-lvm-thin.sh
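The gist of setting up a separate lvm-thin pool on a fresh external disk is roughly this - the device and all names below are placeholders, not the exact script, so double-check before running anything destructive:

```bash
#!/bin/bash
# WARNING: destroys data on the target disk - set DISK to your actual device first.
DISK=/dev/sdX

pvcreate "$DISK"
vgcreate vgext "$DISK"
# Use most of the disk for the thin pool, leaving headroom for metadata.
lvcreate --type thin-pool -l 95%FREE -n extpool vgext
# Register it as LVM-Thin storage in Proxmox (or add it via the GUI).
pvesm add lvmthin ext-thin --vgname vgext --thinpool extpool
```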

I'm running a 2018 Intel Mac mini single-disk rpool off a 238-GB-and-change SSK external SSD with write mitigation, and have a spare 1TB Beetle to replace it if needed.

/ of course, make sure everything is on UPS power and you have NUT configured to shut down nicely
 
BTW, if you don't have backups - if your single 256GB SSD dies, you have NOTHING to restore.

/ srsly, start a gofundme or something if you don't have budget for backups
 
Kingneutron, thanks for the suggestion, but this is not an option.

So far the best strategy appears to be a script that monitors the space left in the thin pool 10 times per second.

If the remaining space stays below a certain threshold for a certain amount of time, whatever has been consuming the space will be suspended. And if space runs out entirely, all virtual machines and LXC containers will be suspended and the dummy thin volume will be deleted to immediately free space for the one volume that must not run out.

I think that's the best solution so far. A kind of "thin pool watchdog".
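A very rough, untested sketch of that watchdog idea, using the data_percent column from lvs - the VG/pool names, the threshold and the reserve volume name are all assumptions:

```bash
#!/bin/bash
# Rough "thin pool watchdog" sketch - names and thresholds are assumptions.
VG=pve
POOL=data
THRESHOLD=90      # percent of pool data space used before we act
INTERVAL=0.1      # 10 checks per second, as described above

while true; do
    # data_percent looks like " 45.23"; keep only the integer part
    used=$(lvs --noheadings -o data_percent "$VG/$POOL" | tr -d ' ' | cut -d. -f1)
    if [ "${used:-0}" -ge "$THRESHOLD" ]; then
        logger "thin pool $VG/$POOL at ${used}% - suspending guests, freeing reserve"
        # Suspend all VMs and LXC containers (qm/pct are the Proxmox CLIs).
        for vmid in $(qm list | awk 'NR>1 {print $1}'); do qm suspend "$vmid"; done
        for ctid in $(pct list | awk 'NR>1 {print $1}'); do pct suspend "$ctid"; done
        # Drop one dummy reserve volume so the root volume keeps writable space.
        lvremove -y "$VG/reserve1"
        # Warning mail (assumes a working local mail setup).
        echo "thin pool $VG/$POOL hit ${used}%" | mail -s "thin pool watchdog" root
        break
    fi
    sleep "$INTERVAL"
done
```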

It's kind of weird that no one has a solution for this problem other than "just buy more SSDs", which is soon no longer going to be an option for basically everyone. So we're going to need some actual solutions for this issue.
 