SOLVED - You have not turned on protection against thin pools running out of space.

filipealvarez

Hi,

I have a VM with a scheduled snapshot every 2 hours.

The sum of these snapshots' sizes is larger than the whole volume group, as shown in the message below:

WARNING: Sum of all thin volume sizes (<3.52 TiB) exceeds the size of thin pool dados/ssd and the size of whole volume group (<1.75 TiB).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.

But, look:

Code:
root@silver1:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
snap_vm-222-disk-0_autoexpediente210615160003 dados Vri---tz-k 120.00g ssd vm-222-disk-0
snap_vm-222-disk-0_autoexpediente210615180003 dados Vri---tz-k 120.00g ssd vm-222-disk-0
snap_vm-222-disk-0_autoexpediente210616124913 dados Vri---tz-k 120.00g ssd vm-222-disk-0
snap_vm-222-disk-0_autoexpediente210616180004 dados Vri---tz-k 120.00g ssd vm-222-disk-0
snap_vm-222-disk-0_autoexpediente210617100003 dados Vri---tz-k 120.00g ssd vm-222-disk-0
snap_vm-222-disk-1_autoexpediente210615160003 dados Vri---tz-k 460.00g ssd vm-222-disk-1
snap_vm-222-disk-1_autoexpediente210615180003 dados Vri---tz-k 460.00g ssd vm-222-disk-1
snap_vm-222-disk-1_autoexpediente210616124913 dados Vri---tz-k 460.00g ssd vm-222-disk-1
snap_vm-222-disk-1_autoexpediente210616180004 dados Vri---tz-k 460.00g ssd vm-222-disk-1
snap_vm-222-disk-1_autoexpediente210617100003 dados Vri---tz-k 460.00g ssd vm-222-disk-1
ssd dados twi-aotz-- <1.75t 17.53 39.19
vm-222-disk-0 dados Vwi-aotz-- 120.00g ssd 89.71
vm-222-disk-1 dados Vwi-aotz-- 460.00g ssd 27.47

The pool is using only 17.53% of its data space and about 39.19% of its metadata space.

The question is: is this safe, assuming I do not let the pool's data or metadata usage fill up?

Are these snapshots fine (i.e. usable)?

Thank you,
 
Yes, technically LVM-thin snapshots are 'normal' volumes and could be used independently, so LVM simply sums up the sizes and tells you that the sum of the (theoretical) sizes of all volumes is bigger than your thin pool.

As long as you monitor your space usage, nothing should happen.
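
One quick way to keep an eye on it (just a minimal example; data_percent and metadata_percent are standard lvs report fields) is to check the pool's fill level directly:

Code:
lvs -o lv_name,data_percent,metadata_percent dados/ssd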
 
But how do I enable this protection?
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
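
For reference, the settings the warning refers to live in the activation section of /etc/lvm/lvm.conf; a minimal sketch with purely illustrative values is below. Note that automatic extension can only do anything if the volume group still has unallocated space for the pool to grow into, which is not the case when the pool already spans the whole VG.

Code:
# /etc/lvm/lvm.conf, activation section (illustrative values only)
activation {
    # start extending the thin pool once it reaches 80% data usage ...
    thin_pool_autoextend_threshold = 80
    # ... and grow it by 20% of its current size each time (needs free space in the VG)
    thin_pool_autoextend_percent = 20
}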
 
Is there a way to disable this warning?
Not that I know of. These warnings come directly from LVM-thin, so maybe there is an option for /etc/lvm/lvm.conf (I looked, though, and didn't find one).
 
I can see from the LVM2 source code that some of the warnings can be disabled. I would need to read the source further to trace out how that is silenced and whether it would also silence other needed warnings; I wouldn't want to throw the baby out with the bathwater, so to speak. Unfortunately, the log calls there only apply to some of the warnings: the first warning, where it calculates and displays the total size of the thin volumes, is emitted through a different logging call.

C:
if (sz != UINT64_C(~0)) {
    log_warn("WARNING: Sum of all thin volume sizes (%s) exceeds the "
        "size of thin pool%s%s%s (%s).",
        display_size(cmd, thinsum),
        more_pools ? "" : " ",
        more_pools ? "s" : display_lvname(pool_lv),
        txt,
        (sz > 0) ? display_size(cmd, sz) : "no free space in volume group");
    if (max_threshold > 99 || !min_percent)
        log_print_unless_silent("WARNING: You have not turned on protection against thin pools running out of space.");
    if (max_threshold > 99)
        log_print_unless_silent("WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.");
    if (!min_percent)
        log_print_unless_silent("WARNING: Set activation/thin_pool_autoextend_percent above 0 to specify by how much to extend thin pools reaching the threshold.");
    /* FIXME Also warn if there isn't sufficient free space for one pool extension to occur? */
}

https://github.com/lvmteam/lvm2/blo...4379322c2/lib/metadata/thin_manip.c#L413-L428

Looking at the source code, determining how to set log_print_unless_silent to silent would be a pointless exercise, because it would not silence that first warning at the top. I think my course of action is to raise the topic on the LVM2 mailing list, stating my use case, and propose a patch that adds a configuration option to silence this whole set of warnings.
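
(For completeness, and assuming I read the code right: the "silent" mode that log_print_unless_silent checks can apparently be enabled via log/silent in lvm.conf or by passing -qq on the command line, but as said, that still leaves the first log_warn line.)

Code:
# assumed ways to enable LVM "silent" mode - would hide the log_print_unless_silent
# lines, but NOT the initial log_warn summary line
lvs -qq
# or persistently, in /etc/lvm/lvm.conf:
# log {
#     silent = 1
# }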

Thanks for taking a look at this!
 
Ok. I have run this one down to the point where it needs to be fixed upstream. There are two threads about this very issue on their mailing list, one of which includes a proposed fix that I think would solve the issue in Proxmox as well. The proposal is from zkabelac at redhat.com to add an envvar "LVM_SUPPRESS_POOL_WARNINGS" which would suppress the four warnings that we're looking at. This was proposed here:
https://listman.redhat.com/archives/linux-lvm/2017-September/024332.html

Here are the mailing list threads:
https://listman.redhat.com/archives/linux-lvm/2016-April/023529.html
https://listman.redhat.com/archives/linux-lvm/2017-September/024323.html

Here is one bug tracker issue (unfortunately closed at the moment):
https://bugzilla.redhat.com/show_bug.cgi?id=1465974

I have made a posting to their mailing list to raise the topic again and see what the next steps should be. I see that they have begun using Github issues for bug tracking, so I think one course of action will be to open an issue explaining the use case and pointing to the proposed fix. Then everyone here on this forum thread and anyone in their mailing list can register support by liking that issue. I want to avoid brigading their list and issue tracker with "me too" replies since this is always counterproductive.
 
I have made a post to the LVM2 mailing list here:

https://listman.redhat.com/archives/linux-lvm/2022-May/026168.html

There is already a reply from a Qubes OS developer, representing that project, who concurs with the request. I am going to make a follow-up post asking where we can begin tracking interest (maybe a Github issue with likes) so that the list doesn't get brigaded with "+1" emails. Once I know where that project wants to track this, I'll let people know here so we can all register interest.

Also, regarding the LVM2 developer's suggestion shown in a previous post: there is already a similar suppress-warnings envvar in their source tree for a different set of specific warnings. This is good, because I think I can look at how they have implemented that feature and copy the approach to suppress the warnings we want to suppress. I can then submit that code as a PR, or however they receive code contributions. I think this will make the change as smooth as possible for LVM2 to accept.
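
For anyone curious what that looks like in practice: the existing variable in their tree is LVM_SUPPRESS_FD_WARNINGS (it hides the "File descriptor ... leaked" messages and is documented in lvm(8)), and if the proposed variable is accepted in the same form it would presumably be used the same way. This is only a sketch; LVM_SUPPRESS_POOL_WARNINGS does not exist yet as of this writing.

Code:
# existing suppress-warnings envvar in LVM2
LVM_SUPPRESS_FD_WARNINGS=1 lvs

# proposed analogue for the thin-pool overprovisioning warnings (not merged yet)
LVM_SUPPRESS_POOL_WARNINGS=1 lvs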
 
We use this command in a cron job on the host to monitor the total real used space of the thin pools:

Code:
pvesm status | grep lvmthin | awk '$7 >= 70 {print $1, $7}'

Only rows at or above the threshold (>= 70% used) are printed; "lvmthin" is the storage "type" reported by the command.
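
As a rough sketch of how this can be hooked up (the script path and schedule are only examples), the one-liner goes into a small script that produces output only when a pool crosses the threshold, so cron sends a mail only when there is something to report:

Code:
#!/bin/sh
# /usr/local/bin/check-thinpool.sh (example path)
# prints a line - and therefore triggers a cron mail - only when usage is >= 70%
pvesm status | grep lvmthin | awk '$7 >= 70 {print "thin pool", $1, "is at", $7}'

An /etc/cron.d entry such as "*/30 * * * * root /usr/local/bin/check-thinpool.sh" (again, just an example schedule) then runs it regularly.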
 
I have a similar problem.
The container's disk already uses 720 GB of a 1 TB SSD.

The backup log:

Code:
INFO: starting new backup job: vzdump 113 --quiet 1 --mode snapshot --notes-template '{{vmid}} {{guestname}} - {{node}}' --mailnotification always --storage PBS-Backup
INFO: Starting Backup of VM 113 (lxc)
INFO: Backup started at 2023-11-21 06:27:06
INFO: status = running
INFO: CT Name: IP051-nextcloud
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "snap_vm-113-disk-0_vzdump" created.
  WARNING: Sum of all thin volume sizes (<2.10 TiB) exceeds the size of thin pool SSD1TB/SSD1TB and the size of whole volume group (<953.87 GiB).
INFO: creating Proxmox Backup Server archive 'ct/113/2023-11-21T09:27:06Z'
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp3951348_113/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 113 --backup-time 1700558826 --repository admin@pbs@192.168.2.3:PBS2
INFO: Starting backup: ct/113/2023-11-21T09:27:06Z
INFO: Client name: pve
INFO: Starting backup protocol: Tue Nov 21 06:27:07 2023
INFO: Downloading previous manifest (Tue Nov 21 05:18:30 2023)
INFO: Upload config file '/var/tmp/vzdumptmp3951348_113/etc/vzdump/pct.conf' to 'admin@pbs@192.168.2.3:8007:PBS2' as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to 'admin@pbs@192.168.2.3:8007:PBS2' as root.pxar.didx
INFO: root.pxar: had to backup 248.303 MiB of 671.46 GiB (compressed 95.47 MiB) in 3717.39s
INFO: root.pxar: average backup speed: 68.397 KiB/s
INFO: root.pxar: backup was done incrementally, reused 671.218 GiB (100.0%)
INFO: Uploaded backup catalog (7.778 MiB)
INFO: Duration: 3719.79s
INFO: End Time: Tue Nov 21 07:29:07 2023
INFO: adding notes to backup
INFO: cleanup temporary 'vzdump' snapshot
  Logical volume "snap_vm-113-disk-0_vzdump" successfully removed.
INFO: Finished Backup of VM 113 (01:02:26)
INFO: Backup finished at 2023-11-21 07:29:32
INFO: Backup job finished successfully
TASK OK
 
