[SOLVED] pve-root directory run out of space: Backup Purging, Backup Retention, Data Storage Configurations

jravin

New Member
Jun 21, 2022
My pve-root directory is out of space. I am using Proxmox Virtual Environment 6.4-13 (no license/subscription).

To save posts, I have tried to show my system settings and situation up front. I also realize that I am asking multiple questions in one thread; my reasoning is that these questions are all related to one another. If a moderator or admin would prefer I break this up, please let me know and I can do so.

I would prefer to use the GUI as much as possible for the solutions, as I tend to believe these types of edits survive upgrades and the like. If there is a CLI-only type of solution, I am game, just hesitant to create future problems and additional maintenance.


Code:
abc@xyz:/var/lib/vz# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  189G     0  189G   0% /dev
tmpfs                  38G  434M   38G   2% /run
/dev/mapper/pve-root   94G   94G     0 100% /
tmpfs                 189G   37M  189G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 189G     0  189G   0% /sys/fs/cgroup
/dev/fuse              30M   20K   30M   1% /etc/pve
tmpfs                  38G     0   38G   0% /run/user/0

I read, "pve-root directory runs out of space: Why and What should I do?" and this is along the same lines as one of the questions I have, however, I seem to be using LVM-Thin, which negates part of the provided solution (unless I am mistaken).

Using the 'du' command I can track down where heavy files are located.

Code:
abc@xyz:/# du -h --max-depth=1
[...edited...]
92G     ./var
[...edited...]

I cd into /var and repeat the du command until I end up at /var/lib/vz.

Code:
abc@xyz:/var/lib/vz# du -h --max-depth=1
4.0K    ./images
80G     ./dump
12G     ./template
91G     .

Inside of the ./dump directory are a bunch of .zst and .log files. The .zst files are each a couple gigabytes (2.5GB average). Reading this forum, I can see these .zst files are backup files, and the associated logs are ... logs related to each backup. Great! Now for Question 1...

QUESTION 1
How do I set Proxmox VE to automatically purge older backups? I would like to keep 1 backup per day for the past 3 days, 1 backup per week for the past 4 weeks, then 1 backup per month for the past 2 months, and everything else can be removed. This would be a total of 9 backups per vm.

QUESTION 2
I can see, with both the GUI (Node :: Disks :: LVM-Thin) and command line (lsblk) that there is a pve-data location, and from that same GUI location I can see that it has 73+GB of data in it, but I cannot see what actually is in there. I go through various GUI menus trying to find references to this, and I have so far been unable to find any references. How can I access and use the pve-data LVM?

Code:
abc@xyz:/etc# lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                            8:0    0  3.3T  0 disk
├─sda1                         8:1    0 1007K  0 part
├─sda2                         8:2    0  512M  0 part
└─sda3                         8:3    0  3.3T  0 part
  ├─pve-swap                 253:0    0    8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0   96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0 15.8G  0 lvm
  │ └─pve-data-tpool         253:4    0  3.1T  0 lvm
  │   ├─pve-data             253:5    0  3.1T  0 lvm
  │   ├─pve-vm--107--disk--0 253:6    0   20G  0 lvm
  │   ├─pve-vm--102--disk--0 253:7    0   20G  0 lvm
  │   ├─pve-vm--103--disk--0 253:8    0   16G  0 lvm
  │   ├─pve-vm--100--disk--0 253:9    0   32G  0 lvm
  │   ├─pve-vm--104--disk--0 253:10   0   32G  0 lvm
  │   ├─pve-vm--105--disk--0 253:11   0   64G  0 lvm
  │   └─pve-vm--106--disk--0 253:12   0   32G  0 lvm
  └─pve-data_tdata           253:3    0  3.1T  0 lvm
    └─pve-data-tpool         253:4    0  3.1T  0 lvm
      ├─pve-data             253:5    0  3.1T  0 lvm
      ├─pve-vm--107--disk--0 253:6    0   20G  0 lvm
      ├─pve-vm--102--disk--0 253:7    0   20G  0 lvm
      ├─pve-vm--103--disk--0 253:8    0   16G  0 lvm
      ├─pve-vm--100--disk--0 253:9    0   32G  0 lvm
      ├─pve-vm--104--disk--0 253:10   0   32G  0 lvm
      ├─pve-vm--105--disk--0 253:11   0   64G  0 lvm
      └─pve-vm--106--disk--0 253:12   0   32G  0 lvm
sdb                            8:16   0  3.3T  0 disk
└─sdb1                         8:17   0  3.3T  0 part /mnt/archive

Question 2 may be rendered a little moot by Question 3. If Q3 makes Q2 moot, it will spawn another thread where I will ask how to expand the pve-root size and shrink the pve-data size.

QUESTION 3
How can I move the backup location from pve-root to a separate mount point, for instance /mnt/archive ? This is related to the automatic purging and backup configurations from Question 1.

BONUS QUESTION
Can I live without the /var/lib/vz/template directory? It seems to be taking up a lot of space.

Thank you in advance!
 
I would prefer to use the GUI as much as possible for the solutions, as I tend to believe these types of edits survive upgrades and the like. If there is a CLI-only type of solution, I am game, just hesitant to create future problems and additional maintenance.
Proxmox VE is a full Linux OS, not an appliance. Doing an upgrade won't undo changes you made through the CLI. Indeed, the PVE webUI is fairly basic, and you will quite often need to use the CLI.
I read, "pve-root directory runs out of space: Why and What should I do?" and this is along the same lines as one of the questions I have, however, I seem to be using LVM-Thin, which negates part of the provided solution (unless I am mistaken).
94GB should be plenty of space. Making your root filesystem bigger won't help you much, it just buys you some more time. You should find out what is using all that space. Just PVE alone shouldn't use much more than 10-20GB.
Using the 'du' command I can track down where heavy files are located.

Code:
abc@xyz:/# du -h --max-depth=1
[...edited...]
92G     ./var
[...edited...]

I cd into /var and repeat the du command until I end up at /var/lib/vz.

Code:
abc@xyz:/var/lib/vz# du -h --max-depth=1
4.0K    ./images
80G     ./dump
12G     ./template
91G     .

Inside of the ./dump directory are a bunch of .zst and .log files. The .zst files are each a couple gigabytes (2.5GB average). Reading this forum, I can see these .zst files are backup files, and the associated logs are ... logs related to each backup. Great! Now for Question 1...

QUESTION 1
How do I set Proxmox VE to automatically purge older backups? I would like to keep 1 backup per day for the past 3 days, 1 backup per week for the past 4 weeks, then 1 backup per month for the past 2 months, and everything else can be removed. This would be a total of 9 backups per vm.
Are you sure you want to use vzdump for that? Vzdump can't store incremental backups, so backing up 9 copies of a 10GB VM will consume up to 90GB of space (or even 100GB, because 10 backups temporarily need to be stored). So with just around 70GB of space for backups on your root filesystem, you would be limited to around 7GB of VMs. You also told us that you are using LVM-Thin for your VMs. So you are storing the backups of your VMs on the same disk those VMs run on? That would be bad, because when your disk dies you would lose all your VMs and all of your backups.
I would really recommend getting a dedicated disk for your backups with enough capacity. And if you want to keep 9 backups of each VM, you might want to have a look at the Proxmox Backup Server (PBS). PBS can deduplicate your backups, so 9 copies of a 10GB VM might need something like 12GB rather than the full 90GB.

But to your question:
Go to "Datastore -> Backups -> YourBackupTask -> Edit". there is a "backup retention" tab where you can set "keep daily=3, keep weekly=4, keep-monthly=2".

You might also have a look at the pruning simulator: https://pbs.proxmox.com/docs/prune-simulator/
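
If you prefer the CLI, the same retention should also be settable on the storage itself with pvesm; the storage ID "local" below is just an example, and the result is a prune-backups line in /etc/pve/storage.cfg:

Code:
pvesm set local --prune-backups keep-daily=3,keep-weekly=4,keep-monthly=2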
QUESTION 2
I can see, with both the GUI (Node :: Disks :: LVM-Thin) and command line (lsblk) that there is a pve-data location, and from that same GUI location I can see that it has 73+GB of data in it, but I cannot see what actually is in there. I go through various GUI menus trying to find references to this, and I have so far been unable to find any references. How can I access and use the pve-data LVM?

Code:
abc@xyz:/etc# lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                            8:0    0  3.3T  0 disk
├─sda1                         8:1    0 1007K  0 part
├─sda2                         8:2    0  512M  0 part
└─sda3                         8:3    0  3.3T  0 part
  ├─pve-swap                 253:0    0    8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0   96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0 15.8G  0 lvm
  │ └─pve-data-tpool         253:4    0  3.1T  0 lvm
  │   ├─pve-data             253:5    0  3.1T  0 lvm
  │   ├─pve-vm--107--disk--0 253:6    0   20G  0 lvm
  │   ├─pve-vm--102--disk--0 253:7    0   20G  0 lvm
  │   ├─pve-vm--103--disk--0 253:8    0   16G  0 lvm
  │   ├─pve-vm--100--disk--0 253:9    0   32G  0 lvm
  │   ├─pve-vm--104--disk--0 253:10   0   32G  0 lvm
  │   ├─pve-vm--105--disk--0 253:11   0   64G  0 lvm
  │   └─pve-vm--106--disk--0 253:12   0   32G  0 lvm
  └─pve-data_tdata           253:3    0  3.1T  0 lvm
    └─pve-data-tpool         253:4    0  3.1T  0 lvm
      ├─pve-data             253:5    0  3.1T  0 lvm
      ├─pve-vm--107--disk--0 253:6    0   20G  0 lvm
      ├─pve-vm--102--disk--0 253:7    0   20G  0 lvm
      ├─pve-vm--103--disk--0 253:8    0   16G  0 lvm
      ├─pve-vm--100--disk--0 253:9    0   32G  0 lvm
      ├─pve-vm--104--disk--0 253:10   0   32G  0 lvm
      ├─pve-vm--105--disk--0 253:11   0   64G  0 lvm
      └─pve-vm--106--disk--0 253:12   0   32G  0 lvm
sdb                            8:16   0  3.3T  0 disk
└─sdb1                         8:17   0  3.3T  0 part /mnt/archive

Question 2 may be rendered a little moot by Question 3. If Q3 makes Q2 moot, it will spawn another thread where I will ask how to expand the pve-root size and shrink the pve-data size.
pve-data is your LVM-Thin pool. You can't see any files there because there are no files: LVM-Thin is block storage, so there is no filesystem. The 73GB of data there is your VMs'/LXCs' virtual disks.
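
If you want to see what is allocated inside the pool, lvs lists the thin volumes (assuming the default "pve" VG):

Code:
lvs pve   # the vm-<vmid>-disk-<n> LVs are the guest disks carved out of the pool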
QUESTION 3
How can I move the backup location from pve-root to a separate mount point, for instance /mnt/archive ? This is related to the automatic purging and backup configurations from Question 1.
You create a new storage of type "Directory" pointing to "/mnt/archive" and select "VZDump backup file" as the content type. Also make sure to run pvesm set YourStorageID --is_mountpoint=yes after creating it.
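
As a rough CLI sketch of the same thing (assuming the storage ID "archive"):

Code:
pvesm add dir archive --path /mnt/archive --content backup
pvesm set archive --is_mountpoint=yes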
BONUS QUESTION
Can I live without the /var/lib/vz/template directory? It seems to be taking up a lot of space.

Thank you in advance!
That's where your LXC templates are stored. You could store them somewhere else by creating a new directory storage like I described before, just with "Container template" as the content type instead.
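
A hedged sketch of that, with a hypothetical path and storage ID:

Code:
pvesm add dir templates --path /mnt/archive/templates --content vztmpl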
 
Thank you for the quick and insightful response! I am replying inline...

Proxmox VE is a full Linux OS, not an appliance. Doing an upgrade won't undo changes you made through the CLI. Indeed, the PVE webUI is fairly basic, and you will quite often need to use the CLI.

I see that, but I have been burned too many times by GUIs doing their own thing, too often overwriting CLI-made configs. I will cautiously take your words to heart...

94GB should be plenty of space. Making your root filesystem bigger won't help you much, it just buys you some more time. You should find out what is using all that space. Just PVE alone shouldn't use much more than 10-20GB.

It was the .zst and .log files that were filling that space. I deleted them all, and now there is some breathing room; the poor system is no longer warning me about not having space to do just about anything.
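
For the record, the cleanup itself was nothing fancy; something like the following, after checking with ls that none of the dumps were still needed:

Code:
abc@xyz:/var/lib/vz/dump# rm vzdump-*.zst vzdump-*.log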

Code:
abc@xyz:/etc# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  189G     0  189G   0% /dev
tmpfs                  38G  522M   38G   2% /run
/dev/mapper/pve-root   94G   15G   75G  17% /
tmpfs                 189G   40M  189G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 189G     0  189G   0% /sys/fs/cgroup
/dev/fuse              30M   20K   30M   1% /etc/pve
tmpfs                  38G     0   38G   0% /run/user/0
/dev/sdb1             3.3T   89M  3.1T   1% /mnt/archive

Using the GUI, when I go to "Datacenter :: Node :: Disks :: LVM" it shows /dev/sda3 with 100% usage, even though the above disk free shows 17% used. I have tried the "reload" button, but it still shows 100%.

How do I reset this, and/or when does it automatically reset itself?

Are you sure you want to use vzdump for that? Vzdump can't store incremental backups, so backing up 9 copies of a 10GB VM will consume up to 90GB of space (or even 100GB, because 10 backups temporarily need to be stored). So with just around 70GB of space for backups on your root filesystem, you would be limited to around 7GB of VMs. You also told us that you are using LVM-Thin for your VMs. So you are storing the backups of your VMs on the same disk those VMs run on? That would be bad, because when your disk dies you would lose all your VMs and all of your backups.
I would really recommend getting a dedicated disk for your backups with enough capacity. And if you want to keep 9 backups of each VM, you might want to have a look at the Proxmox Backup Server (PBS). PBS can deduplicate your backups, so 9 copies of a 10GB VM might need something like 12GB rather than the full 90GB.

I have a lot of storage. In the above disk free (df) command, the last line shows 3.1T just sitting there. I also have a few other network attached storage servers just staying on to get updates. I get that I do not want to be wasteful, but I can afford to do full backups right now.

But to your question:
Go to "Datastore -> Backups -> YourBackupTask -> Edit". there is a "backup retention" tab where you can set "keep daily=3, keep weekly=4, keep-monthly=2".

When I go to "Datacenter :: Backup" it is empty.

I do not have a "Datastore", that I know of. When I ask the oracle about this, it just points me to Proxmox Backup Server articles. To my knowledge, I do not have Proxmox Backup Server.

AH! I was responding down below and jumped back up here. I did find a "Backup Retention" tab while making the new Directory storage, per your instructions below. Awesome! For me, it is at "Datacenter :: Storage :: select-an-ID :: Edit :: Backup Retention tab :: set retention config". Maybe this is different from what you were saying, but it is very similar.

You might also have a look at the pruning simulator: https://pbs.proxmox.com/docs/prune-simulator/

Thank you. Yes, I have looked at this, and I cringe nearly every time I see it.

pve-data is your LVM-Thin pool. You can't see any files there because there are no files: LVM-Thin is block storage, so there is no filesystem. The 73GB of data there is your VMs'/LXCs' virtual disks.

Awesome! Good to know!

You create a new storage of type "Directory" pointing to "/mnt/archive" and select "VZDump backup file" as the content type. Also make sure to run pvesm set YourStorageID --is_mountpoint=yes after creating it.

** I jumped up in this thread to respond to your Datastore -> ... Backup Retention tab guidance.

As to creating the new storage: I have added /mnt/archive as a Directory, set the VZDump backup content type, and set the backup retention options. It would be nice if PVE ran that pvesm command for me, or at least gave me a pop-up to do so after creating it. Without you mentioning this to me, I would not have even guessed to do this. Thank you!

Code:
abc@xyz:/var/lib/vz# pvesm set archive --is_mountpoint=yes

That's where your LXC templates are stored. You could store them somewhere else by creating a new directory storage like I described before, just with "Container template" as the content type instead.

I will leave them there for now, given I just created the archive Directory for vzdump usage. This should keep pve-root from filling up with backups again.

Dunuin, thank you very much for your time and knowledge.
 
Uh, all my VMs are saying they cannot do anything because "local-lvm" does not exist. Any ideas?

I have only made the changes from this thread and the "pve-root directory runs out of space: Why and What should I do?" thread.

EDIT:

I just freaked out a wee bit, as all my VMs began to thrash in the ether and appeared to be dying. The "storage 'local-lvm' does not exists (backup at least)" post, specifically the OP's and Chris' great posts, allowed me to fix my storage.cfg file. Why was it wiped to begin with?! I did not manually destroy this file, yet it was obviously purged in some manner.

I even did a rookie gamble and rebooted the entire PVE server. This was bundles of fun, given how long it takes for this server to go from running for a year or so to being bounced (rebooted). Time stretched in my office while this was happening. Time dilation. It was no bueno.

This is what it looked like when everything was dying:

Code:
abc@xyz:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup,images
        prune-backups keep-daily=7,keep-last=14,keep-monthly=1
        shared 0

dir: archive
        path /mnt/archive/
        content images,backup
        is_mountpoint yes
        prune-backups keep-daily=3,keep-monthly=2,keep-weekly=4
        shared 0

Based on Chris' post in the "storage 'local-lvm' does not exists (backup at least)" thread, I added to it as follows:

Code:
abc@xyz:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup,images
        prune-backups keep-daily=7,keep-last=14,keep-monthly=1
        shared 0

dir: archive
        path /mnt/archive/
        content images,backup
        is_mountpoint yes
        prune-backups keep-daily=3,keep-monthly=2,keep-weekly=4
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes name_of_my_server_node

I was so concerned about the CLI messing with things, and yet it seems the GUI somehow did something nefarious right under my nose. Perhaps it was one of the commands I manually ran? This was a very unpleasant feeling, and even now I am curious what else may have been misconfigured unbeknownst to me.

I went through the same steps as the OP in the above mentioned thread:

Code:
abc@xyz:~# vgs
  VG  #PV #LV #SN Attr   VSize VFree
  pve   1  10   0 wz--n- 3.27t <16.38g

then:

Code:
abc@xyz:~# lvs
  LV            VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- <3.13t             2.16   0.37
  root          pve -wi-ao---- 96.00g
  swap          pve -wi-ao----  8.00g
  vm-100-disk-0 pve Vwi-a-tz-- 32.00g data        25.36
  vm-102-disk-0 pve Vwi-aotz-- 20.00g data        80.11
  vm-103-disk-0 pve Vwi-aotz-- 16.00g data        20.13
  vm-104-disk-0 pve Vwi-a-tz-- 32.00g data        45.95
  vm-105-disk-0 pve Vwi-a-tz-- 64.00g data        5.23
  vm-106-disk-0 pve Vwi-aotz-- 32.00g data        23.89
  vm-107-disk-0 pve Vwi-a-tz-- 20.00g data        80.45

And once I made the changes, to verify I could get a similar result as the OP, I ran the following (randomly picking one of my VMs, 102):

Code:
abc@xyz:~# vzdump 102 --compress lzo --mode snapshot --storage local --node name_of_my_server_node

That successfully ran!
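
With the new archive storage in place, I assume the equivalent test pointed at it would just be:

Code:
abc@xyz:~# vzdump 102 --compress zstd --mode snapshot --storage archive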

As I mentioned earlier, I bounced my server, and since the config was messed up, none of the VMs started automatically. They all showed the same error, claiming local-lvm did not exist. Because it did not exist. So I manually started each of my VMs, and they fired up happy as a bumble bee in a pile of pollen. One by one, they all came back online.
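
For anyone curious, starting them from the CLI is just qm start per VMID; these are my IDs from the lvs output above:

Code:
abc@xyz:~# for id in 100 102 103 104 105 106 107; do qm start $id; done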

WTH?
 
Using the GUI, when I go to "Datacenter :: Node :: Disks :: LVM" it shows /dev/sda3 with 100% usage, even though the above disk free shows 17% used. I have tried the "reload" button, but it still shows 100%.

How do I reset this, and/or when does it automatically reset itself?
That's normal. The "LVM" view only shows how much of your LVM is allocated. You allocated 100% of that VG to either the LV used as the root filesystem (storage name "local") or the LV that stores your LVM-Thin pool (storage name "local-lvm"). If you want to know how full your root filesystem ("local") is, have a look at your node's summary page. And to keep an eye on the LVM-Thin pool, look at the summary page of your LVM-Thin storage ("local-lvm").
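
If you want to see the difference from the CLI, compare the LVM allocation with the actual usage; a quick sketch assuming the default "pve" volume group:

Code:
vgs pve       # VFree near zero just means the VG is fully allocated to LVs
lvs pve/data  # the Data% column shows real usage inside the thin pool
df -h /       # real usage of the root filesystem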
When I go to "Datacenter :: Backup" it is empty.

I do not have a "Datastore", that I know of. When I ask the oracle about this, it just points me to Proxmox Backup Server articles. To my knowledge, I do not have Proxmox Backup Server.
You're right, my fault. I meant: Go to "Datacenter -> Backups -> YourBackupTask -> Edit".
It would be nice if PVE ran that pvesm command for me, or at least gave me a pop-up to do so after creating it. Without you mentioning this to me, I would not have even guessed to do this. Thank you!
Jep. That option is needed when your directory storage points at a mountpoint. If you don't run that command, PVE can run into problems. One problem: PVE will create subfolders there, and then mounting something at that mountpoint will fail because it already contains folders. Another: PVE won't wait until that mountpoint is actually mounted, so everything written to the storage ends up on your root filesystem instead of on the disk you wanted mounted there, and your root filesystem is at 100% again.
And this is one of the cases where an important step can only be done using the CLI. I have complained multiple times that this should be added to the GUI, because most people don't know that this command exists, nor that it is needed. So they run into the problems mentioned above, start a new thread here, and someone has to tell them to run that command. I hoped that adding a "mountpoint" checkbox to the add-directory-storage dialog could help: people might google what the option means, and fewer would run into those problems in the first place.
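
In the meantime, one way to sanity-check the mount yourself; mountpoint is part of util-linux and reports whether anything is actually mounted at a path:

Code:
mountpoint /mnt/archive   # prints "... is a mountpoint" or "... is not a mountpoint"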
 
