Some container backups are hundreds of GBs

completion

I just installed PBS and ran my first backup. For some of my CTs the backup size is a few GBs, which is what I expect. For others, though, it is a couple hundred GBs. That should be impossible because the root disk is only about 8 GB (no other disks). PBS seems to be deduplicating the data away, since my dedup factor is at 130 and the datastore only has a few GBs of space used. So that part is fine, but the real problem is that hundreds of GBs are being transferred across my network, and each of the problem CTs takes a while to back up.


Is this supposed to be happening? Where is the 200 GB of data even coming from?
 
can you post the container config from pve?

(pct config 101)

also the log from the backup would be interesting...
 
Code:
arch: amd64
cores: 2
hostname: ansible
memory: 2048
net0: name=eth0,bridge=vmbr0,hwaddr=2e:bf:5c:6f:03:c7,ip=dhcp,tag=17,type=veth
onboot: 0
ostype: ubuntu
rootfs: local-zfs:subvol-101-disk-0,size=16G
startup: up=10
swap: 0
unprivileged: 1
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536
lxc.idmap: u 832000000 1000 1
lxc.idmap: g 832000000 1000 1
lxc.idmap: u 832000001 832000001 199999
lxc.idmap: g 832000001 832000001 199999

Code:
2020-11-11 10:59:25 INFO: Starting Backup of VM 101 (lxc)
2020-11-11 10:59:25 INFO: status = running
2020-11-11 10:59:25 INFO: CT Name: ansible
2020-11-11 10:59:25 INFO: including mount point rootfs ('/') in backup
2020-11-11 10:59:25 INFO: backup mode: snapshot
2020-11-11 10:59:25 INFO: ionice priority: 7
2020-11-11 10:59:25 INFO: create storage snapshot 'vzdump'
2020-11-11 10:59:25 INFO: creating Proxmox Backup Server archive 'ct/101/2020-11-11T20:59:25Z'
2020-11-11 10:59:25 INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -m u:832000000:1000:1 -m g:832000000:1000:1 -m u:832000001:832000001:199999 -m g:832000001:832000001:199999 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp174552_101/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --backup-type ct --backup-id 101 --backup-time 1605128365 --repository wraith@pbs@pve.net.cater.pw:backups
2020-11-11 10:59:25 INFO: Starting backup: ct/101/2020-11-11T20:59:25Z
2020-11-11 10:59:25 INFO: Client name: pve
2020-11-11 10:59:25 INFO: Starting backup protocol: Wed Nov 11 10:59:25 2020
2020-11-11 10:59:25 INFO: Upload config file '/var/tmp/vzdumptmp174552_101/etc/vzdump/pct.conf' to 'comp@pbs@pve.net.cater.pw:8007:backups' as pct.conf.blob
2020-11-11 10:59:25 INFO: Upload directory '/mnt/vzsnap0' to 'comp@pbs@pve.net.cater.pw:8007:backups' as root.pxar.didx
2020-11-11 11:20:45 INFO: root.pxar: had to upload 1.33 GiB of 227.61 GiB in 1279.78s, average speed 1.06 MiB/s).
2020-11-11 11:20:45 INFO: root.pxar: backup was done incrementally, reused 226.28 GiB (99.4%)
2020-11-11 11:20:45 INFO: Uploaded backup catalog (1.18 MiB)
2020-11-11 11:20:45 INFO: Duration: 1279.83s
2020-11-11 11:20:45 INFO: End Time: Wed Nov 11 11:20:45 2020
2020-11-11 11:20:46 INFO: remove vzdump snapshot
2020-11-11 11:20:46 INFO: Finished Backup of VM 101 (00:21:21)
Huh, it only had to transfer 1.33 GiB, but I still have no idea where it's pulling the 227 GiB from.
I thought it was transferring the whole 227 GiB based on how long it took, but it's just really slow.
 
Okay, after ignoring this problem for way too long, I finally tried to troubleshoot it some more. It turns out the file /var/log/lastlog is the problem. When I restore it and use ls to look at its size, it's 200 GB+, but with du it's only 12 KB. When I try to open the file, cat and the like just hang.
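For anyone who wants to see the ls-vs-du discrepancy for themselves, a sparse file reproduces it exactly (a quick sketch; /tmp/sparse_demo is just a scratch name):

```shell
# Create a 200G *apparent* size without allocating any blocks
truncate -s 200G /tmp/sparse_demo

ls -lh /tmp/sparse_demo   # shows 200G -- the apparent size
du -h /tmp/sparse_demo    # shows 0 -- actual blocks on disk
ls -sh /tmp/sparse_demo   # -s shows the real allocated size, like du

rm /tmp/sparse_demo
```

A backup tool that reads the file byte-by-byte instead of detecting the holes will see all 200 GB of zeros, which is exactly what happens with lastlog here.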

My current solution for excluding lastlog:

add --exclude-path /var/log/lastlog to the end of the vzdump command in /etc/cron.d/vzdump
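For reference, the edited cron line ends up looking something like this (the schedule, storage name, and other options here are just placeholders from my setup; only the --exclude-path flag is the actual change):

```shell
# /etc/cron.d/vzdump -- nightly backup with lastlog excluded
30 2 * * * root vzdump --all --mode snapshot --storage pbs --exclude-path /var/log/lastlog
```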
 
when it walks like a loop and quacks like a loop, it's probably a cat in a loop :)
 
Here is some more information in case anyone else has this problem.

from lastlog's manpage:
The lastlog file is a database which contains info on the last login of each user. You should not rotate it. It is a sparse file, so its size on the disk is usually much smaller than the one shown by "ls -l" (which can indicate a really big file if you have in passwd users with a high UID). You can display its real size with "ls -s".
I use FreeIPA, which by default assigns user UIDs on the order of 1e8 to 1e9. I think that counts as "high", and it does indeed make lastlog appear enormous if you do not treat it as a sparse file.
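As a sanity check, the apparent size lines up with the UID arithmetic. lastlog is indexed by UID with one fixed-size record per UID; on x86_64 glibc that record is 4 + 32 + 256 = 292 bytes (an assumption here, the size varies by platform):

```python
# One lastlog record on x86_64 glibc (assumed):
# 4-byte ll_time + 32-byte ll_line + 256-byte ll_host = 292 bytes
RECORD_SIZE = 4 + 32 + 256

def lastlog_apparent_size(highest_uid: int) -> int:
    """Apparent (non-sparse) size of /var/log/lastlog for a given top UID."""
    return (highest_uid + 1) * RECORD_SIZE

# The FreeIPA-style UID mapped in the container config above:
size = lastlog_apparent_size(832_000_001)
print(f"{size / 2**30:.2f} GiB")  # roughly the ~227 GiB the backup log reported
```

So a single user with UID 832000001 is enough to push the apparent file size past 226 GiB, which matches the 227.61 GiB in the backup log almost exactly.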

Here are some related links of other users having similar problems.

The fix I described in my previous post is working like a charm. Annoyingly, it only affects the automatic backups done via the cronjob, so I have to remember to exclude lastlog whenever I run a manual backup. However, I am only doing manual backups to test this issue, so once I stop poking at it that will not be a problem.

Two alternatives to my fix:
- clear lastlog: echo > /var/log/lastlog
- symlink lastlog to /dev/null: ln -sfn /dev/null /var/log/lastlog
 
