LXC Slow Backup (CentOS 7)

liberodark

Hi,

I am backing up to PBS 2.0 from PVE 7.0. I noticed a bug when backing up CentOS 7 containers, unlike Alma 8 or Rocky 8.

Example 1:

Code:
VMID   NAME        STATUS   TIME       SIZE       FILENAME
100    centos7-1   OK       00:23:35   444.46GB   ct/100/2021-10-18T01:00:02Z
101    centos7-2   OK       00:22:52   444.41GB   ct/101/2021-10-18T01:23:37Z
102    centos7-3   OK       00:22:52   444.41GB   ct/102/2021-10-18T01:46:29Z
103    centos7-4   OK       00:22:47   444.43GB   ct/103/2021-10-18T02:09:21Z
104    centos7-5   OK       00:34:54   488.18GB   ct/104/2021-10-18T02:32:08Z
105    centos7-6   OK       00:25:58   447.12GB   ct/105/2021-10-18T03:07:02Z
106    centos7-7   OK       00:23:42   446.11GB   ct/106/2021-10-18T03:33:00Z
114    alma8       OK       00:02:38   5.61GB     ct/114/2021-10-18T03:56:42Z
TOTAL                       02:59:18   3.09TB

Example 2:

Code:
100: 2021-10-18 03:00:02 INFO: Starting Backup of VM 100 (lxc)
100: 2021-10-18 03:00:02 INFO: status = running
100: 2021-10-18 03:00:02 INFO: backup mode: suspend
100: 2021-10-18 03:00:02 INFO: ionice priority: 0
100: 2021-10-18 03:00:02 INFO: CT Name: centos7-1
100: 2021-10-18 03:00:02 INFO: including mount point rootfs ('/') in backup
100: 2021-10-18 03:00:02 INFO: starting first sync /proc/1295/root/ to /var/tmp/vzdumptmp2630580_100/
100: 2021-10-18 03:07:49 INFO: first sync finished - transferred 477.23G bytes in 467s
100: 2021-10-18 03:07:49 INFO: suspending guest
100: 2021-10-18 03:07:49 INFO: starting final sync /proc/1295/root/ to /var/tmp/vzdumptmp2630580_100/
100: 2021-10-18 03:07:50 INFO: final sync finished - transferred 5.51M bytes in 1s
100: 2021-10-18 03:07:50 INFO: resuming guest
100: 2021-10-18 03:07:50 INFO: guest is online again after 1 seconds
100: 2021-10-18 03:07:50 INFO: creating Proxmox Backup Server archive 'ct/100/2021-10-18T01:00:02Z'
100: 2021-10-18 03:07:50 INFO: run: /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp2630580_100//etc/vzdump/pct.conf fw.conf:/var/tmp/vzdumptmp2630580_100//etc/vzdump/pct.fw root.pxar:/var/tmp/vzdumptmp2630580_100/ --include-dev /var/tmp/vzdumptmp2630580_100//. --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 100 --backup-time 1634518802 --repository root@pam@10.17.100.3:backup
100: 2021-10-18 03:07:50 INFO: Starting backup: ct/100/2021-10-18T01:00:02Z
100: 2021-10-18 03:07:50 INFO: Client name: FR-PVE
100: 2021-10-18 03:07:50 INFO: Starting backup protocol: Mon Oct 18 03:07:50 2021
100: 2021-10-18 03:07:50 INFO: Downloading previous manifest (Sun Oct 17 03:00:01 2021)
100: 2021-10-18 03:07:50 INFO: Upload config file '/var/tmp/vzdumptmp2630580_100//etc/vzdump/pct.conf' to 'root@pam@10.17.100.3:8007:backup' as pct.conf.blob
100: 2021-10-18 03:07:50 INFO: Upload config file '/var/tmp/vzdumptmp2630580_100//etc/vzdump/pct.fw' to 'root@pam@10.17.100.3:8007:backup' as fw.conf.blob
100: 2021-10-18 03:07:50 INFO: Upload directory '/var/tmp/vzdumptmp2630580_100/' to 'root@pam@10.17.100.3:8007:backup' as root.pxar.didx
100: 2021-10-18 03:23:34 INFO: root.pxar: had to backup 56.69 MiB of 444.46 GiB (compressed 8.59 MiB) in 944.18s
100: 2021-10-18 03:23:34 INFO: root.pxar: average backup speed: 61.48 KiB/s
100: 2021-10-18 03:23:34 INFO: root.pxar: backup was done incrementally, reused 444.41 GiB (100.0%)
100: 2021-10-18 03:23:34 INFO: Uploaded backup catalog (618.25 KiB)
100: 2021-10-18 03:23:34 INFO: Duration: 944.51s
100: 2021-10-18 03:23:34 INFO: End Time: Mon Oct 18 03:23:34 2021
100: 2021-10-18 03:23:37 INFO: Finished Backup of VM 100 (00:23:35)

All backups of the CentOS 7 CTs are around 444 GB, while the containers themselves weigh no more than 8 to 15 GB.
It really slows down the backup time.
Would you be able to correct this problem?

Best Regards
 
hi,

All backups of the CentOS 7 CTs are around 444 GB, while the containers themselves weigh no more than 8 to 15 GB.
can you show us your container configurations? cat /etc/pve/lxc/100.conf
 
Hi,

CT 100 :

Code:
arch: amd64
cores: 2
hostname: centos7-1.myhost.fr
memory: 2048
nameserver: 10.17.32.2
net0: name=eth0,bridge=vmbr0,gw=10.17.102.254,hwaddr=62:03:72:E8:DE:99,ip=10.17.102.57/24,tag=102,type=veth
ostype: centos
rootfs: netapp:100/vm-100-disk-0.raw,size=40G
searchdomain: myhost.fr
swap: 512

This CT uses exactly 1.51 GB.


Best Regards
 
rootfs: netapp:100/vm-100-disk-0.raw,size=40G
what kind of storage is netapp? (you can check the /etc/pve/storage.cfg file)

also, are you mounting anything inside the containers? for example an NFS share or similar? if you run df -h inside the CentOS container, what do you get? please post the output.

also please post this configuration file as well: /var/lib/lxc/100/config
 
Hi,


My storage is NFS on a NetApp:

Code:
nfs: netapp
        export /fr_l_prx_sata_001
        path /mnt/pve/netapp
        server 10.17.16.40
        content backup,rootdir,images
        prune-backups keep-last=5

I am not mounting anything in the CTs.

df -h in my CT 100:

Code:
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop0       40G  1.6G   36G   5% /
none            492K  4.0K  488K   1% /dev
tmpfs           7.8G     0  7.8G   0% /dev/shm
tmpfs           7.8G   41M  7.8G   1% /run
tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
tmpfs           205M     0  205M   0% /run/user/0

My CT 100 LXC config:

Code:
lxc.cgroup.relative = 0
lxc.cgroup.dir.monitor = lxc.monitor/100
lxc.cgroup.dir.container = lxc/100
lxc.cgroup.dir.container.inner = ns
lxc.arch = amd64
lxc.include = /usr/share/lxc/config/centos.common.conf
lxc.apparmor.profile = generated
lxc.apparmor.raw = deny mount -> /proc/,
lxc.apparmor.raw = deny mount -> /sys/,
lxc.monitor.unshare = 1
lxc.tty.max = 2
lxc.environment = TERM=linux
lxc.uts.name = centos7-1.myhost.fr
lxc.cgroup.memory.limit_in_bytes = 2147483648
lxc.cgroup.memory.memsw.limit_in_bytes = 2684354560
lxc.cgroup.cpu.shares = 1024
lxc.rootfs.path = /var/lib/lxc/100/rootfs
lxc.net.0.type = veth
lxc.net.0.veth.pair = veth100i0
lxc.net.0.hwaddr = 62:03:72:E8:DE:99
lxc.net.0.name = eth0
lxc.net.0.script.up = /usr/share/lxc/lxcnetaddbr
lxc.cgroup.cpuset.cpus =


Best Regards
 
Couldn't large sparse files inside those CTs explain it? I had similar issues with AD domain members having huge sparse /var/log/lastlog files.
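
One way to check that hypothesis (a minimal sketch, assuming GNU findutils; %S is find's allocated-to-apparent size ratio, which is well below 1.0 for sparse files):

Code:
# list large files that are mostly holes
find / -xdev -type f -size +100M -printf '%S\t%s\t%p\n' 2>/dev/null | awk '$1 < 0.5'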
 
That's a strange bug:

In CT 100 :
Code:
ls -lha /var/log/lastlog
-rw-r--r-- 1 root root 444G Oct 18 19:13 /var/log/lastlog

All CTs are connected to AD.

@danielb Do you have an idea to fix that?

Did you add an exception:
Code:
--exclude=/var/log/lastlog

Best Regards
 
Last edited:
Yep, I have excluded /var/log/lastlog in vzdump, but I think this needs a proper fix.
This is only a workaround.
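
For reference, a sketch of where such an exclusion can live on the PVE node, using vzdump's standard exclude-path option (VMID 100 taken from this thread):

Code:
# per job, on the command line:
vzdump 100 --exclude-path /var/log/lastlog

# or as a node-wide default in /etc/vzdump.conf:
exclude-path: /var/log/lastlog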
 
Yep, I have excluded /var/log/lastlog in vzdump, but I think this needs a proper fix.
This is only a workaround.
since the /var/log/lastlog file is sparse, it doesn't actually use the amount of space reported by ls. to see the real size it occupies, you can use du -hs /var/log/lastlog.
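
Illustrated on the file from this thread (a minimal sketch; the on-disk figure will vary):

Code:
du -h --apparent-size /var/log/lastlog   # ~444G, the size ls -lh reports
du -h /var/log/lastlog                   # blocks actually allocated, likely only a few MB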

also, the PBS interface shows the backup as having a size >400G, but in reality it should be taking much less space on the disk itself; you can check it on your PBS node.
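
For example (a sketch; the datastore path /mnt/datastore/backup is an assumption, substitute your own):

Code:
# on the PBS node: actual space used by the deduplicated chunk store
du -sh /mnt/datastore/backup/.chunks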
 
Hi,


That is not a problem of the storage used on PBS.
PBS deduplicates /var/log/lastlog, so it doesn't take space on PBS.
But the backup takes a very long time: 3 hours now, versus 22 minutes without /var/log/lastlog.

Best Regards
 
you can try rotating the lastlog to make it smaller, then your backups will get faster. or just exclude it from the backups as discussed.

hope this helps
 
Lastlog can't be rotated, and it's a binary file. If it's removed, it'll be recreated with the same size (and still sparse). IMHO the only reasonable thing to do is to exclude it from backups.
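
As a back-of-the-envelope check (an assumption about how lastlog works, not something confirmed in this thread): the file is a sparse array with one fixed-size record per UID, so its apparent size tracks the highest UID that has ever logged in:

Code:
# struct lastlog is 292 bytes on 64-bit glibc (assumed), so a 444 GiB
# apparent size implies a highest UID of roughly:
echo $(( 444 * 1024**3 / 292 ))   # ~1.6 billion, plausible for SSSD/AD ID mapping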
 
