container backup using much more space after update

CanisLupus

Member
Aug 27, 2020
Hello,

I've been using PBS for a few weeks now and am very happy with its features. Especially the deduplication is great. It has worked well for over a month. I'm doing daily backups and set the retention policy to 5-7-8-24 (last-daily-weekly-monthly).
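For context, that retention maps onto PBS's prune options roughly like this (a sketch only — the group name ct/122 and the repository string are placeholders from my setup, and I'm going from memory on the exact flags):

```shell
# Sketch: apply the 5-7-8-24 retention to one container's backup group.
# 'ct/122' and the repository are placeholders; substitute your own.
proxmox-backup-client prune ct/122 \
    --keep-last 5 --keep-daily 7 --keep-weekly 8 --keep-monthly 24 \
    --repository backupuser@pbs@10.X.X.X:mp0
```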

The PBS server and client were last updated to version 0.8.11 three days ago.

Since then, backups have started to use much more space and take longer, as much less of the existing backup data is being reused (from 98.8% down to only 48.9%, see attached example logs). There was no apparent reason (from the container side) for this behaviour, as the backed-up container didn't have a major update or any noticeable change in usage. The main point is that this low reuse rate is not a one-off: it has shown up in every backup since then. I would have expected it to show up once, with the following backups returning to a better reuse rate. And it doesn't only affect the copied data; it also seems to use more space on the PBS itself, as I can clearly see a steeper increase in used storage than before.

This behaviour shows up on all containers, with varying degrees of severity (all were above a 90% reuse rate before; now they range from about 20% to 70%). Even stopped containers show this behaviour. VMs don't seem to be affected.

Is this a known behaviour after the recent PBS update? Has anyone else experienced this? Is there anything I can do about it? Can I provide further logs for investigation?

Unbenannt.png

BEFORE:
Code:
122: 2020-08-24 01:01:32 INFO: Starting Backup of VM 122 (lxc)
122: 2020-08-24 01:01:32 INFO: status = running
122: 2020-08-24 01:01:32 INFO: CT Name: nextcloud
122: 2020-08-24 01:01:32 INFO: including mount point rootfs ('/') in backup
122: 2020-08-24 01:01:32 INFO: including mount point mp0 ('/mnt/mp0') in backup
122: 2020-08-24 01:01:32 INFO: backup mode: snapshot
122: 2020-08-24 01:01:32 INFO: ionice priority: 7
122: 2020-08-24 01:01:32 INFO: suspend vm to make snapshot
122: 2020-08-24 01:01:32 INFO: create storage snapshot 'vzdump'
122: 2020-08-24 01:01:33 INFO: resume vm
122: 2020-08-24 01:01:33 INFO: guest is online again after 1 seconds
122: 2020-08-24 01:01:33 INFO: creating Proxmox Backup Server archive 'ct/122/2020-08-23T23:01:32Z'
122: 2020-08-24 01:01:33 INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp29006/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --include-dev /mnt/vzsnap0/./mnt/mp0 --skip-lost-and-found --backup-type ct --backup-id 122 --backup-time 1598223692 --repository backupuser@pbs@10.X.X.X:mp0
122: 2020-08-24 01:01:33 INFO: Starting backup: ct/122/2020-08-23T23:01:32Z
122: 2020-08-24 01:01:33 INFO: Client name: pve1
122: 2020-08-24 01:01:33 INFO: Starting protocol: 2020-08-24T01:01:33+02:00
122: 2020-08-24 01:01:33 INFO: Upload config file '/var/tmp/vzdumptmp29006/etc/vzdump/pct.conf' to 'backupuser@pbs@10.X.X.X:mp0' as pct.conf.blob
122: 2020-08-24 01:01:33 INFO: Upload directory '/mnt/vzsnap0' to 'backupuser@pbs@10.X.X.X:mp0' as root.pxar.didx
122: 2020-08-24 01:04:41 INFO: root.pxar: had to upload 625.99 MiB of 51.94 GiB in 188.10s, avgerage speed 3.33 MiB/s).
122: 2020-08-24 01:04:41 INFO: root.pxar: backup was done incrementally, reused 51.32 GiB (98.8%)
122: 2020-08-24 01:04:41 INFO: Uploaded backup catalog (7.39 MiB)
122: 2020-08-24 01:04:41 INFO: Duration: PT188.217461646S
122: 2020-08-24 01:04:41 INFO: End Time: 2020-08-24T01:04:41+02:00
122: 2020-08-24 01:04:42 INFO: remove vzdump snapshot
122: 2020-08-24 01:04:43 INFO: Finished Backup of VM 122 (00:03:11)

AFTER:
Code:
122: 2020-08-25 01:01:35 INFO: Starting Backup of VM 122 (lxc)
122: 2020-08-25 01:01:35 INFO: status = running
122: 2020-08-25 01:01:35 INFO: CT Name: nextcloud
122: 2020-08-25 01:01:35 INFO: including mount point rootfs ('/') in backup
122: 2020-08-25 01:01:35 INFO: including mount point mp0 ('/mnt/mp0') in backup
122: 2020-08-25 01:01:35 INFO: backup mode: snapshot
122: 2020-08-25 01:01:35 INFO: ionice priority: 7
122: 2020-08-25 01:01:35 INFO: suspend vm to make snapshot
122: 2020-08-25 01:01:35 INFO: create storage snapshot 'vzdump'
122: 2020-08-25 01:01:36 INFO: resume vm
122: 2020-08-25 01:01:36 INFO: guest is online again after 1 seconds
122: 2020-08-25 01:01:36 INFO: creating Proxmox Backup Server archive 'ct/122/2020-08-24T23:01:35Z'
122: 2020-08-25 01:01:36 INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp23570/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --include-dev /mnt/vzsnap0/./mnt/mp0 --skip-lost-and-found --backup-type ct --backup-id 122 --backup-time 1598310095 --repository backupuser@pbs@10.X.X.X:mp0
122: 2020-08-25 01:01:36 INFO: Starting backup: ct/122/2020-08-24T23:01:35Z
122: 2020-08-25 01:01:36 INFO: Client name: pve1
122: 2020-08-25 01:01:36 INFO: Starting protocol: 2020-08-25T01:01:36+02:00
122: 2020-08-25 01:01:36 INFO: Upload config file '/var/tmp/vzdumptmp23570/etc/vzdump/pct.conf' to 'backupuser@pbs@10.X.X.X:mp0' as pct.conf.blob
122: 2020-08-25 01:01:36 INFO: Upload directory '/mnt/vzsnap0' to 'backupuser@pbs@10.X.X.X:mp0' as root.pxar.didx
122: 2020-08-25 01:08:34 INFO: root.pxar: had to upload 26.56 GiB of 51.96 GiB in 417.92s, average speed 65.09 MiB/s).
122: 2020-08-25 01:08:34 INFO: root.pxar: backup was done incrementally, reused 25.40 GiB (48.9%)
122: 2020-08-25 01:08:34 INFO: Uploaded backup catalog (7.39 MiB)
122: 2020-08-25 01:08:34 INFO: Duration: PT418.097534313S
122: 2020-08-25 01:08:34 INFO: End Time: 2020-08-25T01:08:34+02:00
122: 2020-08-25 01:08:35 INFO: remove vzdump snapshot
122: 2020-08-25 01:08:36 INFO: Finished Backup of VM 122 (00:07:01)

Regards
CanisLupus
 
hi,

what is on the container? what about the /mnt/mp0 mountpoint? were there any changes in the mountpoint since the problem started occurring? since it's included in the backup..
 
There is a Nextcloud instance running in the container, together with an nginx reverse proxy and MariaDB. The mountpoint contains Nextcloud's data directory, but there were no unusual changes there. Just a few small files were changed, as happens almost every day.

What makes me wonder is that all containers show this behaviour, even ones that have been stopped for weeks. I parsed the backup logs of the last month, searching for the reuse rate in the following line:

122: 2020-08-24 01:04:41 INFO: root.pxar: backup was done incrementally, reused 51.32 GiB (98.8%)
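In case anyone wants to reproduce the numbers, the extraction can be sketched as a small shell helper (the /var/log/vzdump path is an assumption; point it at wherever your vzdump task logs end up):

```shell
# Sketch: print the reuse percentage from every vzdump task log in a directory.
# Matches lines like:
#   ... INFO: root.pxar: backup was done incrementally, reused 51.32 GiB (98.8%)
reuse_rates() {
    grep -h 'backup was done incrementally' "$1"/*.log 2>/dev/null \
        | sed -E 's/.*reused [0-9.]+ [KMGT]iB \(([0-9.]+)%\).*/\1/'
}

# usage (directory is an assumption about your setup):
if [ -d /var/log/vzdump ]; then reuse_rates /var/log/vzdump; fi
```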

Here are the results:

VMs don't seem to be affected. Interestingly, containers 141 and 150 have been shut down for weeks. As expected, their old backups could be reused 100%. But after the update (the last 3 rows), even they show a much lower deduplication/reuse rate.

1598523190333.png
 
thank you for the information and the nice spreadsheet :)

we'll take a look at reproducing and diagnosing the problem and keep you updated
 
could you also post your pveversion -v from the node?
 
Same problem here

Code:
Now after upgrade and reboot

INFO: root.pxar: had to upload 1.26 GiB of 2.33 GiB in 213.43s, average speed 6.02 MiB/s).
INFO: root.pxar: backup was done incrementally, reused 1.08 GiB (46.2%)
INFO: Uploaded backup catalog (1.14 MiB)



before :

INFO: root.pxar: had to upload 299.23 MiB of 2.33 GiB in 147.33s, avgerage speed 2.03 MiB/s).
INFO: root.pxar: backup was done incrementally, reused 2.04 GiB (87.5%)

INFO: root.pxar: had to upload 286.44 MiB of 2.33 GiB in 732.47s, avgerage speed 400.45 KiB/s).
INFO: root.pxar: backup was done incrementally, reused 2.05 GiB (88.0%)
 
Code:
pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-5
pve-kernel-helper: 6.2-5
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-2
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-10
pve-cluster: 6.1-8
pve-container: 3.1-12
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-2
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-12
pve-xtermjs: 4.7.0-1
qemu-server: 6.2-11
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1
 
Code:
pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.55-1-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-5
pve-kernel-helper: 6.2-5
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
ceph: 14.2.10-pve1
ceph-fuse: 14.2.10-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 7.7-1
ifupdown: residual config
ifupdown2: 3.0.0-1+pve2
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-2
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-10
pve-cluster: 6.1-8
pve-container: 3.1-12
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-2
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-12
pve-xtermjs: 4.7.0-1
qemu-server: 6.2-11
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1
 
One LXC is even worse :

Code:
179: 2020-08-25 03:00:03 INFO: Starting Backup of VM 179 (lxc)
179: 2020-08-25 03:00:03 INFO: status = running
179: 2020-08-25 03:00:03 INFO: CT Name: nextcloud
179: 2020-08-25 03:00:03 INFO: including mount point rootfs ('/') in backup
179: 2020-08-25 03:00:03 INFO: backup mode: snapshot
179: 2020-08-25 03:00:03 INFO: ionice priority: 7
179: 2020-08-25 03:00:03 INFO: create storage snapshot 'vzdump'
179: 2020-08-25 03:00:07 INFO: creating Proxmox Backup Server archive 'ct/179/2020-08-25T01:00:03Z'
179: 2020-08-25 03:00:07 INFO: run: /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp420796/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --backup-type ct --backup-id 179 --backup-time 1598317203 --repository root@pam@192.168.1.155:Backup_Day
179: 2020-08-25 03:00:07 INFO: Starting backup: ct/179/2020-08-25T01:00:03Z
179: 2020-08-25 03:00:07 INFO: Client name: p2
179: 2020-08-25 03:00:07 INFO: Starting protocol: 2020-08-25T03:00:07+02:00
179: 2020-08-25 03:00:07 INFO: Upload config file '/var/tmp/vzdumptmp420796/etc/vzdump/pct.conf' to 'root@pam@192.168.1.155:Backup_Day' as pct.conf.blob
179: 2020-08-25 03:00:07 INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@192.168.1.155:Backup_Day' as root.pxar.didx
179: 2020-08-25 03:08:55 INFO: root.pxar: had to upload 1.54 GiB of 1.93 GiB in 527.84s, average speed 2.98 MiB/s).
179: 2020-08-25 03:08:55 INFO: root.pxar: backup was done incrementally, reused 407.60 MiB (20.6%)

179: 2020-08-24 03:00:03 INFO: Starting Backup of VM 179 (lxc)
179: 2020-08-24 03:00:03 INFO: status = running
179: 2020-08-24 03:00:03 INFO: CT Name: nextcloud
179: 2020-08-24 03:00:04 INFO: including mount point rootfs ('/') in backup
179: 2020-08-24 03:00:04 INFO: backup mode: snapshot
179: 2020-08-24 03:00:04 INFO: ionice priority: 7
179: 2020-08-24 03:00:04 INFO: create storage snapshot 'vzdump'
179: 2020-08-24 03:00:06 INFO: creating Proxmox Backup Server archive 'ct/179/2020-08-24T01:00:03Z'
179: 2020-08-24 03:00:06 INFO: run: /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp1187903/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --backup-type ct --backup-id 179 --backup-time 1598230803 --repository root@pam@192.168.1.155:Backup_Day
179: 2020-08-24 03:00:06 INFO: Starting backup: ct/179/2020-08-24T01:00:03Z
179: 2020-08-24 03:00:06 INFO: Client name: p2
179: 2020-08-24 03:00:06 INFO: Starting protocol: 2020-08-24T03:00:06+02:00
179: 2020-08-24 03:00:06 INFO: Upload config file '/var/tmp/vzdumptmp1187903/etc/vzdump/pct.conf' to 'root@pam@192.168.1.155:Backup_Day' as pct.conf.blob
179: 2020-08-24 03:00:06 INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@192.168.1.155:Backup_Day' as root.pxar.didx
179: 2020-08-24 03:21:02 INFO: root.pxar: had to upload 338.96 MiB of 1.93 GiB in 1256.08s, avgerage speed 276.33 KiB/s).
179: 2020-08-24 03:21:02 INFO: root.pxar: backup was done incrementally, reused 1.60 GiB (82.9%)
179: 2020-08-24 03:21:02 INFO: Uploaded backup catalog (1.98 MiB)
179: 2020-08-24 03:21:02 INFO: Duration: PT1256.352609246S
179: 2020-08-24 03:21:02 INFO: End Time: 2020-08-24T03:21:02+02:00
179: 2020-08-24 03:21:03 INFO: remove vzdump snapshot
179: 2020-08-24 03:21:06 INFO: Finished Backup of VM 179 (00:21:03)
 
Code:
pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-5
pve-kernel-helper: 6.2-5
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-2
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-10
pve-cluster: 6.1-8
pve-container: 3.1-12
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-2
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-12
pve-xtermjs: 4.7.0-1
qemu-server: 6.2-11
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1

i see you've made a kernel update but haven't rebooted (the running kernel doesn't match the latest installed one). just to rule it out, could you try rebooting the machine and see if it changes anything?
 
I did this over the weekend. Problem still persists.

I can even reproduce the problem with a newly created container. I just deploy it, power it down, and make two backups. The second backup shows the same behaviour as any other powered-off container.

edit: Typos
 
Just FYI: I have the same issue.
  • After the upgrade to 0.8.11 the LXC backups dropped to around 40-50% reuse. Before they had >>90%.
  • VM backups are OK.
  • With 0.8.13 it's still the same.
 
hi,

i've been trying to reproduce the issue and it's pretty confusing...
first i was able to reproduce it consistently, but i could never reproduce it with containers which were turned off (those had a consistently high reuse rate).

and after a while the running containers also caught up. i've tested with 0.8.6 and 0.8.11

could you try using 'suspend' or 'stop' mode for the backup and compare the reuse rate?
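for example (the storage name here is just a placeholder for your PBS storage):

```shell
# run the same CT backup once per mode and compare the "reused" lines
vzdump 122 --mode suspend --storage my-pbs-storage
vzdump 122 --mode stop --storage my-pbs-storage
```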
 
hi,

first of all, I updated the PBS server and client to version 0.8.13 in the meantime (restarted PVE and PBS after the update ;)). But the problem still persisted.

So I just set up a little test for your last question with a new container. And to my surprise: everything works! The same test 'failed' days before.
Snapshot, suspend, stop, container powered on -> reuse rate always above 95%
powered off -> every time 100%

Just tested it with other containers; all seem fine. Even started the regular backup job via "Run now" -> works perfectly fine again.

The only thing puzzling me: last night the scheduled backup still had a reuse rate of only about 50%, as it had the whole time. And the PBS update was performed yesterday, so the new version was already running last night.

I'll check tomorrow morning, if the nightly backup was successful. But I hope and think so.
 
Much better in stop mode
Code:
INFO: starting new backup job: vzdump 179 --mode stop --node p2 --storage Backup_Day --remove 0
INFO: Starting Backup of VM 179 (lxc)
INFO: Backup started at 2020-09-02 00:13:43
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: nextcloud
INFO: including mount point rootfs ('/') in backup
INFO: stopping vm
INFO: creating Proxmox Backup Server archive 'ct/179/2020-09-01T22:13:43Z'
/dev/rbd5
INFO: run: /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp2243839/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --backup-type ct --backup-id 179 --backup-time 1598998423 --repository root@pam@192.168.1.155:Backup_Day
INFO: Starting backup: ct/179/2020-09-01T22:13:43Z
INFO: Client name: p2
INFO: Starting protocol: 2020-09-02T00:13:53+02:00
INFO: Upload config file '/var/tmp/vzdumptmp2243839/etc/vzdump/pct.conf' to 'root@pam@192.168.1.155:Backup_Day' as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@192.168.1.155:Backup_Day' as root.pxar.didx
INFO: root.pxar: had to upload 248.27 MiB of 1.94 GiB in 305.08s, average speed 833.33 KiB/s).
INFO: root.pxar: backup was done incrementally, reused 1.70 GiB (87.5%)
INFO: Uploaded backup catalog (1.98 MiB)
INFO: Duration: PT305.314463927S
INFO: End Time: 2020-09-02T00:18:59+02:00
INFO: restarting vm
INFO: guest is online again after 316 seconds
INFO: Finished Backup of VM 179 (00:05:16)
INFO: Backup finished at 2020-09-02 00:18:59
INFO: Backup job finished successfully
TASK OK


the backup failed in suspend mode
 
Just checked last night's backup: everything works fine again. Reuse rates are high, backup times are low again.

Seems to have fixed itself. At least for me. Nevertheless, thank you very much for investigating this problem.
 
Absolutely weird... the same for me... I did not change anything (no update, no extra reboot, etc.)

Now:
Code:
INFO: root.pxar: backup was done incrementally, reused 1.91 GiB (97.3%)
INFO: root.pxar: backup was done incrementally, reused 70.79 GiB (99.8%)
INFO: root.pxar: backup was done incrementally, reused 31.36 GiB (95.7%)
INFO: root.pxar: backup was done incrementally, reused 14.69 GiB (99.1%)
INFO: root.pxar: backup was done incrementally, reused 10.32 GiB (96.6%)
INFO: root.pxar: backup was done incrementally, reused 955.53 MiB (94.7%)
INFO: root.pxar: backup was done incrementally, reused 2.93 GiB (78.4%)
INFO: root.pxar: backup was done incrementally, reused 6.58 GiB (97.0%)
INFO: root.pxar: backup was done incrementally, reused 2.81 GiB (98.3%)
INFO: root.pxar: backup was done incrementally, reused 1.38 GiB (90.3%)

Backup 30.08.2020:
Code:
INFO: root.pxar: backup was done incrementally, reused 1.00 GiB (50.9%)
INFO: root.pxar: backup was done incrementally, reused 35.05 GiB (49.4%)
INFO: root.pxar: backup was done incrementally, reused 24.64 GiB (75.2%)
INFO: root.pxar: backup was done incrementally, reused 12.51 GiB (83.8%)
INFO: root.pxar: backup was done incrementally, reused 6.85 GiB (64.1%)
INFO: root.pxar: backup was done incrementally, reused 194.86 MiB (19.3%)
INFO: root.pxar: backup was done incrementally, reused 836.15 MiB (22.0%)
INFO: root.pxar: backup was done incrementally, reused 4.46 GiB (65.7%)
INFO: root.pxar: backup was done incrementally, reused 2.30 GiB (80.6%)
INFO: root.pxar: backup was done incrementally, reused 493.76 MiB (31.4%)

Self-healing after 1 week, really strange... But nevertheless, as CanisLupus said, thank you very much for your support and the wonderful products.
 
Absolutely weird... the same for me... I did not change anything (no update/no extra reboot/etc)

--> same here. After the test with a backup in stop mode yesterday, today all is OK on all backups.

strange

Code:
INFO: starting new backup job: vzdump 179 198 167 --storage Backup_Day --mailto alexandre@mendes63.fr --quiet 1 --mailnotification always --mode snapshot --compress zstd
INFO: skip external VMs: 167, 198
INFO: Starting Backup of VM 179 (lxc)
INFO: Backup started at 2020-09-02 03:00:03
INFO: status = running
INFO: CT Name: nextcloud
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
/dev/rbd11
INFO: creating Proxmox Backup Server archive 'ct/179/2020-09-02T01:00:03Z'
INFO: run: /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp2393968/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --backup-type ct --backup-id 179 --backup-time 1599008403 --repository root@pam@192.168.1.155:Backup_Day
INFO: Starting backup: ct/179/2020-09-02T01:00:03Z
INFO: Client name: p2
INFO: Starting protocol: 2020-09-02T03:00:05+02:00
INFO: Upload config file '/var/tmp/vzdumptmp2393968/etc/vzdump/pct.conf' to 'root@pam@192.168.1.155:Backup_Day' as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@192.168.1.155:Backup_Day' as root.pxar.didx
INFO: root.pxar: had to upload 102.48 MiB of 1.94 GiB in 339.38s, average speed 309.20 KiB/s).
INFO: root.pxar: backup was done incrementally, reused 1.84 GiB (94.8%)
INFO: Uploaded backup catalog (1.98 MiB)
INFO: Duration: PT339.591738071S
INFO: End Time: 2020-09-02T03:05:44+02:00
INFO: remove vzdump snapshot
Removing snap: 100% complete...done.
INFO: Finished Backup of VM 179 (00:05:44)
INFO: Backup finished at 2020-09-02 03:05:47
INFO: Backup job finished successfully
TASK OK

and for the other one I didn't touch:

Code:
INFO: starting new backup job: vzdump 179 198 167 --mode snapshot --quiet 1 --compress zstd --storage Backup_Day --mailto alexandre@mendes63.fr --mailnotification always
INFO: skip external VMs: 167, 179
INFO: Starting Backup of VM 198 (lxc)
INFO: Backup started at 2020-09-02 03:00:03
INFO: status = running
INFO: CT Name: Mx
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
/dev/rbd2
INFO: creating Proxmox Backup Server archive 'ct/198/2020-09-02T01:00:03Z'
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp527968/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --backup-type ct --backup-id 198 --backup-time 1599008403 --repository root@pam@192.168.1.155:Backup_Day
INFO: Starting backup: ct/198/2020-09-02T01:00:03Z
INFO: Client name: p3
INFO: Starting protocol: 2020-09-02T03:00:05+02:00
INFO: Upload config file '/var/tmp/vzdumptmp527968/etc/vzdump/pct.conf' to 'root@pam@192.168.1.155:Backup_Day' as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@192.168.1.155:Backup_Day' as root.pxar.didx
INFO: root.pxar: had to upload 298.24 MiB of 2.34 GiB in 236.64s, average speed 1.26 MiB/s).
INFO: root.pxar: backup was done incrementally, reused 2.05 GiB (87.6%)
INFO: Uploaded backup catalog (1.14 MiB)
INFO: Duration: PT238.121732979S
INFO: End Time: 2020-09-02T03:04:03+02:00
INFO: remove vzdump snapshot
Removing snap: 100% complete...done.
INFO: Finished Backup of VM 198 (00:04:02)
INFO: Backup finished at 2020-09-02 03:04:05
INFO: Backup job finished successfully
TASK OK

very strange behavior
 
