1st Full Backup of filesystem shows incremental in logs

nikhilbhalwankar

Hi,

I initiated the first full backup of a filesystem to Proxmox Backup Server using the Proxmox Backup Client. The backup completed successfully; however, the log states that the backup was done incrementally. Is this correct? Below are the last few lines of the log. The size of the source filesystem is approximately 338 GB.

Code:
processed 376.76 GiB in 1h 45m 0s, uploaded 311.136 GiB
processed 381.649 GiB in 1h 46m 0s, uploaded 313.96 GiB
processed 385.389 GiB in 1h 47m 0s, uploaded 317.209 GiB
processed 389.399 GiB in 1h 48m 0s, uploaded 319.941 GiB
processed 392.815 GiB in 1h 49m 0s, uploaded 322.988 GiB
processed 396.27 GiB in 1h 50m 0s, uploaded 326.172 GiB
processed 400.354 GiB in 1h 51m 0s, uploaded 329.361 GiB
sales01.mpxar: had to backup 1.786 MiB of 1.786 MiB (compressed 433.604 KiB) in 6679.09 s (average 280 B/s)
sales01.ppxar: had to backup 330.384 GiB of 401.494 GiB (compressed 246.237 GiB) in 6679.17 s (average 50.652 MiB/s)
sales01.ppxar: backup was done incrementally, reused 71.11 GiB (17.7%)
Duration: 6679.36s  
End Time: Wed Apr 30 10:58:16 2025
 
If parts of the data are already stored in the datastore, this is possible. Did you already back up a system with the same operating system or data files?
 
This question has come up a couple of times.
Staff members have weighed in, and they did not seem to regard it as unusual or strange.

Ya know, I get that this is OK. I've seen it myself in the very first backup done by a brand-new PBS system.
... But the way it reads in the logs, "backup was done incrementally", is just a really weird readout.

The next bit is understandable, though. There are repeating data structures throughout any hard disk.
PBS will only get one copy of those data structures, and after that it will reuse them.
So "reused 71.11 GiB (17.7%)" makes sense to me.
 
According to the output, there might already be a previous backup snapshot in that backup group. Chunks of that snapshot can be reused, creating a (fully self-contained) incremental backup. Are you sure that there was no previous snapshot present for that group? Please also share the start of the backup log output; that should tell us more.

Please note that there are two optimizations at play here:
  • An incremental backup is created when there is a previous backup snapshot present in that backup group. A list of the chunks present on the server is then sent to the client when starting the backup, so the client can skip re-uploading these. This does not require any logical connection between the snapshots per se; chunks can be reused as long as their digests match.
  • With the change detection mode set to metadata, the client will, in addition to the incremental mode, also use a matching metadata archive of the previous snapshot (identified by the archive name) to check whether files have changed since the last backup run, and use this information to avoid re-reading and re-chunking unchanged files, thereby further speeding up the backup (see the sketch after this list).
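To make those two points concrete, here is a rough sketch of the client-side decisions. Names like known and FileMeta, and the exact metadata fields compared, are assumptions for illustration; this is not the real PBS API.

Code:
use std::collections::HashSet;

// PBS identifies chunks by their SHA-256 digest.
type Digest = [u8; 32];

/// Optimization 1 (sketch): at backup start the server sends the digests
/// referenced by the previous snapshot in this group; the client may skip
/// uploading exactly those chunks.
fn must_upload(chunk_digest: &Digest, known: &HashSet<Digest>) -> bool {
    !known.contains(chunk_digest)
}

/// Optimization 2 (sketch): with change detection mode "metadata", the
/// client compares a file's current metadata against the entry in the
/// previous snapshot's metadata archive (the .mpxar). If nothing changed,
/// it can reuse the file's chunk references without re-reading or
/// re-chunking the file. The field choice here is an assumption.
struct FileMeta {
    size: u64,
    mtime: i64,
    ctime: i64,
}

fn can_reuse_file(current: &FileMeta, previous: &FileMeta) -> bool {
    current.size == previous.size
        && current.mtime == previous.mtime
        && current.ctime == previous.ctime
}

On a genuinely first backup the known list would be empty and neither optimization could kick in, which is why checking for an earlier snapshot in the group (e.g. with proxmox-backup-client snapshot list, depending on client version) is the first thing to verify.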
 
If parts of the data are already stored in the datastore, this is possible. Did you already back up a system with the same operating system or data files?
Please note: while this is true for the server side, the client does not know about all chunks on the server. It only gets a list of known chunks from the previous backup snapshot and is only allowed to skip re-upload for these. If it does, however, re-upload a chunk that is not in the known-chunk list but is already present on the server, the server will deduplicate it without the client knowing. This is for security reasons: the client should not be able to see which chunks are present on the server.
If the backup contains many identical chunks (e.g., from large all-zero blocks or multiple identical files), then the client will also be able to skip re-uploading subsequent encounters of the same chunk digest.
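A short sketch of both behaviours, again with hypothetical names; the key property is that the server's dedup outcome is invisible to the client:

Code:
use std::collections::{HashMap, HashSet};

type Digest = [u8; 32];

/// Server side (sketch): an uploaded chunk whose digest already exists is
/// silently deduplicated. The response to the client is identical either
/// way, so uploads cannot be used to probe what other backups have stored.
fn server_store_chunk(store: &mut HashMap<Digest, Vec<u8>>, digest: Digest, data: Vec<u8>) {
    store.entry(digest).or_insert(data); // keep the existing copy if present
}

/// Client side (sketch): within one run the client remembers which digests
/// it has already uploaded, so repeated identical chunks (all-zero blocks,
/// duplicate files) are only sent once.
fn client_should_upload(digest: Digest, sent_this_run: &mut HashSet<Digest>) -> bool {
    sent_this_run.insert(digest) // false if this digest was already sent
}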