[SOLVED] proxmox-backup-client to backup photos : strange deduplication factor and long backup time after some days

cryonie

Member
May 8, 2020
Hello,

A few days ago I set up proxmox-backup-client on my OMV VM to back up my photos and documents to my PBS server.
I have about 350 GB to back up (two directories in the same batch).
My backup server is not very fast and sits behind a slow Ethernet link (100-150 Mb/s).
So it's slow, but... at least I know why most of the time :)

My first backup took 7h39; here is the summary:
Code:
2021-02-23T16:34:26+01:00: Upload statistics for 'photos.pxar.didx'
2021-02-23T16:34:26+01:00: UUID: 236b18a10b5e4e43bff32f28e8e1017e
2021-02-23T16:34:26+01:00: Checksum: c64a25eef5b2a21e0123f0f8c8b5950031a54216def3a6f677f6422b5dade377
2021-02-23T16:34:26+01:00: Size: 332049471286
2021-02-23T16:34:26+01:00: Chunk count: 87115
2021-02-23T16:34:26+01:00: Upload size: 300195113528 (90%)
2021-02-23T16:34:26+01:00: Duplicates: 9118+3586 (14%)
2021-02-23T16:34:26+01:00: Compression: 99%

2021-02-23T17:22:43+01:00: Upload statistics for 'virginie.pxar.didx'
2021-02-23T17:22:43+01:00: UUID: 2fbc8b3d2e8f4e3182faa83253fe0081
2021-02-23T17:22:43+01:00: Checksum: 7cff664860efb644096dd02490dbf4e50f0ad5bd673799da6b6ecf2342dc12b1
2021-02-23T17:22:43+01:00: Size: 35343914756
2021-02-23T17:22:43+01:00: Chunk count: 9377
2021-02-23T17:22:43+01:00: Upload size: 34954527391 (98%)
2021-02-23T17:22:43+01:00: Duplicates: 144+1040 (12%)
2021-02-23T17:22:43+01:00: Compression: 95%
So from here everything looked cool. It took around 350 GB on the backup datastore, which seems fine.

The second day's backup took 1h:
Code:
2021-02-24T01:52:37+01:00: Upload statistics for 'photos.pxar.didx'
2021-02-24T01:52:37+01:00: UUID: 9ec55587c735494a9410d3a72bd92388
2021-02-24T01:52:37+01:00: Checksum: ee1ebcd1ea88dbb2a3a1a3a8416dabb304ec45265769e847941e14f1bfe30afd
2021-02-24T01:52:37+01:00: Size: 332049471290
2021-02-24T01:52:37+01:00: Chunk count: 87115
2021-02-24T01:52:37+01:00: Upload size: 0 (0%)
2021-02-24T01:52:37+01:00: Duplicates: 87115+0 (100%)

2021-02-24T01:58:30+01:00: Upload statistics for 'virginie.pxar.didx'
2021-02-24T01:58:30+01:00: UUID: 6d883b04703d4873a5c04e127be83736
2021-02-24T01:58:30+01:00: Checksum: 7cff664860efb644096dd02490dbf4e50f0ad5bd673799da6b6ecf2342dc12b1
2021-02-24T01:58:30+01:00: Size: 35343914756
2021-02-24T01:58:30+01:00: Chunk count: 9377
2021-02-24T01:58:30+01:00: Upload size: 0 (0%)
2021-02-24T01:58:30+01:00: Duplicates: 9377+0 (100%)
Perfect! Just faster, with nothing unneeded transferred and no extra space taken.

My last normal backup was this one (it took 1h43):
Code:
2021-03-08T02:34:53+01:00: Upload statistics for 'photos.pxar.didx'
2021-03-08T02:34:53+01:00: UUID: 025ff7d9b26b4a43bc36696f33660210
2021-03-08T02:34:53+01:00: Checksum: ee1ebcd1ea88dbb2a3a1a3a8416dabb304ec45265769e847941e14f1bfe30afd
2021-03-08T02:34:53+01:00: Size: 332049471290
2021-03-08T02:34:53+01:00: Chunk count: 87115
2021-03-08T02:34:53+01:00: Upload size: 0 (0%)
2021-03-08T02:34:53+01:00: Duplicates: 87115+0 (100%)

2021-03-08T02:34:54+01:00: download 'virginie.pxar.didx' from previous backup.
2021-03-08T02:34:54+01:00: created new dynamic index 3 ("host/OMV/2021-03-08T00:00:01Z/virginie.pxar.didx")
2021-03-08T02:43:20+01:00: Upload statistics for 'virginie.pxar.didx'
2021-03-08T02:43:20+01:00: UUID: 282dbe66cc544bb7b2f0169e5e793828
2021-03-08T02:43:20+01:00: Checksum: 7cff664860efb644096dd02490dbf4e50f0ad5bd673799da6b6ecf2342dc12b1
2021-03-08T02:43:20+01:00: Size: 35343914756
2021-03-08T02:43:20+01:00: Chunk count: 9377
2021-03-08T02:43:20+01:00: Upload size: 0 (0%)
2021-03-08T02:43:20+01:00: Duplicates: 9377+0 (100%)
Seems... really good :)
I didn't add or modify any documents/photos, so the backup takes no space on the datastore because everything is deduplicated against the previous backup.
Perfect!
And it stayed the same (not always 100% duplicates, but nearly) for several days.

At this point, proxmox-backup-client got OOM-killed by the OMV kernel every morning for 2 or 3 days (I had reduced the RAM allocated to OMV because I thought I had given it too much... well, a mistake :)).
I gave OMV more RAM and ran the backup again... and here I think something went wrong.
The backup took 4h50:
Code:
2021-03-10T21:17:37+01:00: Upload statistics for 'photos.pxar.didx'
2021-03-10T21:17:37+01:00: UUID: 3a2515f174614f2d9fa05cef0d207a7a
2021-03-10T21:17:37+01:00: Checksum: ef8dec6132d7ecd838f5b406102a3560597d20be41457995966a3398691566ca
2021-03-10T21:17:37+01:00: Size: 332049471290
2021-03-10T21:17:37+01:00: Chunk count: 87115
2021-03-10T21:17:37+01:00: Upload size: 215657613523 (64%)
2021-03-10T21:17:37+01:00: Duplicates: 40482+0 (46%)
2021-03-10T21:17:37+01:00: Compression: 98%

2021-03-10T21:17:37+01:00: created new dynamic index 3 ("host/OMV/2021-03-10T15:52:06Z/virginie.pxar.didx")
2021-03-10T21:42:58+01:00: Upload statistics for 'virginie.pxar.didx'
2021-03-10T21:42:58+01:00: UUID: e686039c92c44810be2a0fbccd26023a
2021-03-10T21:42:58+01:00: Checksum: 5207bb0d0603c774cfbaf3f118867dc4e948459dcbf1583683b6096f2b442108
2021-03-10T21:42:58+01:00: Size: 35343914756
2021-03-10T21:42:58+01:00: Chunk count: 9376
2021-03-10T21:42:58+01:00: Upload size: 21627468095 (61%)
2021-03-10T21:42:58+01:00: Duplicates: 4518+466 (53%)
2021-03-10T21:42:58+01:00: Compression: 95%
The deduplication ratio is really bad and the upload size is enormous (it took about 250 GB on my datastore).
I don't even know if I changed any document or photo in those 2 days. I think everything was the same on my drives on 08/03 and on 10/03.

So I need help understanding this.
Why did PBS do that? Did I do something wrong? If I change nothing in the files and just skip the backup for 2 days before running it again, deduplication should stay at 100% in my mind.

If you can shed some light on this one, I'd appreciate it :)

Thanks
 
Again this morning... A backup was made automatically just 3h after the last one.
It took 3h39:
Code:
2021-03-11T05:08:42+01:00: Upload statistics for 'photos.pxar.didx'
2021-03-11T05:08:42+01:00: UUID: 0ba6f55d32914a01a44f518f9d41b650
2021-03-11T05:08:42+01:00: Checksum: 688d2251f655deb948f2fb8da7c9efe95322ddc7828a3a5213abbc0f5335f1e2
2021-03-11T05:08:42+01:00: Size: 332049471290
2021-03-11T05:08:42+01:00: Chunk count: 87117
2021-03-11T05:08:42+01:00: Upload size: 215657613523 (64%)
2021-03-11T05:08:42+01:00: Duplicates: 40483+0 (46%)
2021-03-11T05:08:42+01:00: Compression: 98%

2021-03-11T05:39:14+01:00: Upload statistics for 'catalog.pcat1.didx'
2021-03-11T05:39:14+01:00: UUID: 1ee862ea4189454c87c25513e3671751
2021-03-11T05:39:14+01:00: Checksum: 7e3cbd0fb2e43dd89f2663c09d9b6444b322486398bb0e5c6551be34d9a784de
2021-03-11T05:39:14+01:00: Size: 2979151
2021-03-11T05:39:14+01:00: Chunk count: 5
2021-03-11T05:39:14+01:00: Upload size: 2979151 (100%)
2021-03-11T05:39:14+01:00: Duplicates: 0+5 (100%)
2021-03-11T05:39:14+01:00: Compression: 30%

The first part of the backup has only 46% duplicates, even though the files were exactly the same as 3h before.
The second part of the backup is fine (100% duplicates).

And because of those last 2 backups (this morning's and yesterday's) that were only ~50% deduplicated, my datastore is now full...
 
if you did not change the files, this looks very weird... can you post the versions of the client and server, and also the proxmox-backup-client invocation you use?
 
Hello,

Here they are :
PBS Server info :

Code:
proxmox-backup: 1.0-4 (running kernel: 5.4.103-1-pve)
proxmox-backup-server: 1.0.9-1 (running version: 1.0.9)
pve-kernel-5.4: 6.3-7
pve-kernel-helper: 6.3-7
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.101-1-pve: 5.4.101-1
pve-kernel-5.4.98-1-pve: 5.4.98-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.65-1-pve: 5.4.65-1
ifupdown2: 3.0.0-1+pve3
libjs-extjs: 6.0.1-10
proxmox-backup-docs: 1.0.9-1
proxmox-backup-client: 1.0.9-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-6
pve-xtermjs: 4.7.0-3
smartmontools: 7.2-pve2
zfsutils-linux: 2.0.3-pve2

Client on OMV :
Code:
sudo proxmox-backup-client version
client version: 1.0.9

Backup command :
Code:
proxmox-backup-client backup photos.pxar:/srv/50a5e521-892b-424e-ba95-013a8af3afec/Photos virginie.pxar:/srv/50a5e521-892b-424e-ba95-013a8af3afec/Virginie --exclude .recycle | tee /home/backup_pbs-`date +%Y-%m-%d-%H-%M`.log

One thing: I think the client and server had an update pending yesterday, and I applied it on both before the backups (but I don't know how to check whether that's true or which version I was running yesterday... that is far above my competence :)).

A little more information on the disks in OMV:
I have 2 disks that are merged (mergerfs).
Each directory lives entirely on one disk (not split between disks), and they didn't move from one disk to the other.
I mention this, but in my view, since I point my backup at the merged mount, PBS should not be affected even if they had moved. I checked anyway, and nothing moved.

Thanks
 
ok so if the data did not change, did maybe the metadata? filenames/mtimes/etc.? that can also have an impact on which chunks get reused
 
Hello,

When I check on Windows (via SMB), the creation and modification dates are OK (old).
The names are untouched... really, I don't see any difference.

Is there a way (on Linux) to extract the metadata PBS looks at, so I can check whether anything strange is going on?

I doubt it comes from the files, because between my backups on 10/03 and 11/03 we didn't touch the files, and still only 46% of the photo chunks are duplicates.
If I had a way to extract the metadata PBS uses, I could check some files to be sure, but as far as I can tell... nothing changed in my files.
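For reference, here is a minimal sketch of how one might dump the per-file metadata an archiver like pxar typically records (mode, owner, size, mtime, plus ACLs and xattrs). Which of these pxar actually serializes is an assumption on my part, and this is not an official PBS tool, just standard Linux commands for a side-by-side comparison:

```shell
#!/bin/sh
# Create a scratch file, then dump the metadata fields that plausibly
# matter for chunk reuse: mode, owner, size, and mtime in raw epoch
# seconds (raw values make representation changes easier to spot).
f=$(mktemp)
stat --format 'mode=%a uid=%u gid=%g size=%s mtime=%Y' "$f"
# ACLs and extended attributes, if the tools are installed (packages: acl, attr)
command -v getfacl  >/dev/null 2>&1 && getfacl -p "$f"
command -v getfattr >/dev/null 2>&1 && getfattr -d -m - "$f"
rm -f "$f"
```

Running this on the same file in the restored and the live tree, and diffing the output, would surface any metadata drift.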
 
I took an example between the first and last backup of one directory.
The names are the same, and so are the dates.
[screenshots: PBS file browser showing identical names and dates in both snapshots]

I don't know how to check further... do you have an exhaustive list of the information PBS uses that, when modified, can lead to "new" chunks?
Thanks
 
I restored a directory (40 or so files) from my first backup and compared the md5sums... they are totally identical.
Only one file in the restored directory has a different date/time, and that date/time is the moment I made the restore (later than the date of the backup itself).
[screenshots: `ls -la` and `md5sum` output for both directories]

Left is the restored directory.
Right is the actual "live" directory.

If you have any idea of what I could test/try to find out why, let me know.
Right now (because my datastore is full after those two <50%-deduplicated backups on these files), I'll have to destroy my backups and redo a full backup from scratch, hoping this issue doesn't happen again... but I'd prefer to understand why it happened and how I can prevent it in the future :)

Thanks
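A manual md5sum check like the one above can be automated over whole trees: checksum every file relative to each root, sort, and diff the two lists. A minimal sketch (the temp directories stand in for the restored and live directories):

```shell
#!/bin/sh
# Compare two directory trees by content: checksum every file relative to
# its root, sort by path, and diff the lists. Identical trees diff clean.
tree_sums() {
    ( cd "$1" && find . -type f -exec md5sum {} + | sort -k 2 )
}
a=$(mktemp -d); b=$(mktemp -d)            # stand-ins for restored vs live
echo hello > "$a/x"; echo hello > "$b/x"  # identical sample content
tree_sums "$a" > "$a.sums"
tree_sums "$b" > "$b.sums"
diff "$a.sums" "$b.sums" && echo "trees identical"
rm -rf "$a" "$b" "$a.sums" "$b.sums"
```

Note this compares content only; metadata (mtime, ACLs, xattrs) needs a separate check.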
 
ok so it seems the basic metadata + data does not change

this leaves basically acls and/or xattrs

do you (or omv) set acls or xattrs?

can you post the output of 'getfattr FILENAME' and 'getfacl FILENAME' for one of the files,
and maybe check whether that changes?

also could you post your mergerfs mount options? so we can check if we can reproduce it here
 
Hello,

Here are my fstab lines for the 2 disks (the disks themselves plus the mergerfs line):
Code:
/dev/disk/by-label/WDRED2fs             /srv/dev-disk-by-label-WDRED2fs ext4    defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl  0 2
/dev/disk/by-label/WDRED1fs             /srv/dev-disk-by-label-WDRED1fs ext4    defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl  0 2
/srv/dev-disk-by-label-WDRED1fs:/srv/dev-disk-by-label-WDRED2fs         /srv/50a5e521-892b-424e-ba95-013a8af3afec       fuse.mergerfs   defaults,allow_other,cache.files=off,use_ino,category.create=epmfs,minfreespace=4G,fsname=WD_mergerfs:50a5e521-892b-424e-ba95-013a8af3afec,x-systemd.requires=/srv/dev-disk-by-label-WDRED1fs,x-systemd.requires=/srv/dev-disk-by-label-WDRED2fs        0 0
I don't know much about it; I just set it up via the OMV GUI.
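For readers unfamiliar with mergerfs, here is a rough annotation of the options in that mount line, based on my reading of the mergerfs documentation (a sketch, not authoritative):

```shell
# Rough meaning of the mergerfs mount options above (per mergerfs docs):
#   allow_other            let non-root users (e.g. the SMB service) use the mount
#   cache.files=off        bypass the page cache for file I/O through the union
#   use_ino                report the underlying filesystems' inode numbers
#   category.create=epmfs  create new files on the existing path with most free space
#   minfreespace=4G        skip branches with less than 4G free when creating
#   x-systemd.requires=... only mount after both branch filesystems are mounted
```

The relevant point for a backup client is that none of these should change file content or metadata between runs; they only affect placement and caching.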

I don't have getfattr on my Debian OMV, so I used lsattr... I hope that's good enough.
[screenshot: lsattr output for both directories]

And here for the ACLs:
[screenshot: getfacl output for both directories]
I compared the results and... they seem totally identical.

As always, left is the restored directory and right is the "live" directory.

Regards
 
Regarding the question whether I or OMV set ACLs or xattrs, I would say... I don't know.

On the OMV share, I can manage privileges (and that's how I manage access):
[screenshot: OMV shared-folder privileges dialog]

And I have another button for ACLs, but I don't check any box there:
[screenshot: OMV ACL dialog with no boxes checked]

I hope that helps.
Thanks :)
 
The problem may be that I'm looking at the wrong directory (maybe this one was not considered changed by PBS).
Maybe I should restore ALL my photos and find a way to run the same comparison on all those files.

Do you think it would help if I did that?

EDIT: doing it anyway... I don't know yet how I'll compare all those files, but I'll try to find a way
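For comparing a full restore against the live tree, rsync can do the whole checksum pass in one command. A sketch with placeholder directories (`-n` makes it a dry run so nothing is copied, `-c` forces content checksums instead of size/mtime, and `--itemize-changes` prints one line per differing file):

```shell
#!/bin/sh
# Dry-run checksum comparison of two trees with rsync: nothing is copied,
# and each output line flags a file whose content differs.
restored=$(mktemp -d); live=$(mktemp -d)   # placeholders for the real dirs
echo photo > "$restored/img.jpg"
echo photo > "$live/img.jpg"
rsync -rcn --itemize-changes "$restored/" "$live/"
```

On a few hundred GB this is slow (it reads every byte on both sides), but it answers the "did anything change at all?" question definitively.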
 
thanks for your answer.. we investigated a bit, since we could reproduce a similar issue where the pxar archive would change slightly on each subsequent backup
we have pushed a fix for this issue and it will be in a future proxmox-backup-client version
maybe that is the bug you are hitting

commit of the fix for reference: https://git.proxmox.com/?p=pxar.git;a=commit;h=180186c5676d34e6f969ef223241bdbaa01f5f44

oh, because i forgot to mention: this issue has nothing to do with acls/xattrs or mergerfs, it was a problem in our byte representation of the mtime
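To illustrate why an mtime-encoding bug can slip past a visual check: file listings show mtime at second (or minute) granularity, while the filesystem stores it with nanosecond precision, so two identical-looking timestamps can still serialize to different bytes. A small demonstration with GNU stat (this only illustrates the precision gap, it is not the actual pxar encoding):

```shell
#!/bin/sh
# ls/SMB views round mtime to seconds or minutes, but the filesystem keeps
# nanoseconds. If an archiver serializes the full value, any change in how
# those bytes are produced alters the archive even though `ls -l` looks
# identical on both sides.
f=$(mktemp)
stat --format 'human view : %y' "$f"    # rounded, what you compare by eye
stat --format 'full epoch : %.9Y' "$f"  # seconds.nanoseconds, what gets encoded
rm -f "$f"
```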
 
Hello,

Thank you for your tests, and I hope that this is indeed my problem :)
If I may ask, how will I know when this fix is available? Should I assume it will be in the next update, or should I monitor something somewhere else?

Thank you again for your great help :)
 
Should I assume that it will be in the next update, or should I somehow monitor something somewhere else ?
it will be in the next version of the proxmox-backup-client package. the current version in git is 1.0.10-1, so anything higher than that will contain the fix
 
Hello @dcsapak, is there any way to get the list of chunks of a particular snapshot?
I just installed 1.0.11-1 and the backup is running. When it finishes (in about 10 hours), I want to compare the lists of chunks used by my old backups, my last "bad" backups, and the new one, to see which chunks are shared between which backups.
Is there a way to do this?
Thanks :)
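As far as I can tell from the proxmox-backup source, a `.didx` dynamic index is a 4096-byte header followed by 40-byte entries, each a little-endian u64 archive offset plus a 32-byte SHA-256 chunk digest. Assuming that layout holds, the digests could be extracted with standard tools and compared between snapshots with `diff` or `comm`. Treat this as an unofficial sketch against an assumed on-disk format, not a supported interface:

```shell
#!/bin/sh
# Extract chunk digests from a .didx file, ASSUMING the layout:
#   4096-byte header, then repeated { u64 LE offset, 32-byte digest }.
# Each 40-byte entry is 80 hex chars: 16 of offset + 64 of digest.
didx_digests() {
    tail -c +4097 "$1" | od -An -v -t x1 | tr -d ' \n' | fold -w 80 | cut -c 17-80
}
# Build a fake one-entry index to demonstrate the extraction.
f=$(mktemp)
{ head -c 4096 /dev/zero   # header
  head -c 8    /dev/zero   # offset (8 bytes)
  head -c 32   /dev/zero   # digest (32 bytes -> 64 hex zeros)
} > "$f"
didx_digests "$f"          # prints the 64-zero fake digest
rm -f "$f"
```

Sorted digest lists from two snapshots' `.didx` files could then be compared with `comm -12` to count shared chunks.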
 
So, night backup information :

Backup on 1.0.11-1 took 4h41 :

Code:
2021-03-24T01:10:21+01:00: starting new backup on datastore 'backup_raid10': "host/OMV/2021-03-24T00:10:21Z"
2021-03-24T01:10:21+01:00: download 'index.json.blob' from previous backup.
2021-03-24T01:10:21+01:00: created new dynamic index 1 ("host/OMV/2021-03-24T00:10:21Z/catalog.pcat1.didx")
2021-03-24T01:10:21+01:00: register chunks in 'photos.pxar.didx' from previous backup.
2021-03-24T01:10:21+01:00: download 'photos.pxar.didx' from previous backup.
2021-03-24T01:10:22+01:00: created new dynamic index 2 ("host/OMV/2021-03-24T00:10:21Z/photos.pxar.didx")
2021-03-24T05:27:44+01:00: Upload statistics for 'photos.pxar.didx'
2021-03-24T05:27:44+01:00: UUID: 4c5e8d803e67429ca4fb6f7e9315e53e
2021-03-24T05:27:44+01:00: Checksum: 8fa78ea3e9a43335822be8afccb093311b360886e3b39511d5095cdabfd91e0d
2021-03-24T05:27:44+01:00: Size: 332049471290
2021-03-24T05:27:44+01:00: Chunk count: 87115
2021-03-24T05:27:44+01:00: Upload size: 215657613523 (64%)
2021-03-24T05:27:44+01:00: Duplicates: 40482+46632 (99%)
2021-03-24T05:27:44+01:00: Compression: 98%
2021-03-24T05:27:44+01:00: successfully closed dynamic index 2
2021-03-24T05:27:44+01:00: register chunks in 'virginie.pxar.didx' from previous backup.
2021-03-24T05:27:44+01:00: download 'virginie.pxar.didx' from previous backup.
2021-03-24T05:27:44+01:00: created new dynamic index 3 ("host/OMV/2021-03-24T00:10:21Z/virginie.pxar.didx")
2021-03-24T05:52:17+01:00: Upload statistics for 'virginie.pxar.didx'
2021-03-24T05:52:17+01:00: UUID: bfc5f96507bd4348a6bdae195c0a7ccf
2021-03-24T05:52:17+01:00: Checksum: bac275dc32046d02c8f5e3169d7acdd5385d2639da88f15d56bd257739ad445c
2021-03-24T05:52:17+01:00: Size: 35343914756
2021-03-24T05:52:17+01:00: Chunk count: 9377
2021-03-24T05:52:17+01:00: Upload size: 21627468095 (61%)
2021-03-24T05:52:17+01:00: Duplicates: 4518+4858 (99%)
2021-03-24T05:52:17+01:00: Compression: 95%
2021-03-24T05:52:17+01:00: successfully closed dynamic index 3
2021-03-24T05:52:17+01:00: Upload statistics for 'catalog.pcat1.didx'
2021-03-24T05:52:17+01:00: UUID: ebc90ff6a6fb450eb38339c56f0a6d1a
2021-03-24T05:52:17+01:00: Checksum: 7e3cbd0fb2e43dd89f2663c09d9b6444b322486398bb0e5c6551be34d9a784de
2021-03-24T05:52:17+01:00: Size: 2979151
2021-03-24T05:52:17+01:00: Chunk count: 5
2021-03-24T05:52:17+01:00: Upload size: 2979151 (100%)
2021-03-24T05:52:17+01:00: Duplicates: 0+5 (100%)
2021-03-24T05:52:17+01:00: Compression: 30%
2021-03-24T05:52:17+01:00: successfully closed dynamic index 1
2021-03-24T05:52:17+01:00: add blob "/mnt/datastore/backup_raid10/host/OMV/2021-03-24T00:10:21Z/index.json.blob" (380 bytes, comp: 380)
2021-03-24T05:52:17+01:00: successfully finished backup
2021-03-24T05:52:17+01:00: backup finished successfully
2021-03-24T05:52:17+01:00: TASK OK

Some things are good, some are not.
Deduplication factor = 99% => this is good! (and it seems accurate, because my overall storage usage on PBS did not increase)
Backup time = 4h41 => this is not good (a normal backup took less than 2 hours before the problem).
Upload size = 64% (photos), 61% (virginie) and 100% (catalog) => this is the really-not-good part.

If everything is deduplicated, why is proxmox-backup-client uploading data at all?
I'll run a new backup right now, just to see whether this behavior only happens on the first backup with 1.0.11-1 or on every backup.
If you have any hints... I'm open :)
 
the upload size reflects that there is a delta compared to the last snapshot (which was made with the buggy version?). the deduplication factor reflects that, of the chunks uploaded, almost all of them were already on the server (just not referenced by the last snapshot, so we only found out after uploading that we already had them).

e.g., if you look at the following:
Code:
2021-03-24T05:27:44+01:00: Upload statistics for 'photos.pxar.didx'
2021-03-24T05:27:44+01:00: UUID: 4c5e8d803e67429ca4fb6f7e9315e53e
2021-03-24T05:27:44+01:00: Checksum: 8fa78ea3e9a43335822be8afccb093311b360886e3b39511d5095cdabfd91e0d
2021-03-24T05:27:44+01:00: Size: 332049471290
2021-03-24T05:27:44+01:00: Chunk count: 87115
2021-03-24T05:27:44+01:00: Upload size: 215657613523 (64%)
2021-03-24T05:27:44+01:00: Duplicates: 40482+46632 (99%)
2021-03-24T05:27:44+01:00: Compression: 98%

that pxar archive consists of 87k chunks. 40k (the first number in the duplicates line) of those were identical to chunks referenced by the previous snapshot. the other not quite 47k (total - client duplicates) had to be uploaded since the client only checks the very last snapshot for duplicates. of those almost 47k, 46.6k (the second number in the duplicates line) were detected as duplicates by the server (they existed in the datastore, likely because of other snapshots from the same group, but possibly also in snapshots in other groups in the same datastore). so yes, you see almost complete deduplication, but only not quite half of that was caught by the client this time because the previous snapshot was made with the buggy version.
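the accounting described above can be checked directly from the numbers in the log line; a quick sketch:

```shell
#!/bin/sh
# Verify the duplicate accounting for the 'photos.pxar.didx' log line:
#   Chunk count: 87115, Duplicates: 40482+46632 (99%)
total=87115; client_dup=40482; server_dup=46632
uploaded=$((total - client_dup))   # chunks the client had to send: 46633
new=$((uploaded - server_dup))     # chunks the server had never seen: 1
echo "uploaded=$uploaded new=$new"
awk -v t=$total -v c=$client_dup -v s=$server_dup \
    'BEGIN { printf "dedup=%.4f%%\n", 100 * (c + s) / t }'
```

so of 87115 chunks, only 1 was genuinely new to the datastore, even though 46633 had to travel over the wire.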

your next backup should see a normal upload size again (and probably also a return to roughly the previous run time, assuming the extra uploading is what caused the increase), since it will deduplicate against this new snapshot on the client side already.
 
