Transfer Latest / Bug?

Jun 28, 2023
Hello,
we're having problems with our PBS setup.

Setup:
We have a PBS server that holds 30 backups from multiple VMs (daily backup job plus a prune job with Keep Last 30).
We have another PBS located offsite. On the offsite PBS we added a sync job with "Transfer Last" 12 to pull from the first PBS.
We also added a prune job on the offsite PBS with "Keep Last" 30 for this namespace.

What we expect:
The sync will initially transfer the 12 latest backups. After that, day by day, the version count on the offsite server should increase up to 30. From then on, the daily prune will remove the oldest version each day, so the count should stay between 30 and 31, depending on whether prune has already run that day.
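The expectation above can be illustrated with a small simulation (a hypothetical toy model, not PBS code): start with the 12 newest snapshots synced, add one new snapshot per day, and prune to Keep Last 30 after each day's sync.

```python
def simulate(days, initial=12, keep_last=30):
    """Toy model of the expected offsite snapshot count for one VM group."""
    count = initial
    history = []
    for _ in range(days):
        count += 1                     # daily sync pulls one new snapshot
        count = min(count, keep_last)  # daily prune keeps only the newest 30
        history.append(count)
    return history

# Count grows 13, 14, ... and should plateau at 30 after 18 days.
print(simulate(30))
```

Under these assumptions every group in the namespace should converge to the same steady-state count, which is why the spread of 7-10 versions is surprising.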

What we actually see after weeks of running:
Different version counts for VMs in the same namespace. About 80% of the VMs have 7 versions, 15% have 9 versions, and 5% have 8-10 versions.

How can this be explained?
The sync log shows the following after triggering it manually:
Code:
2024-06-05T13:03:07+02:00: skipped: 30 snapshot(s) (2024-05-06T00:00:23Z .. 2024-06-04T00:00:26Z) - older than the newest local snapshot
2024-06-05T13:03:07+02:00: re-sync snapshot vm/107/2024-06-05T00:00:22Z
2024-06-05T13:03:07+02:00: no data changes
2024-06-05T13:03:07+02:00: percentage done: 14.29% (2/14 groups)
2024-06-05T13:03:07+02:00: skipped: 30 snapshot(s) (2024-05-06T00:00:02Z .. 2024-06-04T00:00:11Z) - older than the newest local snapshot
2024-06-05T13:03:07+02:00: re-sync snapshot vm/109/2024-06-05T00:00:04Z
2024-06-05T13:03:07+02:00: no data changes

Are we missing something fundamental?

EDIT: All our PBS instances are running version 3.2-2.
 
you'd have to share a bit more detail for us to be able to debug this.
especially which snapshots exist on both sides for an example group, and the log of the last sync job that ran normally

also the prune settings on both target & source

are you sure a prune setting isn't accidentally set too low somewhere?
 

Sure, no problem.
PBS-01 = Source
PBS-02 = Offsite

Prune Settings Source
prune_source.jpg


Prune Settings Offsite
prune_offsite.jpg

Snapshots Source
snapshot_source.jpg

Snapshots Offsite
snapshot_offsite.jpg

Syncjob
syncjob.jpg

Logfile attached.


In the past we used the same setup on the same servers, except without the "Transfer Last" option. It worked like a charm before, without any problems.
 

ok just fyi, on sync we never pull 'older' snapshots, even if they don't exist locally. so the way it currently stands, each sync adds one additional snapshot,
and prune will not remove any until there are 30 of them

with the given log/screenshots there is no way to tell how this came about
how did you start this out? were your prune settings possibly different at the beginning?

you could (temporarily) remove the snapshots on the offsite pbs; a sync would then transfer the last 12 again (since there is no longer a newer local snapshot that causes the older ones to be skipped)
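The skip rule described here can be sketched as follows (a hypothetical model using plain integers as snapshot timestamps, not the actual PBS implementation): snapshots older than the newest local one are always skipped, and of the remaining candidates at most "Transfer Last" are pulled.

```python
def snapshots_to_pull(remote, local, transfer_last=12):
    """Which remote snapshots a sync would transfer under the rule above.

    Snapshots are modeled as sortable timestamps. Anything not newer than
    the newest local snapshot is skipped; of the rest, only the newest
    `transfer_last` are pulled.
    """
    newest_local = max(local) if local else None
    candidates = sorted(t for t in remote
                        if newest_local is None or t > newest_local)
    return candidates[-transfer_last:]

remote = list(range(1, 31))  # 30 snapshots on the source, days 1..30

# Offsite already has day 30: nothing older is ever pulled.
print(snapshots_to_pull(remote, local=[30]))  # -> []

# After removing all offsite snapshots, a sync pulls the last 12 again.
print(snapshots_to_pull(remote, local=[]))    # -> [19, ..., 30]
```

This also shows why the workaround of emptying the offsite group works: with no local snapshots left, nothing is skipped, and the "Transfer Last" window applies cleanly.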
 
I put myself into a similar situation. I brought up a remote server and had it start pulling all the jobs. After several servers I noticed it was syncing one server at a time, oldest snapshots first within each server, and completing all snapshots before moving on to the next server.

All snapshots are important, but the most recent is generally the most important, so I added a job to get the last 1 snapshot. That worked, but now half the machines don't have older snapshots. It would be nice to have an option to include older snapshots (perhaps with a warning in the docs or an info bubble if the retention is shorter, etc.). My main concern is that the behavior of always skipping older snapshots was unexpected and doesn't seem to be documented on the sync job help page.

Deleting the group works rather well. Since the recent data had already been transferred and not yet garbage collected, it rebuilt all the missing and deleted snapshots fairly quickly.
 
I had the same issue. I have a NAS storing the backups from 2 Proxmox hosts, and I wanted an off-site copy of this backup repository. I thought I would do a quick sync with a history of 2 per VM/CT and then do a full sync (removing the transfer limit) to get the older ones. I couldn't figure out why it didn't work until I found this thread.

I guess the older ones are not as important from an off-site perspective if I have the newer ones. In the end, it will work itself out.
 
