[SOLVED] offsite backup (once again)

Hi,

For a couple of months now we have had a PBS, and it works like a charm!
It's dedicated server hardware with a bunch of U.2 NVMe disks, one datastore with five namespaces and different retention strategies. In total about 42 TB of backup data, with an upward trend. :)
Now we would like to do some kind of offsite backup. There is some old server hardware with a bunch of 10 TB HDDs, but buying new hardware would also be an option.
I tried syncing the whole datastore with PBS, but as expected the HDDs are too slow for this. Even playing around with a (daily) rsync showed that there are too many differences to handle. And backing up the whole datastore offsite isn't necessary at all. Is there any chance to do some kind of replication along the lines of "just sync the latest version of everything"? Or do you have other ideas for how we can get some kind of offsite backup?

Thanks a lot and many greets
Stephan
 
there are patches for this (to add a "transfer-last X" option), but they are not applied/released yet.

https://bugzilla.proxmox.com/show_bug.cgi?id=3701 tracks this (and some related changes as well ;))

as a workaround it would be possible to do the following:
- create a new "sync" namespace for each namespace you want to sync
- sync each pair of namespaces locally (using a remote pointing to the same PBS instance)
-- this sync is really cheap both I/O- and space-usage-wise, since all the chunks are already there
- prune the "sync" namespaces so each group only contains a single snapshot
- now sync the "sync" namespaces offsite

but that is not something that you can reliably implement using the built-in scheduling, since you basically have three interdependent jobs for each namespace
- sync NS -> NS-sync
- prune NS-sync
- ssh remote -- sync NS-sync NS-offsite

so you'd need to script that yourself using proxmox-backup-manager or the API :-/
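the three steps above could be chained roughly like this. note this is just a sketch: the remote names, datastore names, namespace names and the ssh alias are placeholders, and the exact option names (`--ns`, `--remote-ns`, the JSON output format) should be double-checked against the `proxmox-backup-manager` / `proxmox-backup-client` versions you actually run.

```shell
#!/bin/sh
# Sketch of the per-namespace workaround: local sync -> prune -> offsite pull.
# All names below (remotes, datastore, ssh alias) are example placeholders.
set -eu

REMOTE_SELF="local-pbs"       # a PBS "remote" entry pointing at this very instance
OFFSITE_HOST="pbs-offsite"    # ssh alias for the offsite PBS box
DATASTORE="store1"

for NS in ns1 ns2 ns3; do
    # 1) sync NS -> NS-sync locally; cheap, since all chunks already exist
    proxmox-backup-manager pull "$REMOTE_SELF" "$DATASTORE" "$DATASTORE" \
        --remote-ns "$NS" --ns "${NS}-sync"

    # 2) prune each group in NS-sync down to its newest snapshot
    #    (group enumeration / jq parsing assumed -- adapt to your output)
    proxmox-backup-client list --repository "$DATASTORE" \
        --ns "${NS}-sync" --output-format json \
      | jq -r '.[] | "\(."backup-type")/\(."backup-id")"' \
      | while read -r GROUP; do
            proxmox-backup-client prune "$GROUP" --repository "$DATASTORE" \
                --ns "${NS}-sync" --keep-last 1
        done

    # 3) trigger the offsite pull from the remote box
    ssh "$OFFSITE_HOST" proxmox-backup-manager pull "$REMOTE_SELF" \
        "$DATASTORE" offsite-store --remote-ns "${NS}-sync" --ns "$NS"
done
```

running it from cron (after your regular backup window) would give you the "latest version only" offsite copy until the transfer-last patches land.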
 
well ;) as always - once it's (positively) reviewed, it will be in git. and then usually a few days or a week or two after, it will be in a package. and then a few days or a week or two after, that package will find its way from internal testing to the public repos (test first, then no-sub, then enterprise, with the migration delays depending on the scope of changes and interdependencies).
 
That would be good - handling syncs with hundreds (soon thousands) of TB is a little bit "challenging" ;)
 
