Slow restore performance

then you'll prune more than you want, but you can delegate the pruning to the PBS doing the sync in this case to avoid that.



I am not sure you know how PBS works under the hood, but I can't really parse what you wrote above ;)

Ok, so it doesn't have a "keep at least until duplicated off-site" option. That's somewhat common on some systems, but most of those use a push model for replication, while PBS is more of a pull model.

I don't completely know how it works under the hood, but the data structures are pretty obvious and so are most of the design choices. The only surprise so far was the pull (rather than push) replication between PBS systems, but I am sure there are others; I just don't know what I don't know yet...

Sorry for my ramblings, but to rephrase...
1. I highly suspect that your statement "doesn't solve the issue that the transfer between local and remote over http 2.0 will be slower than expected" is incorrect, and that this setup will greatly improve it. (I'll set up and test replication to a remote PBS, as I don't want any surprises before ordering hardware, but it will be a while.)
2. Reasons for #1:
a. There will already be a recurring backup to the remote site, so only the chunks new since the last backup will need to be transferred between sites, and depending on the schedule they might have already transferred. (Rough sketch of how I plan to verify that below.)
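A rough sketch of that check, with placeholder remote/datastore names and the command syntax from memory (verify against proxmox-backup-manager help pull before relying on it):

# one-off pull from an already-configured remote into a local datastore;
# running it a second time right away should transfer almost nothing,
# since the chunks from the first run already exist locally
proxmox-backup-manager pull pbs-remote remote-store local-store
proxmox-backup-manager pull pbs-remote remote-store local-store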

I hope you are not suggesting the sync is still really slow for incremental backups between two PBS systems when the changed chunks are a relatively small % of the VM.
 
of course the sync will be done quicker if less is to be transferred - but the same is true of the backup itself (it only transfers new chunks as well).
 
of course the sync will be done quicker if less is to be transferred - but the same is true of the backup itself (it only transfers new chunks as well).
Right, but the original question was about restore.

So
orig --> PBS1 -------slow-link-----> machine

always slow... no incremental for restore....

orig --> PBS1 --------slow-link-------> PBS2 -----> machine

Should be significantly faster, even if you have to force a sync of a more recent backup first (assuming a base was already at PBS2), but it does require 2 PBS servers instead of 1...
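For reference, that second topology is just the standard remote + sync job setup configured on PBS2, so it pulls from PBS1 in the background and the machine restores from the PBS sitting next to it. A rough sketch (hostnames, datastore names, token and fingerprint are placeholders, option names from memory - double-check against the docs or just use the GUI):

# on PBS2: register PBS1 as a remote, then create a scheduled pull job
proxmox-backup-manager remote create pbs1 \
    --host pbs1.example.com \
    --auth-id 'sync@pbs!token1' \
    --password 'SECRET' \
    --fingerprint '<PBS1 certificate fingerprint>'
proxmox-backup-manager sync-job create pull-from-pbs1 \
    --remote pbs1 --remote-store store1 \
    --store store2 --schedule hourly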
 
Not directly related to this topic, but I set up PBS ---> PBS to a remote location to see what the speed is like. It seems to be moving along. (The remote is about 1633 miles away.)

# ping pmbackup.redacted -s 15000 -c 100 -q -A
PING pmbackup.redacted (10.0.2.30) 15000(15028) bytes of data.

--- pmbackup.redacted ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 4503ms
rtt min/avg/max/mdev = 39.443/44.895/85.951/6.698 ms, pipe 2, ipg/ewma 45.484/43.210 ms


Interestingly, it's pulling them over HTTP 1.1 instead of HTTP 2, and it seems to mainly use one connection at a time. Surprised there isn't more, or settable, concurrency. It appears you could set up multiple sync jobs if you want to increase bandwidth utilization.
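If anyone wants to try the multiple-jobs idea, splitting by group filter should give roughly that - assuming your PBS version has the group-filter option; names are placeholders, syntax is from memory, and I haven't verified the jobs actually run concurrently:

# two sync jobs against the same remote, split by backup type
proxmox-backup-manager sync-job create pull-vms --remote pbs1 --remote-store store1 --store store2 --schedule hourly --group-filter 'type:vm'
proxmox-backup-manager sync-job create pull-cts --remote pbs1 --remote-store store1 --store store2 --schedule hourly --group-filter 'type:ct'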

I do have a load balancer in between, which is how I can see it's using HTTP/1.1 instead of 2.
 
it's using HTTP 1.1 for listing groups/snapshots, the actual transfer of chunks happens over a connection upgraded to HTTP 2.0 (there is no other way to do it).
 
it's using HTTP 1.1 for listing groups/snapshots, the actual transfer of chunks happens over a connection upgraded to HTTP 2.0 (there is no other way to do it).

The PVE -> PBS connections all come in as HTTP/2.0.
The PBS -> PBS connections all come in as HTTP/1.1, and I see it switches protocols with status code 101 (but the load balancer isn't logging what it switches to). Not sure why PVE -> PBS can start out with HTTP 2.0 while PBS -> PBS has to switch. Not important...

For completeness, I should test restore process to see which protocols it uses.

That said, so far performance is acceptable for our requirements, but we only have about 10 VMs (and only 2 active). So that isn't necessarily a good indication of what it will be like if we move to Proxmox with several hundred VMs going through PBS-to-PBS sync, but based on initial estimates I think it will scale fine for us.

Hoping to be able to commit to Proxmox and PBS moving forward at the end of the week. Finalizing testing on a couple of other alternatives as we decide what to do about VMware being broken by Broadcom.
 
you can trust me - all backup and restore/reader sessions (which includes sync/pull, fuse mounting/block dev mapping, file-restore except when done on the PBS server itself via the file browser feature, regular restore, live-restore, ..) use the same mechanism and are going over http 2.0 (and the connection always starts out as 1.1 and is then upgraded). they all use the same client code under the hood.

having one PBS per location with syncing already helps alleviate most of the issues described in this thread, not by virtue of being less affected by latency, but by virtue of decoupling the penalty associated with higher latency from the actual running of guests. fleecing can also help if your backup storage is slower than your guest I/O, since it allows you to sort of "buffer" the backup I/O on a faster storage (e.g., fast local NVMe) if you have the additional capacity there.
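For reference, fleecing can be enabled per backup job in the PVE GUI (advanced options) or on the vzdump command line; a rough sketch, with a placeholder storage name and the option syntax from memory (requires a recent PVE with fleecing support):

# back up VM 100 to the PBS storage, buffering backup I/O on a fast local storage
vzdump 100 --storage pbs-store --fleecing enabled=1,storage=local-nvme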
 
then you'll prune more than you want, but you can delegate the pruning to the PBS doing the sync in this case to avoid that.



I am not sure you know how PBS works under the hood, but I can't really parse what you wrote above ;)
"but you can delegate the pruning to the PBS doing the sync in this case to avoid that."
How can we do this? How can we delegate pruning on the local PBS to the remote PBS? From what I have understood, the local PBS is not even aware of the remote PBS doing syncs.
 
