Slow throughput on PBS running as a VM behind an FTTC connection

krikey

Well-Known Member
I have a locally hosted lab setup running PVE 7.4-15, which in turn runs a PBS 3.0-1 VM called "backup.local". PVE is also running a few other VMs and LXCs.

I have a relatively slow 60 Mbit/20 Mbit FTTC connection to my lab. I've tested this and verified that I'm getting close to these speeds (52 Mbit down / 16 Mbit up) using speedtest-cli on the PBS instance.
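(For reference, that was just a plain run of the client on the PBS VM, something along the lines of:

speedtest-cli --simple

with no special options.)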

If I back up any of the local VMs or LXCs to backup.local, I'm getting a throughput of ~100 MiB/s according to the log; admittedly that traffic never really leaves the physical server, so that seems fine.

If I port-forward 8007 through my local router, I can then add backup.local as PBS storage on a PVE instance hosted in a datacentre; let's call this pve.remote. This datacentre provides a 1 Gbps/1 Gbps connection, again tested.
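(For context, backup.local was added on pve.remote roughly like this; the storage ID, hostname, datastore, credentials and fingerprint below are placeholders rather than my real values:

pvesm add pbs backup-local --server <public-hostname-or-ip> --datastore <datastore> --username root@pam --password <password> --fingerprint <pbs-cert-fingerprint>

The same thing can be done through Datacenter > Storage > Add > Proxmox Backup Server in the GUI.)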

When I try to run a backup from pve.remote to backup.local, I'm only getting ~900 KiB/s, which seems quite a bit lower than the FTTC connection should allow.

When I run backups between pve.remote and a separate PBS server in the datacentre, I get good, expected speeds.

Looking at the status of the local PVE and backup.local, there's no evidence of a bottleneck that I can see: network, CPU and memory all look OK. According to the local lab router, it's only using ~7,000 Kbps out of a theoretical max of 53,000 Kbps while a backup is running, and some of that is other traffic not related to PVE or PBS.

Can anyone help with identifying where the issue might be?
 
WAG: network latency, router load, or mismatched MTU. Run iperf between the datacentre and the local server.
 
Using the iperf3 commands below on both the server and client helped me narrow the issue down to one specific datacentre server, as the test against the other datacentre server showed much more reasonable speeds. I'm in touch with the DC now to see if they can shed any more light on it, and I'll update this thread if there's anything useful to add.

On the server (receiving the traffic):

iperf3 -s -p [port, if you don't want to use the default 5201]

On the client (sending the traffic):

iperf3 -c [server IP address] -p [same port as above] --bidir

If either end is behind a firewall or NAT, don't forget to open up the chosen port.
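For example, if the receiving side happens to use ufw (using the default port here; swap in whatever port you picked):

ufw allow 5201/tcp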
 
how is the latency of your link? the actual backup and restore flows use HTTP/2.0, which is unfortunately quite latency sensitive..

could you also maybe try `proxmox-backup-client benchmark --repository ....` both from the client to the server and on the server itself? that should give you a rough ballpark of the HTTP/2.0 and crypto performance that is relevant for PBS operations..
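the repository string follows the usual user@realm@host:datastore format, so roughly (hostname and datastore are placeholders):

proxmox-backup-client benchmark --repository root@pam@backup.example.com:store1

running it without --repository should skip the TLS upload test and only do the local hash/crypto benchmarks, which is still useful as a baseline..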
 
I've run the benchmark as suggested and the results all look OK; at least they're faster than the throughput I'm seeing to my specific IP address.

[screenshot: proxmox-backup-client benchmark results]
 
so this was between pve.remote and backup.local (the problematic combination)? could you also post a backup task log for that combination?
 
Here's the test between pve.remote1 (the slow one) and backup.local

[screenshot: benchmark between pve.remote1 and backup.local]

and here's the test between pve.remote2 and backup.local

[screenshot: benchmark between pve.remote2 and backup.local]
 
ah, okay. so yeah, it's definitely HTTP/2.0 that is the cause. either somebody is throttling that along the way (intentionally or unintentionally), or the link has a high enough latency to negatively affect throughput..
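as a quick sanity check, you can estimate the per-stream ceiling from the round-trip time alone (hostname is a placeholder):

ping -c 10 backup.example.com
# a single TCP/HTTP2 stream is roughly limited to window-size / RTT,
# e.g. a 64 KiB window over a 60 ms path works out to only ~1 MiB/s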
 
