Backup speed limited to 1 Gbps?

avn

Hi,

This is how backup and restore look on a 10 Gbps network:

[Screenshot: network throughput graph during backup and restore]

As you can see, backup speed is capped at exactly 1 Gbps.
Question: why is backup speed limited, and how can I remove this limit?

PVE version: 6.2-10;
VMs on SAS storage, storage type: LVM;
Read speed from VM storage not limited (tested on sparse data);
Backups stored on NAS (CIFS or NFS, no matter) in the same network segment on the same 10 Gbps switch;
Write speed to NAS not limited (multiple hosts can write simultaneously at 1 Gbps each up to 3 Gbps);
Backup compression format: zstd.
It seems the bottleneck is vzdump itself.
 
Hi,

Backups stored on NAS (CIFS or NFS, no matter) in the same network segment on the same 10 Gbps switch;
Just to be clear: you run a PVE machine with internal storage (LVM) and a NAS machine, and both are connected to a 10 GBit/s network.
Are there any 1GBit/s connections? Are there any further network connections at all?

If your NAS is Linux-based (or similar), I would first try some iperf tests between the two machines.
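A minimal way to run such a test, assuming iperf (version 2, default TCP port 5001) is installed on both machines; the NAS address below is a placeholder:

# on the NAS (server side):
iperf -s

# on the PVE host (client side), pointing at the NAS:
iperf -c <NAS-IP> -t 10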

Greets
Stephan
 
You run a PVE machine with internal storage (LVM), and a NAS machine. Both are connected to a 10 GBit/s network.

NAS used only for backups. All VMs and hosts run from SAS storage.

Are there any 1GBit/s connections? Are there any further network connections at all?

Between hosts and NAS - no.

If your NAS is Linux-based (or similar), I would first try some iperf tests between the two machines.

The NAS is FreeNAS (based on FreeBSD), but there's no issue with the NAS. As I said, write speed to the NAS is not limited: multiple hosts can write backups at 1 Gbps each, up to 3 Gbps in total. Also, as you can see, restore speed from the same NAS is not limited to 1 Gbps.
 
What compression settings do you use? Not all of them are multi-threaded, so perhaps compression is the bottleneck. You might also want to ensure that nobody set a BW limit when you weren't looking :cool:
You're right. Only zstd compression is limited to 1 Gbps:

[Screenshot: throughput graph for the four compression test runs]

1 - lzo: max 1.5 Gbps
2 - no compression: max 2.85 Gbps
3 - pigz: max 1.7 Gbps
4 - zstd: max 0.98 Gbps

No bandwidth limits set in the GUI or vzdump.conf.
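For reference, a quick way to double-check that no limit is configured anywhere (a sketch, assuming the default config locations):

# vzdump-wide limit, if any (bwlimit is given in KiB/s):
grep -i bwlimit /etc/vzdump.conf

# per-storage limits, if any were set on the backup target:
grep -i bwlimit /etc/pve/storage.cfg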
 
So, since the last post I've enabled multithreading for zstd (set 'zstd: 0' in /etc/vzdump.conf). Now zstd can use up to half of the available cores instead of one. But the setting hardly changed anything: backup performance with zstd is still worse than with any other compression algorithm. It rarely exceeds 1 Gbps, and only by a small fraction.
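For anyone following along, the relevant part of /etc/vzdump.conf now looks roughly like this (the 'zstd' option sets the thread count; 0 means half of the available cores, 1 is the default):

# /etc/vzdump.conf
# zstd thread count: 0 = use half of the available cores
zstd: 0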

I continue to use zstd only because of its great restore speed, comparable to the restore speed of a raw dump. But it's still a disappointment. The label 'fast and good' is not so true, unfortunately.
 
Well, despite you stating that read speeds from your VM storage are unlimited - and checking that against sparse data is really no proof - I'd suggest first benchmarking the real read performance of your VM storage. Then, as already suggested, perform an iperf bench between your VM node and your NAS.
 
I didn't say read speed is unlimited. But it's clearly higher than 1 Gbps. Without compression, backup reaches up to 3 Gbps (as you can see in my previous posts). Lzo and pigz also perform ~1.5 times better than zstd.

Edit: Ok, let's check iperf:
------------------------------------------------------------
Client connecting to 10.1.8.69, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.1.8.61 port 59928 connected with 10.1.8.69 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  9.58 GBytes  8.23 Gbits/sec
 
So, if it's not the network - which clearly it is not - the issue must be somewhere in the read pipe… Have you measured the throughput you get when reading a large file from the VM storage, piping it through gzip and piping that to /dev/null? That should give you the throughput you achieve before the stream hits the network.

While it's reading from the VM storage, keep an eye on the iostats of your storage devices and your CPU/iowait.
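A sketch of that test, assuming an LV path like /dev/vms5/vm-100-disk-0 (placeholder) and that the sysstat package provides iostat:

# terminal 1: read the raw LV, compress, and discard the output
dd if=/dev/vms5/vm-100-disk-0 bs=1M status=progress | gzip > /dev/null

# terminal 2: watch device utilization and iowait while the pipe runs
iostat -xm 2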
 
root@adm61:/home/avn# dd if=/dev/vms4/vm-112-disk-0 of=/dev/null bs=1M status=progress
53406072832 bytes (53 GB, 50 GiB) copied, 146 s, 366 MB/s
51200+0 records in
51200+0 records out
53687091200 bytes (54 GB, 50 GiB) copied, 149.697 s, 359 MB/s

root@adm61:/home/avn# dd if=/dev/vms5/vm-114-disk-0 of=/dev/null bs=1M status=progress
53540290560 bytes (54 GB, 50 GiB) copied, 174 s, 308 MB/s
51200+0 records in
51200+0 records out
53687091200 bytes (54 GB, 50 GiB) copied, 177.612 s, 302 MB/s

root@adm61:/home/avn# dd if=/dev/vms5/vm-200-disk-0 of=/dev/null bs=1M status=progress
53655633920 bytes (54 GB, 50 GiB) copied, 152 s, 353 MB/s
51200+0 records in
51200+0 records out
53687091200 bytes (54 GB, 50 GiB) copied, 155.575 s, 345 MB/s

root@adm61:/home/avn# dd if=/dev/vms5/vm-212-disk-1 of=/dev/null bs=1M status=progress
53519319040 bytes (54 GB, 50 GiB) copied, 137 s, 391 MB/s
51200+0 records in
51200+0 records out
53687091200 bytes (54 GB, 50 GiB) copied, 140.699 s, 382 MB/s

root@adm61:/home/avn# dd if=/dev/vms2/vm-267-disk-0 of=/dev/null bs=1M status=progress
53473181696 bytes (53 GB, 50 GiB) copied, 147 s, 364 MB/s
51200+0 records in
51200+0 records out
53687091200 bytes (54 GB, 50 GiB) copied, 151.318 s, 355 MB/s

root@adm61:/home/avn# dd if=/dev/vms5/vm-313-disk-0 bs=1M status=progress | cat >/dev/null
53364129792 bytes (53 GB, 50 GiB) copied, 176 s, 303 MB/s
51200+0 records in
51200+0 records out
53687091200 bytes (54 GB, 50 GiB) copied, 180.358 s, 298 MB/s

root@adm61:/home/avn# dd if=/dev/vms5/vm-320-disk-0 bs=1M status=progress | gzip >/dev/null
2753560576 bytes (2.8 GB, 2.6 GiB) copied, 74 s, 37.2 MB/s^C
2650+0 records in
2649+0 records out
2777677824 bytes (2.8 GB, 2.6 GiB) copied, 74.9503 s, 37.1 MB/s

root@adm61:/home/avn# dd if=/dev/vms5/vm-342-disk-0 bs=1M status=progress | zstd >/dev/null
53457453056 bytes (53 GB, 50 GiB) copied, 278 s, 192 MB/s
51200+0 records in
51200+0 records out
53687091200 bytes (54 GB, 50 GiB) copied, 282.061 s, 190 MB/s

root@adm61:/home/avn# dd if=/dev/vms5/vm-190-disk-0 bs=1M status=progress | zstd --threads=28 >/dev/null
64290291712 bytes (64 GB, 60 GiB) copied, 320 s, 201 MB/s
61440+0 records in
61440+0 records out
64424509440 bytes (64 GB, 60 GiB) copied, 324.725 s, 198 MB/s

root@adm61:/home/avn# dd if=/dev/vms2/vm-120-disk-0 bs=1M status=progress | lzop >/dev/null
33877393408 bytes (34 GB, 32 GiB) copied, 78 s, 434 MB/s
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB, 32 GiB) copied, 81.0532 s, 424 MB/s

root@adm61:/home/avn# dd if=/dev/vms2/vm-124-disk-0 bs=1M status=progress | pigz >/dev/null
34199306240 bytes (34 GB, 32 GiB) copied, 128 s, 267 MB/s
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB, 32 GiB) copied, 130.772 s, 263 MB/s

Summary:
Read from disk: ~300-400 MB/s (seen results up to 700 MB/s, not shown here)
Read and pipe to cat: ~300 MB/s
Read and pipe to gzip: ~37 MB/s (one thread)
Read and pipe to zstd: ~200 MB/s (no matter one thread or many)
Read and pipe to lzop: >400 MB/s
Read and pipe to pigz: ~260 MB/s

All tests were done on real VM disks. The disks belong to unused VMs and each disk was tested only once, so there's no caching involved.
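To take disk I/O out of the picture entirely, one could also compress a RAM-resident sample and time it (a sketch; the LV path, sample size, and compression level 3 are placeholders/assumptions):

# copy a 4 GiB sample of one disk into RAM-backed storage
dd if=/dev/vms5/vm-100-disk-0 of=/dev/shm/sample.img bs=1M count=4096

# compress the in-memory sample, single- and multi-threaded
time zstd -3 -T1 -c /dev/shm/sample.img > /dev/null
time zstd -3 -T0 -c /dev/shm/sample.img > /dev/null

rm /dev/shm/sample.img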
 
Yeah… this is strange… it looks like you've got everything in place for achieving better throughput when writing to your FreeNAS. I am kind of baffled… although it really looks like vzdump is the culprit. Have you tried backing up without compression?
 
