Keep in mind that PBS really needs low latency. Veeam uses big incremental image files; PBS chops everything into small chunks (a few KB up to a maximum of 4 MiB each). So if you back up a 4 TB disk it has to hash and write roughly a million small files (4 TiB / 4 MiB = 1,048,576 chunks at the maximum chunk size). NFS/SMB isn't great at handling millions of small files.
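If you already have a datastore you can see this for yourself; a quick sketch, assuming the datastore is mounted at /mnt/datastore/backup (a placeholder path, adjust to your setup):

# count the chunk files of a PBS datastore (path is an example)
find /mnt/datastore/backup/.chunks -type f | wc -l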
> Did you upgrade your PVE from 7.0 to 7.1 recently? I got that error after the upgrade because I didn't refresh the browser cache (CTRL+F5) afterwards. Because of that I was using the cached PVE 7.0 web UI with the PVE 7.1 backend, which resulted in broken backup configs, since PVE switched to a new backup scheduler with 7.1. So best do a CTRL+F5 and then create that backup task again.

I did an upgrade, but I was already on 7.1.
I have tried clearing my browser cache, restarting my computer, and recreating the backup tasks. Same error.

> Please send your:
> pveversion -v

The backup job still ran at its normal time, I just can't run it manually.
pve-manager: 7.1-9 (running version: 7.1-9/0740a2bc)

The bug is fixed in 7.1-10.
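Since the fix shipped as a regular package update, pulling it in is just (assuming the usual Proxmox repositories are configured):

# update the package lists and upgrade
apt update
apt dist-upgrade   # or: apt full-upgrade
# verify the result
pveversion         # should now report pve-manager 7.1-10 or later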
> It seems the CPU change to host has made the biggest improvement. Backup jobs now run at 500-800 Mbit/s.

Thanks for your work in this thread and for all the support everyone here contributed; this was incredibly interesting and informative to read, you guys! Has this been documented in the wiki by chance? It might be helpful as a troubleshooting section.
I can confirm that changing the CPU setting of the PBS VM to "host" raised the TLS value from 43 MB/s to 275 MB/s:
Before:
Uploaded 52 chunks in 5 seconds.
Time per request: 98437 microseconds.
TLS speed: 42.61 MB/s
SHA256 speed: 231.30 MB/s
Compression speed: 373.42 MB/s
Decompress speed: 621.15 MB/s
AES256/GCM speed: 1409.60 MB/s
Verify speed: 263.64 MB/s
After:
Uploaded 330 chunks in 5 seconds.
Time per request: 15269 microseconds.
TLS speed: 274.69 MB/s
SHA256 speed: 264.81 MB/s
Compression speed: 384.39 MB/s
Decompress speed: 639.13 MB/s
AES256/GCM speed: 1399.95 MB/s
Verify speed: 266.21 MB/s
This results in a backup speed of about 180 MByte/s over the 2x 10 GBit bond.
A restore to the same local-zfs (NVMe, where the PBS VM also lives) reaches approx. 60 MByte/s.
A restore over the 2x 10 GBit bond to the local-zfs of the second server (also NVMe) reaches approx. 72 MByte/s.
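For anyone who wants to reproduce this: the CPU type can be changed in the GUI under the VM's Hardware → Processors, or via the CLI, and the numbers above come from the built-in PBS benchmark. A rough sketch, where the VM ID 100 and the repository string are placeholders:

# on the PVE host: switch the PBS VM's CPU type to "host"
# (the VM needs a full stop/start afterwards to pick it up)
qm set 100 --cpu host
# then run the benchmark against the datastore to get the TLS number
proxmox-backup-client benchmark --repository root@pam@192.168.1.20:store1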
> Hmm, something seems off with this.

I have 2 nodes in a cluster with the same specs:

> - What model of CPU?
> - How much memory does PBS and PVE have? (How much is used on both?)

PVE: 320 GB each node.
> - How is the NIC bonded? LACP? Round robin? etc.

Linux bond with round-robin; it manages over 900 MByte/s while migrating a container over the "insecure" migration connection.

> - If you run iperf3 from your PBS host to your PVE hosts, what is the average throughput? (>~18000 Mbps is acceptable given a 20 Gbps link in LACP.)
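In case it helps, a minimal sketch of that iperf3 test (10.0.0.10 stands in for the PBS address):

# on the PBS host: start the server side
iperf3 -s
# on a PVE host: 30-second test with 4 parallel streams
iperf3 -c 10.0.0.10 -t 30 -P 4

With a round-robin bond it is worth comparing a single stream (-P 1) against several, since balance-rr can reorder packets and hurt single-stream TCP throughput.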
> - When you say NVMe, do you mean consumer QLC, consumer MLC like a Samsung 980 Pro, or enterprise like a Kioxia U.2 drive?

One 7.68 TB 2.5" Micron 7450 Pro (enterprise datacenter 24/7 U.3 NVMe, 1M IOPS, 14,000 TBW, PCIe 4.0) per Proxmox node.

> - What is the workload on those NVMes? Does the Summary page on your nodes show any io_wait?

Nothing else, only testing, nothing productive.
> - What is the storage configuration? (RAIDz mirror with 2, 3, 4, or 6 disks, or RAIDz1/2/3 with how many disks?)

Only the single NVMe per node, each with Proxmox installed on ZFS RAID0 via the Proxmox ISO installer.
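For completeness, the pool layout can be confirmed with the standard ZFS tools ("rpool" is the default pool name the Proxmox ISO installer creates):

# show vdev layout and health
zpool status rpool
# capacity, fragmentation and usage overview
zpool list rpool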
I would be happy to get a restore at about 1 GByte/s, as in migration mode.

Difficult to tell why you are only getting 180 MBps backup and 72 MBps restore on a 2500 MBps link (20 Gbps / 8 = 2.5 GBps). Note: you will never reach 2500 MBps; more likely 2250 MBps or even 2000 MBps, given networking, application, and encryption overhead. That's also assuming the link isn't shared and being used by other traffic.
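If you want to rule out the disks themselves, a quick sequential fio test on the datastore path might help; /mnt/datastore and the sizes here are just placeholders:

# sequential write test with 4 MiB blocks (roughly PBS chunk size)
fio --name=seqwrite --filename=/mnt/datastore/fio.test --size=8G --rw=write --bs=4M --direct=1 --ioengine=libaio
# remove the test file afterwards
rm /mnt/datastore/fio.test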