PBS Performance improvements (enterprise all-flash)

Thanks for the additional numbers! That does indeed look like a severe bottleneck that we should get to the bottom of.
I suspect there is something in the compression pipeline.
Because, as I mentioned in the bug report, even though a PBS backup works differently from a local-storage backup, the limitations are exactly the same.

In other words: the speeds are identical between local and PBS as soon as compression is enabled (no matter which compression algorithm or how many zstd threads).
With compression disabled, it feels like every limitation is removed; backup speeds go up to 5 GB/s here.
With compression enabled (no matter which algorithm, multithreaded or not), local or PBS, I can't pass 1 GB/s.

And in my opinion it has nothing to do with clock speeds either, because I monitored CPU utilization during backups (local and PBS) and didn't see any core reaching 100%.
But I'm not entirely sure, because on the other hand clock speeds are definitely important: servers that can reach higher clock speeds give me faster backup speeds.
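To separate the compression stage from everything else, it can help to benchmark a single compression stream on its own. The sketch below is only an illustration: it uses Python's stdlib `zlib` as a stand-in (PBS uses zstd, which is much faster, so absolute numbers will differ), but the shape of the experiment is the same: one stream, one core, compressible vs. incompressible input.

```python
import os
import time
import zlib

# Rough probe of single-stream compression throughput.
# NOTE: zlib is only a stdlib stand-in here; PBS uses zstd,
# so the absolute MiB/s figures are not comparable to PBS numbers.

def throughput_mib_s(data: bytes, level: int) -> float:
    """Compress `data` once on one thread and return MiB/s."""
    start = time.perf_counter()
    zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    return len(data) / (1024 * 1024) / elapsed

random_payload = os.urandom(16 * 1024 * 1024)  # incompressible, worst case
zero_payload = bytes(16 * 1024 * 1024)         # trivially compressible

for name, data in (("random", random_payload), ("zeros", zero_payload)):
    print(f"{name}: {throughput_mib_s(data, 3):.0f} MiB/s at level 3")
```

If a single stream caps out well below your disk and network speeds, that alone can explain a backup ceiling regardless of how many idle cores the box has.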

It took me three days of trying to find the bottleneck when I created my bug report...
I created PBS VM instances and even reinstalled one of my two Genoa servers with PBS, to have the fastest possible PVE+PBS combination, tried local storage and every tuning option possible for compression, etc...
Monitored CPU and I/O etc...
I couldn't find the bottleneck and gave up.

On the flip side, it seems I'm still achieving the fastest PBS backup speeds at 1 GB/s, even with SAS (HDD) drives on PBS, while others using NVMe drives have trouble reaching 500 MB/s.

Cheers
 
Just as bad, or worse.

Code:
Upload image '/dev/mapper/pve-vm--100--disk--0' to 'root@pam@10.226.10.10:8007:pbs-primary' as tomtest.img.fidx
tomtest.img: had to backup 62.832 GiB of 80 GiB (compressed 42.404 GiB) in 673.35s
tomtest.img: average backup speed: 95.552 MiB/s
tomtest.img: backup was done incrementally, reused 17.168 GiB (21.5%)
Duration: 676.53s
End Time: Tue Jul  9 12:59:22 2024
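As a side note, the numbers in that log are self-consistent: the reported average counts only the data that actually had to be backed up (the reused 17.168 GiB is excluded), divided by the transfer time. A quick check:

```python
# Verify the reported average from the log above:
# 62.832 GiB backed up in 673.35 s, reused chunks excluded.
backed_up_gib = 62.832
duration_s = 673.35
avg_mib_s = backed_up_gib * 1024 / duration_s
print(f"{avg_mib_s:.3f} MiB/s")  # matches the reported 95.552 MiB/s
```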
FWIW, I very likely found the issue with this particular invocation, and might have some test package (if you are willing!) to try for image backups using proxmox-backup-client.
 
Any news here? We also have a performance issue with NVMe disks on Proxmox Backup Server.
And what will you tell us?
We only get about 10 MB/s write throughput, and we already use enterprise NVMe disks in a RAID-10-like ZFS setup on fast, new hardware.
 
There was some change in git regarding the "input buffer size".
The question is whether this was changed generally in later PBS versions, or whether it is something we must change manually to see if it solves our problem.
 
There was some change in git regarding the "input buffer size"
I don't know what you are referring to. Do you have a link to the change you mention? Also, resurrecting a post that is over a year old without providing details isn't that useful. In the meantime there have been improvements for both restores and verification tasks, and you don't mention what kind of "performance issue" you have.

The changes mentioned above [1] were released with PBS 3.3 [2].

[1] https://forum.proxmox.com/threads/p...ments-enterprise-all-flash.150514/post-688915
[2] https://pbs.proxmox.com/wiki/Roadmap#Proxmox_Backup_Server_3.3
 
From my part, regarding the mentioned bug report...

Nothing has been fixed since this thread was opened. PBS backups are still single-core TLS limited.

That was an eternity ago, and it is still the same issue today.
9374F -> 1 GB/s limit
Xeon 4210R -> 200 MB/s limit

So absolutely nothing discussed in this thread has been fixed. I think people just gave up on this.
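The "one stream pinned to one core" ceiling is at least easy to probe in isolation. The sketch below is an illustration, not a PBS benchmark: it measures what a single TCP connection can move over loopback. Python's stdlib cannot easily set up TLS without certificates, so this covers only the plain path; a real TLS stream would cap out lower on the same core.

```python
import socket
import threading
import time

# Single-stream loopback throughput probe (illustrative sketch only).
# Shows how to measure what ONE connection can push through.
CHUNK = 1024 * 1024          # 1 MiB send buffer
TOTAL = 256 * 1024 * 1024    # move 256 MiB in total

def sink(server_sock, result):
    """Accept one connection and drain TOTAL bytes from it."""
    conn, _ = server_sock.accept()
    received = 0
    while received < TOTAL:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    conn.close()
    result["received"] = received

server = socket.socket()
server.bind(("127.0.0.1", 0))   # ephemeral port on loopback
server.listen(1)
result = {}
receiver = threading.Thread(target=sink, args=(server, result))
receiver.start()

client = socket.create_connection(server.getsockname())
payload = b"\x00" * CHUNK
start = time.perf_counter()
sent = 0
while sent < TOTAL:
    client.sendall(payload)
    sent += CHUNK
client.close()
receiver.join()
elapsed = time.perf_counter() - start
server.close()

print(f"single stream: {result['received'] / (1024**2) / elapsed:.0f} MiB/s")
```

Comparing a probe like this against the observed backup ceilings on both CPUs would show whether the limits scale with single-core speed, which is what you would expect if one stream's processing really is the bottleneck.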