Improve restore speed with Proxmox Backup Server

jar2425

New Member
Jan 24, 2025
Hello, I would like to check whether there's a way to improve restore speed on my Proxmox VE host when restoring backups made with Proxmox Backup Server.

I have a system with Proxmox Backup Server 3.2, where the datastore is a volume mounted from a NAS via NFS. The NAS has a RAID 1 setup with two disks (2x SATA HDDs) and is connected to the network through a 4-port 1 Gb LACP bond. My Proxmox server is connected to the backup network through a single 1 Gb port.

When restoring backups, I see that none of the resources reach 100% utilization (neither on the NAS nor on the Proxmox Backup Server), and the maximum transfer rate I achieve is 46 MB/s (372 Mbps).

Do you have any suggestions on how I can improve this transfer rate or where I should start looking?
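For what it's worth, a quick back-of-the-envelope check (plain line-rate arithmetic, ignoring protocol overhead) suggests the single 1 Gb port itself is not the limit here:

```python
link_mbps = 1000      # single 1 Gb/s port on the Proxmox host
observed_mb_s = 46    # measured restore throughput from above

ceiling_mb_s = link_mbps / 8             # 125 MB/s raw line rate
utilisation = observed_mb_s / ceiling_mb_s
print(f"link ceiling ~{ceiling_mb_s:.0f} MB/s, "
      f"restore uses {utilisation:.0%} of it")
# prints: link ceiling ~125 MB/s, restore uses 37% of it
```

So roughly two thirds of the wire is idle, which points at storage latency rather than bandwidth.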

Thanks in advance!
 
Do the PBS and the NAS have weak CPUs?
I don't think so... The PBS VM has 8 virtualized cores (Xeon Gold 5320), CPU usage stays under 20% at all times, and PBS has 8 GB of RAM.

The NAS has an AMD Ryzen R1600, and its usage is always under 10%.
 
NAS via NFS. The NAS has a RAID 1 setup with two disks (2x SATA HDDs)

Sorry, but there is NO way to get good performance with this. Restore needs actual IOPS on the data volume.

The usual tricks to speed up backups do NOT work for this part of the story. Please re-read the hardware requirements. (Use local and fast local storage to actually supply high IOPS.)
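To get a rough feel for how few random small-file reads per second a given datastore path can sustain, here is a crude probe (a hypothetical sketch, not a PBS tool; real measurements should use something like fio with a random-read profile):

```python
import os
import random
import tempfile
import time


def probe_random_read_iops(path, n_files=200, size=64 * 1024, reads=500):
    """Crude random-read probe: write n_files small files into path,
    then time random reads across them. Illustrative only; a tool
    like fio gives far more accurate numbers."""
    payload = os.urandom(size)
    names = []
    for i in range(n_files):
        p = os.path.join(path, f"chunk_{i:05d}.bin")
        with open(p, "wb") as f:
            f.write(payload)
        names.append(p)
    start = time.perf_counter()
    for _ in range(reads):
        with open(random.choice(names), "rb") as f:
            f.read()
    elapsed = time.perf_counter() - start
    return reads / elapsed  # whole-file reads per second


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(f"~{probe_random_read_iops(d):.0f} small-file reads/s")
```

Running this against the NFS mount versus a local SSD path would make the IOPS gap concrete.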
 
Thanks, I'll try to see how it works with local storage on PBS, and I'll test the ZFS format as mentioned in the documentation, since the NAS only offers BTRFS and EXT4 and I'm not sure switching to EXT4 would improve anything.
 
ZFS won't help if your datastore is on a NAS. If your NAS supports running VMs, you could try running PBS as a VM on your NAS.
 
Yes, it is supported, I'll try it that way, thanks a lot!
You might also want to read this:

Although there was quite a heated debate in another thread between the PBS developers and the author of those benchmarks about the validity of some of his assumptions, they agreed on the basic result: a PBS datastore shouldn't live on a network share. It will perform badly, even if the share is on the same host. The root cause is how PBS splits its data into a lot of small files ("chunks") for its deduplication magic; each chunk needs to be read for most operations.
For the same reason SSDs are preferred, and a combination of an HDD mirror with an SSD special device mirror under ZFS will outperform a plain HDD datastore. btrfs has a "metadata cache" feature which might yield similar results to a ZFS special device (I never used it myself, so take this with a grain of salt).

From previous forum threads I expect a speedup if you set up a PBS VM on your NAS, due to eliminating the network share as a bottleneck.
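The chunk layout makes restores latency-bound rather than bandwidth-bound. A rough back-of-the-envelope illustrates it, assuming PBS's 4 MiB fixed chunks for VM images and a guessed per-chunk NFS round-trip (the 5 ms figure is purely illustrative):

```python
def restore_latency_overhead(backup_gib, chunk_mib=4, rtt_ms=5):
    """Estimate the serialized per-chunk latency cost when chunks
    are fetched one at a time over a network share.
    chunk_mib: PBS uses 4 MiB fixed chunks for VM images.
    rtt_ms: assumed per-chunk lookup+read round-trip (illustrative)."""
    chunks = backup_gib * 1024 / chunk_mib
    return chunks * rtt_ms / 1000  # seconds of pure latency


# A 100 GiB image is 25,600 chunks; at 5 ms each that is ~128 s
# of latency alone, before any actual data transfer.
print(f"{restore_latency_overhead(100):.0f} s")
# prints: 128 s
```

Local SSD storage shrinks that per-chunk cost by orders of magnitude, which is why the hardware requirements insist on it.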
 
The performance improved, but only a little: with the VM on the NAS itself I can reach a maximum transfer rate of 60~64 MB/s.

I did the test and these are the results:

files to write: 500000
files to read/stat: 50000
buckets: 65536
sha256_name_generation: 0.61s
create_buckets: 2.62s
create_random_files: 41.97s
create_random_files_no_buckets: 49.19s
read_file_content_by_id: 78.21s
read_file_content_by_id_no_buckets: 16.07s
stat_file_by_id: 8.84s
stat_file_by_id_no_buckets: 1.49s
find_all_files: 23.15s
find_all_files_no_buckets: 1.67s
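Working out the ratios from the numbers above shows how much a chunk-store-like bucketed layout costs on this storage (a quick sketch over the reported timings):

```python
# (bucketed seconds, flat/no-buckets seconds) from the benchmark above
results = {
    "read_file_content_by_id": (78.21, 16.07),
    "stat_file_by_id": (8.84, 1.49),
    "find_all_files": (23.15, 1.67),
}

for name, (bucketed, flat) in results.items():
    print(f"{name}: {bucketed / flat:.1f}x slower with buckets")
# prints:
# read_file_content_by_id: 4.9x slower with buckets
# stat_file_by_id: 5.9x slower with buckets
# find_all_files: 13.9x slower with buckets
```

So the many-small-files access pattern is 5-14x slower here, consistent with the IOPS explanation above.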
 