PBS slow backup

Testani

Member
Oct 22, 2022
Hi all, I have a cluster of 5 nodes configured as follows:

2 x 10 GbE NICs per node
2 x 3 TB enterprise SSD in RAIDZ1
2 x 6 TB enterprise HDD (7,200 rpm) in RAIDZ1

Production VMs are stored on the SSD ZFS pool, and the HDD ZFS pool is dedicated to the PBS instance. The performance of my backup jobs is as follows:

INFO: 74% (148.1 GiB of 200.0 GiB) in 22m 3s, read: 55.9 MiB/s, write: 55.9 MiB/s
INFO: 75% (150.0 GiB of 200.0 GiB) in 22m 37s, read: 58.9 MiB/s, write: 58.9 MiB/s
INFO: 76% (152.0 GiB of 200.0 GiB) in 23m 18s, read: 50.0 MiB/s, write: 50.0 MiB/s
INFO: 77% (154.0 GiB of 200.0 GiB) in 23m 52s, read: 60.8 MiB/s, write: 60.8 MiB/s
INFO: 78% (156.0 GiB of 200.0 GiB) in 24m 31s, read: 52.4 MiB/s, write: 52.3 MiB/s
INFO: 79% (158.0 GiB of 200.0 GiB) in 25m 2s, read: 65.9 MiB/s, write: 65.9 MiB/s
INFO: 80% (160.1 GiB of 200.0 GiB) in 25m 38s, read: 57.7 MiB/s, write: 57.7 MiB/s
INFO: 81% (162.1 GiB of 200.0 GiB) in 26m 12s, read: 60.2 MiB/s, write: 60.1 MiB/s
INFO: 82% (164.1 GiB of 200.0 GiB) in 26m 48s, read: 57.0 MiB/s, write: 57.0 MiB/s
INFO: 83% (166.0 GiB of 200.0 GiB) in 27m 24s, read: 56.6 MiB/s, write: 56.6 MiB/s
INFO: 84% (168.0 GiB of 200.0 GiB) in 28m, read: 56.4 MiB/s, write: 56.4 MiB/s
INFO: 85% (170.0 GiB of 200.0 GiB) in 28m 36s, read: 56.7 MiB/s, write: 56.7 MiB/s
INFO: 86% (172.0 GiB of 200.0 GiB) in 29m 11s, read: 59.1 MiB/s, write: 59.0 MiB/s
INFO: 87% (174.0 GiB of 200.0 GiB) in 29m 47s, read: 57.0 MiB/s, write: 57.0 MiB/s
INFO: 88% (176.0 GiB of 200.0 GiB) in 30m 24s, read: 55.4 MiB/s, write: 55.4 MiB/s


Is this normal?
 
We generally recommend SSDs as backup storage media [1], as there are a lot of random accesses (which are slow on conventional HDDs).
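
If you want to quantify how the HDD pool copes with random IO, a quick fio random-read test against a scratch file on the datastore can help. This is only a rough sketch: the file path, size and runtime are placeholder assumptions, and on ZFS the ARC can cache the test file and inflate the numbers, so ideally use a test file larger than your ARC.

# Hypothetical path and sizes - adjust to your datastore
fio --name=randread --filename=/path/to/hdd-datastore/fio-test \
    --size=4G --rw=randread --bs=4k --iodepth=16 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting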

You could try adding SSDs as a ZFS special device [2] to your HDD pool. This should speed things up, as the special device stores the pool's metadata on the SSDs, so metadata reads and writes no longer have to hit the slow HDDs.

Also, why are you using RAIDZ1 and not simply a mirror?

[1] https://pbs.proxmox.com/docs/installation.html#system-requirements
[2] https://pbs.proxmox.com/docs/sysadmin.html#local-zfs-special-device
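
For reference, attaching a special device to an existing pool is a single zpool command. The pool name and device paths below are purely illustrative placeholders, and the special device should be mirrored, since losing it makes the whole pool unusable:

# Hypothetical pool and device names - replace with your own
zpool add hddpool special mirror /dev/disk/by-id/ssd-1 /dev/disk/by-id/ssd-2
# Optionally also send small data blocks (not only metadata) to the special vdev
zfs set special_small_blocks=4K hddpool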
Is there any benefit of adding a special device when running 100% SSD?

I saw somewhere that we can issue a command for caching on the ZFS pool, for enterprise SSDs that handle write acknowledgement automatically, but I can't find it.
 
Is there any benefit of adding a special device when running 100% SSD?
It would offload all metadata IO from your other SSDs to the special device SSDs, so your other SSDs are hit by less IO and should therefore perform better, at least when your mirrored special devices are faster than the RAID of your other SSDs. Intel Optane SSDs are very small but have decent IOPS performance and might be a good choice for that.
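
If you want to see where the IO actually lands after adding a special device, per-vdev statistics make this visible; the pool name below is again just a placeholder:

# Show per-vdev IO statistics every 5 seconds; the special vdev appears as its
# own entry, so you can watch metadata IO hitting it instead of the other vdevs
zpool iostat -v yourpool 5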
 