Why is restoring from PBS so slow, and what is slow vs. fast?

I'm experiencing slow performance on NVMe and 10G as well. I have a backup that is 700 GB, and processing the metadata for a restore is taking forever. Backups have been fast and incremental, so I didn't notice that a restore would take hours, even on NVMe.
I'm going to have to plan on moving the data portion of my backups off PBS, and I'll use ZFS send for that instead...
 
What processor are you using on that server?
 
You really only need SSDs for the metadata cache.
Then, you will see reasonable speeds while restoring from spinning disks.
No, as the name says, it only speeds up the metadata. Your chunks still have to be read from the spinners.
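For reference, the "metadata cache" mentioned here is a ZFS special vdev. A minimal sketch of adding one to an existing pool, assuming a pool called tank and placeholder disk IDs; the special device should be mirrored, because losing it means losing the whole pool:

zpool add tank special mirror /dev/disk/by-id/nvme-ssd-1 /dev/disk/by-id/nvme-ssd-2
zpool status tank

Only metadata written after this lands on the special vdev; existing metadata stays on the spinners until the data is rewritten.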
 
Hi all! Please tell me, is there any work underway to improve performance when restoring from a PBS backup? On current versions of PBS and PVE I cannot get speeds above 350 megabit/s, no matter what hardware configuration I use (for example, EPYC 7003, ZFS on NVMe + ZIL + L2ARC), with CPU < 5% and iodelay < 7%! :mad:
 
You really only need SSDs for the metadata cache.
Then, you will see reasonable speeds while restoring from spinning disks.
May I ask politely: can the backup verification also reach full speed? Or to put it another way: does verification have to read and SHA-check the actual data on the mechanical hard disk, or does it only touch what is stored on the SSD?
 
Please ask yourself: what disk setup are you using, and how many IOPS for 4k random access can it deliver?
And remember that Proxmox Backup Server reads 4-megabyte chunks every time it collects your data from the disks.
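For a rough back-of-the-envelope feel (assuming a typical 7200 rpm disk with ~10 ms average seek time and ~150 MB/s sequential transfer): reading one randomly placed 4 MiB chunk costs about 10 ms of seeking plus ~27 ms of transfer, so roughly 37 ms per chunk. That works out to about 27 chunks per second, i.e. roughly 110 MB/s per spindle in the best case, and considerably less once chunks are fragmented or other jobs compete for the same disk.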
 
Yes, thanks. I read up on it and roughly understand it now. I'm using a mechanical hard drive (HDD). Even if I put the metadata on a solid-state drive (SSD), the checksumming and the actual data reads still take place on the HDD, so it still cannot surpass the inherent 4K random-access performance limits of the mechanical hard drive itself.
 
May I ask politely: can the backup verification also reach full speed? Or to put it another way: does verification have to read and SHA-check the actual data on the mechanical hard disk, or does it only touch what is stored on the SSD?

Of course the verification will still need to read the data from the HDD and thus won't profit as much from the metadata on the special device as the garbage collection job does (which mainly needs to read and write metadata). But it still profits from it, just not as much.
There is an option (special_small_blocks) which allows parts of the actual data (blocks smaller than a certain threshold) to also be written to the SSD. This is usually turned off but can be activated. You need to be careful though, otherwise all new data will be written to the SSD, filling it completely.
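A minimal sketch of enabling that for a PBS datastore dataset; the dataset name tank/pbs and the 64K threshold are placeholders, and the threshold must stay below the dataset's recordsize, otherwise every newly written block counts as "small" and lands on the SSD:

zfs set special_small_blocks=64K tank/pbs
zfs get special_small_blocks,recordsize tank/pbs

This only affects newly written blocks; existing data is untouched until it is rewritten.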

You can analyse your data and the potential gains with the zdb tool; see https://forum.level1techs.com/t/zfs...pecial-small-blocks-and-special-vdev/226348/2, https://forum.level1techs.com/t/zfs-metadata-special-device-z/159954, and https://github.com/openzfs/zfs/discussions/14542 for some hints on how to determine a good value for the parameter. To enable it for your old data, that data needs to be rewritten with zfs send/receive; https://forum.proxmox.com/threads/zfs-metadata-special-device.129031/ has an example of how to do this.
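A hedged sketch of what those links walk through, with placeholder pool and dataset names:

# block statistics and size histogram for the pool, to estimate how much data would fall under a given special_small_blocks value
zdb -Lbbbs tank

# rewrite an existing dataset so the old data is laid out according to the current special device settings
zfs snapshot tank/pbs@rewrite
zfs send tank/pbs@rewrite | zfs receive tank/pbs_new
# after verifying the copy, destroy the old dataset and rename the new one into place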