Slow Backup Listings

yottabyteman

Member
Apr 20, 2021
So I have (6) 20TB HDDs per storage container for the amount of data I need to back up. When I first set up the backup server and started the backups, listing the backups was fast. Pruning and garbage collection still run without issues. I keep up to 10 backups for 60+ VMs. But once I reach 10 backups on 50 VMs, getting a list of the backups from Proxmox takes up to 5 minutes and multiple clicks. Even when I go into the backup server directly, I have to wait for the backups in that storage to list before I can see them in the Proxmox cluster. What do I have to configure for faster listing of backups on the server and in Proxmox? Can I store the backup listing on an SSD to make it show faster?

Thanks!
 
So I have (6) 20TB HDDs per storage container for the amount of data I need to back up.
The manual recommends using local SSD-only storage for the backups, especially for such large amounts of data: https://pbs.proxmox.com/docs/installation.html#recommended-server-system-requirements

Can I store the backup listing on an SSD to make it show faster?
And if HDDs are used despite that recommendation, use a ZFS pool of HDDs for data plus SSDs for metadata (see ZFS's "special" vdev class): https://pbs.proxmox.com/docs/sysadmin.html#local-zfs-special-device
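
For reference, a minimal sketch of adding such a special vdev to an existing pool. Pool and device names below are placeholders, and the special vdev is mirrored so its redundancy matches the data vdevs (see the warning further down in the thread):

Code:
# Add a mirrored pair of SSDs as a "special" vdev to an existing pool.
# "backup-pool" and the device paths are examples only.
zpool add backup-pool special mirror \
    /dev/disk/by-id/nvme-SSD_A /dev/disk/by-id/nvme-SSD_B

# Verify the new vdev shows up under "special"
zpool status backup-pool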
 
The manual recommends using local SSD-only storage for the backups, especially for such large amounts of data: https://pbs.proxmox.com/docs/installation.html#recommended-server-system-requirements


And if HDDs are used despite that recommendation, use a ZFS pool of HDDs for data plus SSDs for metadata (see ZFS's "special" vdev class): https://pbs.proxmox.com/docs/sysadmin.html#local-zfs-special-device
I have two servers, each with 6x10 TB HDDs in RAID10 ZFS (three striped mirrors): 30 TB usable in total, 15 TB currently used.
It works, but as you can imagine, verification and garbage collection take a long time.

I was considering buying PCIe M.2 adapter cards and adding a mirror of NVMe drives to use as a special vdev.
Can anyone estimate roughly how much of an improvement we might see in verification and garbage collection? 2x, 10x, maybe nothing?
 
Can anyone estimate roughly how much of an improvement we might see in verification and garbage collection? 2x, 10x, maybe nothing?
For verify, not that much: verification rehashes the chunks, so it primarily reads the data on the HDDs, not the metadata on the SSDs. It still helps a bit that the HDDs no longer need to seek all the time to access metadata. GC, on the other hand, is primarily reading and writing metadata, so you get orders of magnitude of improvement with fast SSDs. Here, adding special devices lowered the GC runtime from a few hours to a couple of minutes. But keep in mind that ZFS won't move existing metadata from the HDDs to the SSDs on its own. You will need to remove the whole datastore and write everything again to fully benefit from it.
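
One possible way to rewrite everything without running a fresh backup cycle is to replicate the datastore dataset within the same pool, since the received copy writes its metadata through the new special vdev. This is only a sketch, assuming the datastore is its own dataset, PBS is stopped, and the pool has enough free space; all names are placeholders:

Code:
# Stop PBS so nothing writes to the datastore during the copy
systemctl stop proxmox-backup-proxy proxmox-backup

# Replicate the dataset; the received copy is written through the special vdev
zfs snapshot backup-pool/datastore@migrate
zfs send backup-pool/datastore@migrate | zfs receive backup-pool/datastore-new

# Swap the old and new datasets, then restart PBS
zfs rename backup-pool/datastore backup-pool/datastore-old
zfs rename backup-pool/datastore-new backup-pool/datastore
systemctl start proxmox-backup-proxy proxmox-backup

Once you have verified the new copy, the old dataset can be destroyed to reclaim the space.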
 
Special vdevs also help with write performance, while L2ARC is just a read cache. But special vdevs are not a cache: you need proper redundancy matching the reliability of the normal vdevs, because with a dead special vdev all data on the HDDs would be lost too.
 
So, what would be faster for viewing backup lists and Garbage Collection?
For viewing backup lists it shouldn't matter, as long as you never reboot the PBS (because the read cache is dropped on reboot unless you enable persistent L2ARC). For GC, special vdevs should be faster, as they also speed up the "touch" operations of the GC.
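
Persistent L2ARC is available with OpenZFS 2.0 and newer and is controlled by a module parameter. A quick sketch to check and pin it; the paths are the standard ones, but verify them against your ZFS version:

Code:
# 1 = rebuild the L2ARC contents from the cache device after a reboot
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled

# Make the setting explicit for future boots
echo "options zfs l2arc_rebuild_enabled=1" > /etc/modprobe.d/zfs-l2arc.conf
update-initramfs -u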
 
Can I enable both for one storage?
Yes, but I don't see the point of doing that. It just wastes an SSD: a data read cache shouldn't help much with this workload, where you randomly access large numbers of chunks, and read-caching metadata won't help unless the SSD used for L2ARC is much faster than the SSD used as the special vdev. On top of that, an L2ARC consumes RAM for its headers, RAM that then can't be used for the much faster ARC, which by itself would be fast enough to speed up metadata access.

If you have the budget to buy multiple enterprise SSDs and also the free slots/ports, then use special vdevs. If not, a persistent L2ARC with "secondarycache=metadata" might be an alternative.
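
That alternative could look roughly like this (again only a sketch with placeholder names; a single cache device is fine here, because losing an L2ARC device does not endanger the pool):

Code:
# Add a single SSD as L2ARC and restrict it to caching metadata
zpool add backup-pool cache /dev/disk/by-id/nvme-CACHE_SSD
zfs set secondarycache=metadata backup-pool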
 
