Garbage collection speed

Your second issue is the NFS mount. Network file systems are known to perform badly with PBS, see https://forum.proxmox.com/threads/datastore-performance-tester-for-pbs.148694/
There was another discussion between the PBS developers and the author of said thread where it was pointed out that some of the assumptions behind his performance testing tool were wrong, but the developers agreed with the results (you really don't want to use NFS or any other network filesystem as a PBS datastore).

You have the following options:
  • Create a PBS LXC container on your TrueNAS, see https://forum.proxmox.com/threads/pbs-on-truenas-have-your-cake-and-eat-it-too.162860/
    Sadly, TrueNAS container support is still experimental and might be dropped or changed later, but performance-wise this is probably the best course of action. The IOPS of the HDDs will still be bad, but you won't be hurt as much by the NFS overhead.
  • Use an iSCSI share instead of NFS: https://jrehkemper.de/content/linux/proxmox/truenas-iscsi-storage-for-proxmox-backup-server/
    The performance still won't be great but should be better than with NFS (a client-side sketch follows this list).
  • Since your backup size is still quite small, you could buy two cheap used datacenter SSDs from eBay or some reseller and use them as dedicated disks for your PBS.
  • Like Udo said, you will get better IOPS with a striped mirror out of your 4x10 TB disks. How are they configured at the moment (RAIDZ)? What is the capacity of your ZFS pool? RAIDZ1, RAIDZ2 and a striped mirror with four 10 TB disks actually don't differ much in capacity: https://www.truenas.com/docs/references/zfscapacitycalculator/ gives 18.063 TiB for a 2-way mirror, 18.045 TiB for RAIDZ1 with one spare (26.304 TiB without a spare) and 17.494 TiB for RAIDZ2 without a spare.
  • Again, as Udo said, a special device mirror out of used server SSDs will help a lot with garbage collection. Another benefit is that it will also speed up any operations on other files in the pool (you will need to rewrite the data, though). A sketch of this and the striped-mirror rebuild follows this list.
For the last two options you would need to back up the data on your pool somewhere else, recreate the ZFS pool and afterwards restore the data to it.
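A minimal sketch of the iSCSI option from the PBS side, assuming the target is already configured in the TrueNAS UI (portal IP, IQN, device name and mount path are placeholders):
```
# Discover the TrueNAS portal and log in to the exported target.
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node --targetname iqn.2005-10.org.freenas.ctl:pbs --portal 192.168.1.10 --login

# The LUN shows up as a local block device; format and mount it,
# then use the mount point as the datastore path.
mkfs.ext4 /dev/sdX
mkdir -p /mnt/datastore/iscsi
mount /dev/sdX /mnt/datastore/iscsi
```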
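And a sketch of the striped-mirror rebuild plus special device (pool and device names are placeholders; zpool create destroys existing data, so this comes after backing everything up):
```
# Recreate the pool as two striped 2-way mirrors.
zpool create tank mirror sda sdb mirror sdc sdd

# Add a mirrored special device for metadata;
# losing this vdev loses the pool, hence the mirror.
zpool add tank special mirror sde sdf

# Optionally let small blocks land on the SSDs as well.
zfs set special_small_blocks=64K tank
```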

Personally, I would go with two used server SSDs (they go for around 50 Euro per 480 GB in Germany at the moment) passed through to the PBS. Additionally, I would buy two or three more used server SSDs and add them as a special device to the HDD pool, and add an iSCSI share as a second datastore for an additional copy (PBS can sync from one datastore to the other; see the sketch below).
If this isn't feasible, I would rebuild the ZFS pool as striped mirrors plus (if the budget allows) a special device, and use an iSCSI share as datastore.
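For the datastore-to-datastore copy, a hedged sketch of what the sync setup could look like on the PBS CLI (names, schedule and credentials are examples; the remote here just points back at the same PBS instance, and you may additionally need --fingerprint):
```
# Register the PBS itself as a remote, then pull from the SSD
# datastore into the iSCSI-backed one on a schedule.
proxmox-backup-manager remote create local-pbs \
    --host 127.0.0.1 --auth-id 'sync@pbs' --password 'xxxxx'
proxmox-backup-manager sync-job create ssd-to-iscsi \
    --store iscsi-copy --remote local-pbs --remote-store ssd-store \
    --schedule daily
```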
Thanks for the info, this is really informative.

I can’t redo the array; this Z2 array has been running for almost a decade and stores all of my personal data, NVR recordings, etc. It’s my homelab and my personal data repo.

I could do iSCSI, I could also try the TrueNAS LXC… and to speed everything up, I could add the metadata special device. I am fairly sure I have seen Wendell on level1tech suggest you can add metadata special devices after the fact; you just have to copy datasets to new datasets? I believe this copy would allow the metadata to get written to the SSDs upon copy.

I will look into it… I have a few consumer SSDs lying around which I could at least use in the interim as targets for the PBS backup.

Thanks for the info!
 
I am fairly sure I have seen Wendell on level1tech suggest you can add metadata special devices after the fact; you just have to copy datasets to new datasets? I believe this copy would allow the metadata to get written to the SSDs upon copy.

Yes, this should work; somewhere in this forum someone posted how to do this with zfs send/receive.
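A minimal sketch of that rewrite, assuming a pool named tank and a dataset named data; a full send/receive copies every block, so the new metadata lands on the special vdev:
```
# Snapshot the dataset and copy it within the same pool.
zfs snapshot -r tank/data@rewrite
zfs send -R tank/data@rewrite | zfs receive tank/data_new

# After verifying the copy, swap the datasets.
zfs destroy -r tank/data
zfs rename tank/data_new tank/data
```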
I will look into it… I have a few consumer SSDs lying around which I could at least use in the interim as targets for the PBS backup.
Using them as a datastore for PBS should be fine as long as you have another backup on different media. For use as a special device, you should be aware that the loss of the special device will lead to the loss of the whole ZFS pool. So don't use consumer SSDs, and create a mirror for the special device; its redundancy should match that of the HDDs.
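For the interim consumer SSDs: once a disk is formatted and mounted, creating the datastore is a one-liner (name and path are examples):
```
proxmox-backup-manager datastore create interim-ssd /mnt/datastore/interim-ssd
```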
 
For use as a special device, you should be aware that the loss of the special device will lead to the loss of the whole ZFS pool
Yes, thanks for confirming, but I am aware. I plan to run used enterprise drives in a 3-way mirror.

zfs send/receive.
I’ll have to look into that as well. Hopefully a metadata special vdev will dramatically improve things… I guess I shall find out.
 