Mirrored vdevs striped + GC > SSD / NVMe usage

blueguy31

Renowned Member
Jun 10, 2016
Hello everyone!

We currently have a PBS server on ZFS with a mirrored pool (2 × 4 TB HDDs, 7200 rpm, 128 MB cache).
We would like to add a second mirror vdev to that pool (striped mirrors) to gain performance, giving a RAID10-like layout.
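
Concretely, what we have in mind is a single zpool add (pool name and disk paths below are just placeholders for our setup):

Code:
# stripe a second mirror vdev onto the existing pool (RAID10-like)
zpool add tank mirror /dev/disk/by-id/ata-HDD3 /dev/disk/by-id/ata-HDD4

# the pool should now show two mirror vdevs
zpool status tank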

Our idea is also to improve garbage collection performance by maximizing IOPS while staying on HDDs (I know, SSDs would be better, and enterprise SSDs better still!).
Is it advisable to add an SSD / NVMe SSD for L2ARC / ZIL on this node?

Thanks for your reply!
 
Neither L2ARC nor ZIL will help GC. Add a special device [1] using a mirror of at least two enterprise SSDs, create a new datastore once you've set the value for special_small_blocks, and replicate the content from the old datastore to the new one. The special device is only used for data written to the dataset after it is added; that's why you should copy all the data.

If you plan on adding two more HDDs to your current pool, an option would be to use the two new HDDs plus the special device as a new pool for the new datastore, replicate the contents from the old datastore to the new one, then remove the old datastore, destroy the old pool, and add the old HDDs to the new pool.

[1] https://pbs.proxmox.com/docs/sysadmin.html#zfs-special-device
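
Roughly, the steps would look like this (pool, dataset, and device names are placeholders; adjust to your setup and test first):

Code:
# attach a mirrored special vdev of two enterprise SSDs to the existing pool
zpool add tank special mirror /dev/disk/by-id/nvme-SSD1 /dev/disk/by-id/nvme-SSD2

# create a fresh dataset for the new datastore and route small blocks
# (in addition to all new metadata) to the SSDs
zfs create tank/pbs-new
zfs set special_small_blocks=4K tank/pbs-new

# register it as a new PBS datastore
proxmox-backup-manager datastore create pbs-new /tank/pbs-new

# copy the old datastore's contents so existing chunks get rewritten onto
# the new layout (rsync -a preserves the ownership PBS expects)
rsync -a /tank/pbs-old/ /tank/pbs-new/

A local sync job between the two datastores would work just as well as rsync; the point is simply that the data has to be rewritten so its metadata ends up on the special device.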
 
So nice! Thank you. I will give it a try soon and come back here with the results.
 
Here, you'll want to look at this post too. It's where I got the create_random_chunks.py script.

https://forum.proxmox.com/threads/datastore-performance-tester-for-pbs.148694/

When you read that post, though, don't get lost in the weeds. The author is largely railing against the practice of using NFS or another remote mount as a PBS datastore, and goes to great lengths to prove the point that NFS mounts with PBS are a really bad idea.
Purely as an aside to that crusade, one of their several scripts, create_random_chunks.py, is useful for benchmarking PBS chunk performance.

Do be careful with the script. It's going to write a bunch of files and folders and hit system performance really hard while it runs, which might take about 10 minutes. It won't fill up your storage, but you should delete the test folders after you are done.
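
If you'd rather get a rough feel for the same kind of small-file I/O without running the author's script, a crude stand-in looks like this (the path, file count, and block size are arbitrary placeholders, not values taken from that script):

Code:
# time writing lots of small random files, similar in spirit to PBS chunk I/O
mkdir -p /path/to/datastore/bench-test
time for i in $(seq 1 2000); do
    dd if=/dev/urandom of=/path/to/datastore/bench-test/chunk_$i bs=64k count=1 status=none
done
# don't forget to delete the test folder afterwards
rm -rf /path/to/datastore/bench-test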
 
