Optimum ZFS configuration with 8 slots

gcakici

Renowned Member
Sep 26, 2009
I have an 8-slot SAS storage chassis with an IT-mode HBA; two of the slots can optionally take NVMe SSD drives. I'm booting from a DOM, so all slots are free for storage. I have plenty of 4 TB 7.2K Toshiba SAS drives and one Intel P3700 400GB. CPU and memory are not an issue.

Which ZFS pool configuration would be optimal for around 10-12 TB of usable capacity? I can order one more P3700 400GB, or a lower-priced enterprise SSD of any size.

Thanks in advance
GC
 
Here is a RAID calculator for you; it includes ZFS-based RAID levels:
http://wintelguy.com/raidcalc.pl
https://www.servethehome.com/raid-calculator/

ZFS, as far as I know, supports:
  • RAID0
  • RAID1
  • RAID10
  • RAID-Z1
  • RAID-Z2
  • RAID-Z3
Whenever you are doing parity calculations (i.e. the last three levels above), you want to go with ECC RAM. There are a lot of "Do I really?!" discussions and arguments about this; all the arguments I have seen end the same way:
You do parity calculations? Then you go with ECC, or you go home broken at some point.
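As a sketch, the pool layouts above map onto `zpool create` like this (pool name `tank` and device names `sda`-`sdf` are placeholders, not from this thread):

```shell
# RAID0 (plain stripe, no redundancy)
zpool create tank sda sdb sdc sdd

# RAID1 (two-way mirror)
zpool create tank mirror sda sdb

# RAID10 (stripe across mirror vdevs)
zpool create tank mirror sda sdb mirror sdc sdd

# RAID-Z1 / Z2 / Z3 (single, double, triple parity)
zpool create tank raidz1 sda sdb sdc sdd
zpool create tank raidz2 sda sdb sdc sdd sde
zpool create tank raidz3 sda sdb sdc sdd sde sdf
```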

Found this pretty helpful on RAM sizing:
https://hardforum.com/threads/when-will-zfs-use-ram.1586776/#post-1036865233

This was helpful on ZIL/SLOG:
http://www.freenas.org/blog/zfs-zil-and-slog-demystified/
https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

One of these you probably want mirrored; I can't remember which one off the top of my head.
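For reference, the one you want mirrored is the SLOG: losing an unmirrored log device can cost you the last few seconds of synchronous writes, while losing an L2ARC device is harmless. A sketch with hypothetical pool and device names:

```shell
# Mirrored SLOG: the pool survives the loss of one log device
zpool add tank log mirror nvme0n1 nvme1n1

# L2ARC (cache) needs no redundancy; a lost cache device only costs warm data
zpool add tank cache nvme2n1
```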


Personal side note from my Ceph experience:
When you buy SSDs/NVMe for caching purposes, compare the devices not only on raw read/write performance, but especially on price per TBW (terabytes written). There are some very, very large variances out there where you typically would not expect them: per terabyte written, one device can cost you 5-9x more than a drive whose purchase price is only 20% higher.

good luck.
 
Since you have not mentioned anything about RAM, I am assuming you have plenty, as ZFS is a memory-heavy system.

Your optimum really comes down to how much redundancy or data protection you want out of ZFS. RAID-Z1 tolerates one drive failure, whereas Z2 and Z3 tolerate two and three simultaneous drive failures respectively. You can really speed up ZFS by putting the L2ARC on an SSD.

Or, if data protection is not your concern, then striping is of course the best solution: very fast reads and writes, no protection, and much less memory consumption. As Q-wulf pointed out above, those articles are a good read.

We use ZFS purely for backup purpose.
 
They'll be used in production, so both performance and redundancy are important. I need speed but also want to sleep at night :)

I'll have enough RAM dedicated to the VMs, plus 64 GB for ZFS. I want to use a ZIL and an SSD cache. For the 7.2K SAS disks I can go with 3x RAID1 (mirror) vdevs and keep a cold spare to speed up recovery whenever it happens. I lose half the drives capacity-wise while gaining performance.

How can I achieve maximum speed with at least the same performance as above, both in normal operation and during recovery? With which RAID-Z level can I achieve similar or higher performance?
 
If you want speed, there is nothing better than RAID10. I would not use a dedicated hot spare; I'd keep a cold spare in your server room (if this is not off-site), because you get more speed. ZFS can, IMHO, read from all devices in a RAID10 setup, so you have plenty of read performance. L2ARC and ZIL are good, but you have to have a good ZIL device. Please refer to this list: http://www.sebastien-han.fr/blog/20...-if-your-ssd-is-suitable-as-a-journal-device/

I would not sleep well with the OS on a DOM; that is a single point of failure. If you have your ZFS pool anyway, why not install the OS on top of it? Proxmox VE on compressed ZFS takes only 1-2 GB.
 
For the OS I'll use a pair of DOMs with ZFS RAID1, which removes that SPOF. I've chosen the best drive from that list, the Intel P3700, for caching and ZIL.

If I have RAID-Z3 on 7 disks, can I beat 6 disks in RAID10 on reads? Maybe that's not the right question to ask, but I can't find the right way to ask it.

My workload is several Zimbra vms with heavy I/O.
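As a crude back-of-the-envelope model of that question (not a benchmark): usable capacity of striped mirrors is half the raw space, RAID-Z loses one disk per parity level, and small random reads scale with the number of drives for mirrors but roughly with the number of vdevs for RAID-Z. The ~150 IOPS per 7.2K drive is an assumed figure:

```python
DRIVE_TB = 4
DRIVE_IOPS = 150  # assumed random-read IOPS for one 7.2K SAS drive

def mirror_pool(n_drives):
    """Stripe of two-way mirror vdevs (RAID10)."""
    vdevs = n_drives // 2
    usable_tb = vdevs * DRIVE_TB
    # mirrors can serve random reads from both sides of each vdev
    read_iops = n_drives * DRIVE_IOPS
    return usable_tb, read_iops

def raidz_pool(n_drives, parity):
    """Single RAID-Z vdev; random IOPS ~ one drive per vdev."""
    usable_tb = (n_drives - parity) * DRIVE_TB
    read_iops = DRIVE_IOPS
    return usable_tb, read_iops

print(mirror_pool(6))    # -> (12, 900): 6-disk RAID10
print(raidz_pool(7, 3))  # -> (16, 150): RAID-Z3 on 7 disks
```

So by this model RAID-Z3 wins on capacity but is far behind on random reads, which is what a heavy-I/O VM workload mostly sees; sequential throughput behaves differently.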
 
We've decided to fill the last empty slot with another Intel P3700 NVMe drive.

We'll have these disks:

2 x SATA DOM (attached on board) with ZFS RAID1 for Proxmox
6 x 4 TB SAS 7.2K drives in 3 mirror vdevs (12 TB usable)
2 x 400 GB NVMe Intel P3700, partitioned: 40 GB RAID-1 ZIL, 550 GB cache (275 GB + 275 GB stripe), and >100 GB of empty space

and memory:

128 GB of RAM, with 32 GB reserved for ZFS caching.
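A sketch of how that layout could be assembled (pool, device, and partition names are placeholders; the `zfs_arc_max` value caps the ARC at 32 GB):

```shell
# Data pool: three mirror vdevs of 4 TB SAS drives (~12 TB usable)
zpool create tank \
    mirror sda sdb \
    mirror sdc sdd \
    mirror sde sdf

# Each P3700 partitioned into a small SLOG slice and a large cache slice
zpool add tank log mirror nvme0n1p1 nvme1n1p1   # mirrored ZIL/SLOG
zpool add tank cache nvme0n1p2 nvme1n1p2        # striped L2ARC

# Cap the ARC at 32 GB (value in bytes: 32 * 2^30)
echo "options zfs zfs_arc_max=34359738368" > /etc/modprobe.d/zfs.conf
```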
 
Yes, you can do this, but a few remarks:

You know that the ZIL only holds about 5 seconds of synchronous writes. There is no way you can write 8 GB/s with those drives, so your ZIL is much too big.
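The sizing argument can be made concrete: with the default 5-second transaction group interval, the SLOG only ever needs to absorb a few seconds of synchronous write throughput. A quick check (the throughput figure and 2x safety factor are assumptions for illustration):

```python
TXG_SECONDS = 5  # default ZFS transaction group interval

def slog_needed_gb(write_throughput_gb_s, safety_factor=2):
    """SLOG capacity needed to absorb sync writes between txg flushes."""
    return write_throughput_gb_s * TXG_SECONDS * safety_factor

# Even at a generous 1 GB/s of sustained sync writes:
print(slog_needed_gb(1.0))   # -> 10.0 GB, far below the 40 GB provisioned

# Conversely, filling a 40 GB SLOG in one 5 s interval would require:
print(40 / TXG_SECONDS)      # -> 8.0 GB/s, the figure mentioned above
```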

Only 32 GB of ARC with 550 GB of L2ARC? Every L2ARC entry needs a header entry in the ARC, so you will decrease your overall ARC performance a lot. Why not buy more RAM instead of another drive?

Your write performance will also be slower with an L2ARC than without it, because your data must be written twice (to your pool and to your cache).

Also important: your machine will be dead slow at first, because reads will trigger cache writes. The L2ARC is, IMHO, not preserved across reboots, so the read/write cycle starts over after every reboot or crash.

I'd advise monitoring your L2ARC usage (hits/misses) over time. I ditched my L2ARC device because it had less than 1% hits but was constantly written to.
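On ZFS-on-Linux, one way to do that monitoring is to read the kernel's ARC statistics directly, or to use the bundled `arcstat` tool (field names as in current OpenZFS releases):

```shell
# L2ARC hit/miss counters from the kernel stats
awk '/^l2_(hits|misses)/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Or sampled once per second with arcstat
arcstat -f time,l2hits,l2miss,l2size 1
```

If `l2hits` stays near zero while `l2size` keeps growing, the cache device is all cost and no benefit.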

Your L2ARC is bypassed on sequential reads, so it does not cache those reads. Depending on your application this can be very, very bad:
if you snapshot regularly and your data is heavily fragmented but read sequentially, you will have terrible read performance.

I think NL-SAS drives vs. NVMe is just a huge difference. In a new setup, I'd go flash-only, without DOM, L2ARC, or ZIL, for maximum performance.
 
What kind of configuration (disks, sizes, and RAID setup) do you suggest with all 8 slots filled with SSDs?
 
I really need that capacity and shouldn't exceed $200 per TB. With a full-SSD configuration I can't stay within that budget, so I need to optimize with the 8 slots and the devices I have.
 
