Storage-Speed / ZFS questions / HBA

Hi! I am currently testing disk performance with different setups, just to find out what is working best, and I am observing strange behavior - or I am simply doing something wrong here :-)

My Setup:
- ProLiant ML30 Gen10 / 64 GB RAM (not a rocket, but should do)
- Broadcom HBA 9400-16i with a 4-drive U.3 enclosure and (after some trouble) the correct cables.
- 4x Micron 7450 Pro drives, 1920 GB
- everything is recognized correctly in BIOS/HBA/Proxmox.
- Evaluation version of MS Server 2025 Standard as the guest, VirtIO drivers installed, updates installed, PVE 8.4.0

Drive Specs:
[screenshot: Micron 7450 Pro spec sheet]


First Test:
- Single Drive, lvm-thin, just to get a feeling for the drive performance:
[screenshots: benchmark results for the single drive]

The RND4K read IOPS are far lower than in the specs; the rest looks like a nice starting point.

Note: I always made sure the server had completely booted and that CPU load stayed as low as possible, so that no other processes would interfere.
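For the record, I am planning to cross-check the Windows numbers with fio directly on the PVE host, so the VM/VirtIO path is out of the picture. Something like the following is what I have in mind - device name and parameters are just placeholders, not a recommendation:

# read-only 4k random test against one raw drive; replace /dev/sdX with the
# device the HBA actually exposes (behind a tri-mode HBA it shows up as SCSI)
fio --name=single-rnd4k --filename=/dev/sdX --rw=randread --bs=4k \
    --iodepth=32 --numjobs=4 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting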

Second Test:
- RAIDZ with all 4 drives

My expectation: reads up to 2-3 times faster, writes at the same speed or a bit slower

My results:
[screenshots: benchmark results for the RAIDZ pool]

Am I missing something here, or are my expectations simply too high :) ?
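(For completeness: I also want to repeat the test directly on the host against a file on the RAIDZ dataset, to see whether the VM path or ZFS itself is the limit. Pool and dataset names below are just placeholders:)

# create a throwaway dataset and keep the ARC from caching the test data,
# otherwise read numbers are inflated by RAM
zfs create tank/fiotest
zfs set primarycache=metadata tank/fiotest

fio --name=raidz-rnd4k --directory=/tank/fiotest --size=10G --rw=randread \
    --bs=4k --iodepth=32 --numjobs=4 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting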

Thank you very much for any feedback / hints / etc.!
 
Thx for your reply!

Yes, I noticed this too - maybe a drive-cache impact? But what I am wondering about more is RAIDZ not showing any throughput or IOPS gain when reading from more than one disk.

Is there any ZFS configuration / tuning necessary? I simply created the RAIDZ via the PVE GUI...
Or on the HBA? I have not played with storcli so far...
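For what it is worth, these are the knobs I would look at first before spending money on another HBA - pool and zvol names are placeholders, and I am not claiming any of them is the culprit:

zpool get ashift tank                        # 12 (4K sectors) is what I would expect for these NVMes
zfs get recordsize,compression,atime tank    # dataset-level defaults
zfs get volblocksize tank/vm-100-disk-0      # block size of the zvol the Windows VM sits on
zpool iostat -v tank 5                       # per-disk load while a benchmark is running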
 
IDK what you are expecting from your HBA, which is a PCIe 3.1 x8 device that maxes out at about 7.8 GByte/s?
...I just found a "real" spec sheet. You are right, thanks - I misread the specs I found before ordering: 8 GB/s max throughput - but of course not per x4 port - and limited by PCIe 3.1...

But if I understand this correctly, even the LSI 9500-16i - 16 GB/s max throughput, limited by PCIe 4.0 - would not be able to max out these drives?
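Doing the back-of-the-envelope math myself (128b/130b encoding, numbers are approximate):

# GB/s = GT/s per lane * lanes * 128/130 (encoding) / 8 bits
echo "scale=2; 8  * 8 * 128 / 130 / 8" | bc    # PCIe 3.x x8 -> ~7.87 GB/s
echo "scale=2; 16 * 8 * 128 / 130 / 8" | bc    # PCIe 4.0 x8 -> ~15.75 GB/s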

Is the read speed gain for RAIDZ really what one might ideally expect (3x with 4 drives)? I just do not want to spend another few hundred bucks on a different HBA (e.g. an LSI-96xx 24G tri-mode HBA) and then end up with no or only a slight speed improvement :-(
 
would not be able to max out these drives?
Generally, any additional component in the I/O path will slow things down, so nothing is as fast as accessing the individual drives directly from the CPU over the PCIe bus. I recently also set up a system with an LSI-based 24G SAS controller acting as a RAID5 controller for 6 NVMes, and it is indeed much slower than accessing ONE NVMe drive directly (without the controller). I only see the combined throughput when using I/O sizes greater than or equal to the stripe size of 64 KB; then I get around 19 GiB/s, which is a lot. Simple 4K/8K I/O is sadly much slower, only around 1.5-2 GiB/s. The device also shows up as SCSI and not as NVMe, which adds further latency to the I/O path and therefore reduces performance. Even when passing each disk through individually, it is only visible as a SCSI device, not as NVMe.
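For anyone who wants to reproduce that comparison, something along these lines shows the difference between large and small I/O sizes (read-only, device name is a placeholder for whatever the controller exposes):

fio --name=seq-large --filename=/dev/sdX --rw=read --bs=1M --iodepth=32 \
    --numjobs=4 --ioengine=libaio --direct=1 --runtime=30 --time_based --group_reporting
fio --name=rnd-small --filename=/dev/sdX --rw=randread --bs=4k --iodepth=32 \
    --numjobs=4 --ioengine=libaio --direct=1 --runtime=30 --time_based --group_reporting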
 
I was just pointing out the PCIe 3.1 limitation. PCIe 4.0 is twice as fast, but IDK what those adapters do in terms of processing that might otherwise limit their performance. If you use N NVMe devices that max out their PCIe speeds with M lanes each, you will obviously need N*M PCIe lanes of the same speed to avoid limiting their raw throughput. Thus, even a 9600-16i with only 8 PCIe 4.0 lanes is limited to 16 GByte/s. I do not get how they can claim 24 GByte/s here: https://www.primeline-solutions.com...ri-mode-enhanced-host-bus-adapter/#additional

I would probably go with two adapters, or use one with 16 PCIe lanes (there is one for NVMe).
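As a rough sanity check (assuming each 7450 negotiates PCIe 4.0 x4 behind the adapter):

# 4 drives * 4 lanes = 16 lanes of PCIe 4.0 on the drive side
echo "scale=2; 16 * 16 * 128 / 130 / 8" | bc   # ~31.5 GB/s raw from the drives
echo "scale=2; 16 * 8  * 128 / 130 / 8" | bc   # ~15.75 GB/s through an x8 Gen4 host uplink

So with all four drives busy, the host-side x8 link is the bottleneck, not the drives.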