RAID 0, 1, 10 - poor performance

In your case I would first migrate your VMs to the local OS disk LVM,
remove the NVMe RAID10 from PVE storage,
then destroy the RAID10 and recreate a new RAID10 with strip size = 64k:
ln -s /opt/MegaRAID/storcli/storcli64 /usr/local/bin/storcli
storcli /c0/v239 del # why it's numbered 239 and not 1 ...
storcli /c0 add vd r10 drives=252:1-8 wt nora direct Strip=64
storcli /c0 set cacheflushint=1
storcli /c0/vall show init # see when build is done
Afterwards you have the choice of LVM or a filesystem, but I would do XFS without LVM and add it in PVE as directory storage,
as you then get much more out of it, including fs cache, which isn't the case without a filesystem.
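Once the XFS filesystem is mounted, adding it in PVE can also be done from the shell - a minimal sketch, assuming a placeholder mount point /mnt/hwraid10 and storage name hwraid10:
Code:
# assumes the XFS volume is already mounted at /mnt/hwraid10 (placeholder path/name)
pvesm add dir hwraid10 --path /mnt/hwraid10 --content images,rootdir
pvesm status    # verify the new directory storage shows up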
Take the above steps first while I look into the remaining ones.
 
Well, according to your hardware it's pointless to test RAID0 and RAID1; for VM storage with 8 SSDs you basically have to use RAID10. Using multiple RAID1 arrays would be a special case and would restrict you in the future (capacity, performance ...), and RAID0 is not really an option ...

Another option for testing would be to switch your RAID controller to HBA mode and use ZFS mirrored vdevs (RAID10 equivalent), but you would have to back up your VMs, destroy your RAID and set up ZFS - and that's another rabbit hole :eek:
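Roughly what such a mirrored-vdev pool would look like for 8 drives - just a sketch, the pool name and by-id paths are placeholders, check your actual device IDs first:
Code:
# 4 mirrored vdevs = RAID10 equivalent; replace the by-id paths with your drives
zpool create -o ashift=12 nvmepool \
  mirror /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2 \
  mirror /dev/disk/by-id/nvme-DISK3 /dev/disk/by-id/nvme-DISK4 \
  mirror /dev/disk/by-id/nvme-DISK5 /dev/disk/by-id/nvme-DISK6 \
  mirror /dev/disk/by-id/nvme-DISK7 /dev/disk/by-id/nvme-DISK8
zpool status nvmepool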
This controller doesn't have an HBA mode; it has a JBOD mode, which I have tested with RAID0, RAID1 and RAID10 with 5 x HDD.

Software raid 1 md0 2 x hdd

soft-raid1.png

Software no raid single hdd

soft-single-hdd-no-raid.png

Software raid md1 = 5 x hdd raid 0

soft-raid0.png

The performance looks much better without hardware RAID.
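For reference, this is roughly how such md arrays get built - a sketch only, the device names are placeholders and not the exact commands used for the screenshots above:
Code:
# 2-disk mirror (md0) and 5-disk stripe (md1) - adjust device names
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=0 --raid-devices=5 /dev/sd[d-h]
cat /proc/mdstat    # watch the build/sync progress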
 
All newer RAID controllers should have an HBA / pass-through mode. I think you have to delete the drive in the RAID setup so that it is shown as unconfigured; then the OS sees it directly?
Code:
The HPE MR416i-o / HPE MR416i-p Gen11 controllers are ideal for many virtualized environments where HBA / pass-through mode is applicable offering high bandwidth and three million random write IOPS
But I don't think you would get better performance with ZFS anyway.
 
After
Software raid 1 md0 2 x hdd
You mean 2 NVMe instead of 2 HDD, but again you are measuring cache and CPU and not your NVMe, as you cannot reach nearly 13 GB/s read and 9 GB/s write, so the sequential results are complete bullshit; the random ones could be "saved in head" for later ...
:)
 
All newer RAID controllers should have an HBA / pass-through mode. I think you have to delete the drive in the RAID setup so that it is shown as unconfigured; then the OS sees it directly?
Code:
The HPE MR416i-o / HPE MR416i-p Gen11 controllers are ideal for many virtualized environments where HBA / pass-through mode is applicable offering high bandwidth and three million random write IOPS
But I don't think you would get better performance with ZFS anyway.
I did that. If I clear the configuration, the OS sees no disks at all until I configure the controller as JBOD; when I do that I see all of the disks, but not as NVMe devices - they show up as sda, sdb, etc...
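In case it helps others with the same controller, roughly how to check and enable JBOD via storcli - a sketch, exact support and syntax depend on controller and firmware:
Code:
storcli /c0 show all | grep -i jbod    # check whether JBOD is supported/enabled
storcli /c0 set jbod=on                # enable JBOD mode if the controller supports it
storcli /c0 show                       # unconfigured drives should now be exposed to the OS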

I have re-configured the RAID as per below:

Code:
VD239 Properties :
================
Strip Size = 64 KB
Number of Blocks = 14998568960
VD has Emulated PD = No
Span Depth = 4
Number of Drives Per Span = 2
Write Cache(initial setting) = WriteBack
Disk Cache Policy = Enabled
Encryption = None
Data Protection = None
Active Operations = None
Exposed to OS = Yes
Creation Date = 21-09-2025
Creation Time = 02:37:52 PM
Emulation type = default
Cachebypass size = Cachebypass-64k
Cachebypass Mode = Cachebypass Intelligent
 
The HBA 9400 and later models (including OEM versions) can handle NVMe devices, but they do not present them to the OS as NVMe.

I believe that's how it's designed.
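You can see that from the transport column, e.g. - device names here are only examples:
Code:
lsblk -d -o NAME,TRAN,MODEL,SIZE
# behind such an HBA the NVMe drives typically show TRAN=sas/scsi and appear as sda, sdb, ...
# a directly attached NVMe would show TRAN=nvme and appear as nvme0n1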
 
So after you have built the new NVMe hw-RAID10 with strip size = 64k, and assuming it shows up as e.g. sdb in "lsblk", do:
storcli /c0 set cacheflushint=1
storcli /c0/v239 set iopolicy=Direct
storcli /c0/v239 set rdcache=NoRA
storcli /c0/v239 set wrcache=WT
mkfs.xfs -L hwraid10 /dev/sdb # take right device name !!
blkid|grep hwraid10
edit /etc/fstab and add
UUID=<uuid-from-blkid> /<your-mountdir> xfs nofail,defaults,logbufs=8,logbsize=256k 0 0
mount -a
# This should be done after each reboot:
swapoff -a
echo mq-deadline > /sys/block/sdb/queue/scheduler # take right device name !!
echo 4096 > /sys/block/sdb/queue/read_ahead_kb # take right device name !!
echo 1023 > /sys/block/sdb/queue/nr_requests # take right device name !! - go as high as the controller supports, e.g. 2048; you get an error if it's set too high
echo 1 > /proc/sys/vm/dirty_background_ratio
echo 10 > /proc/sys/vm/dirty_ratio
echo 1000 > /proc/sys/fs/xfs/xfssyncd_centisecs
echo 20 > /proc/sys/vm/vfs_cache_pressure
echo 2097152 > /proc/sys/vm/min_free_kbytes
# end
Add storage as directory in pve.
Then migrate your VM back from the OS disk LVM to the XFS directory storage and select .raw as the VM disk format if possible.
"CrystalDiskMark" your VM ... try writeback cache and/or other cache modes for the VM disk.
 
Testing Random 4k Queue 1 and Thread 1 is the worst case.
What are the results from Windows bare metal?
Here on bare metal with 1 x Micron 7450 Pro 1.92 TB and 1 x Samsung PM983 3.84 TB (similar numbers),
running on M.2 PCIe x4 ports of a prosumer mainboard MSI PRO Z690-A:
Code:
1 Queue / 1 Thread
[Read]   RND    4KiB (Q=  1, T= 1):    48 MB/s [  12K IOPS] < 83.53 us>
[Write]  RND    4KiB (Q=  1, T= 1):   223 MB/s [  54K IOPS] < 18.25 us>
Profile: Real / Test: 1 GiB (x5) [C: 55% (171/312GiB) NTFS ] / OS: Windows 10 Pro 22H2 [10.0 Build 19045] (x64)

32 Queues / 1 Thread
[Read]  RND    4KiB (Q= 32, T= 1):  1084 MB/s [ 265 K IOPS] < 90.20 us>
[Write] RND    4KiB (Q= 32, T= 1):  1025 MB/s [ 250 K IOPS] < 35.39 us>
Profile: Peak / Test: 1 GiB (x5) [C: 55% (171/312GiB) NTFS ] / OS: Windows 10 Pro 22H2 [10.0 Build 19045] (x64)
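For comparison from the Linux side, a rough fio equivalent of the Q1T1 / Q32T1 random 4k tests - the test file path is a placeholder, run it against the mounted XFS, not a raw device:
Code:
# Q=1 T=1 random 4k read on a 1 GiB test file
fio --name=rnd4k-q1t1 --filename=/your-mountdir/fio.test --size=1G \
    --rw=randread --bs=4k --iodepth=1 --numjobs=1 --direct=1 \
    --ioengine=libaio --runtime=30 --time_based --group_reporting
# for Q=32 use --iodepth=32, for the write tests use --rw=randwrite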
 
Testing Random 4k Queue 1 and Thread 1 is the worst case.
What are the results from Windows bare metal?
Here on bare metal with 1 x Micron 7450 Pro 1.92 TB and 1 x Samsung PM983 3.84 TB (similar numbers)
Running on what NVMe RAID/pool config?
 
For a pure 4k random read benchmark, disable the fs readahead:
echo 0 > /sys/block/sdb/queue/read_ahead_kb # take right device name !!