New Proxmox box w/ ZFS - R730xd w/ PERC H730 Mini in “HBA Mode” - BIG NO NO?

KMPLSV

I recently purchased a used R730xd LFF 12-bay (3.5" x 12-bay backplane) that I have installed Proxmox on and plan to use for some VMs (Emby/Jellyfin/Plex/Docker, etc. - experimenting) and ZFS storage. Basically I'm doing compute and storage on this one box; it's a sandbox until I get a feel for what my needs are and how I want to continue building out my lab.

Anyhow, the R730xd came with a PERC H730 Mini. Everything that I've read says *DO NOT USE A RAID CARD WITH ZFS, EVEN IF IT IS IN "HBA MODE" - THIS IS A HORRIBLE IDEA*. So I was HBA shopping and asking for advice on a pure HBA replacement for the H730 when a couple of people told me that was ridiculous and that setting the H730 to HBA mode would be perfectly fine. They essentially scoffed at me for even considering buying an HBA card and insisted the H730 would do the job just fine.

I did some testing: I set the H730 to HBA mode and installed Proxmox. I haven't set up any pools yet or dug into any config, as you can see (screenshots attached); I just wanted to see if all of the drives would show up, and it looks like passthrough is working properly with the PERC H730 Mini in HBA mode. In case you're wondering about the NVMe drives, I have an x16 PCIe adapter card with 4x 512GB NVMe drives in it, and this board supports bifurcation, so that's what those are.

My gut and experience have taught me that, although things may "look OK", risk-taking and going against established practice is not a good idea. Everywhere I look it says DO NOT simply use a RAID card in "HBA mode". I have the impression that even though the drives are showing up properly, issues I haven't thought about yet can arise later if I'm not using a true HBA card. I'm super new to ZFS and don't want to start out with a dicey config that bends or breaks the rules and pay for it in the long run. I've heard odd things can happen with RAID cards in "HBA mode" and ZFS, like not being able to hot-swap a failed drive in a zpool and have it resilver without rebooting the server.
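For reference, my understanding is that on a true HBA a failed-disk replacement should be as simple as the sketch below (the pool name and disk IDs are just placeholders, not anything I've actually built yet):

Code:
# see which disk has faulted
zpool status tank
# offline it if it hasn't already faulted out
zpool offline tank scsi-35000000000000001
# swap the drive in the hot-swap bay, then resilver onto the new disk
zpool replace tank scsi-35000000000000001 scsi-35000000000000002
# watch the resilver progress
zpool status -v tank

The horror stories I've read about RAID cards in "HBA mode" are exactly at this step: the freshly inserted disk sometimes isn't presented to the OS at all until the controller rescans or the server reboots.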

I realize there are big giant red warning signs that say to ONLY use true HBA cards, and I should probably listen to them, but I thought I'd check one more time and make sure that using a PERC H730 Mini in HBA mode in this scenario is truly a bad idea.

Thanks!


(Screenshots attached: hba1.jpg, hba2.jpg)
 
The ZFS documentation has a good explanation of why you ideally want a dumb HBA in IT mode without any additional abstraction layer: https://openzfs.github.io/openzfs-d...uning/Hardware.html#hardware-raid-controllers

Some newer RAID controllers can be used perfectly fine when switched to HBA mode. With others, enabling "HBA mode" just presents each disk as a single-disk RAID0 or JBOD, which would be bad. I'm not sure whether your H730 Mini is the former or the latter.
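One quick way to check which behaviour you are getting from the OS side (a rough sketch; the device names are only examples): with real passthrough the kernel sees the actual drive, not a DELL/PERC virtual disk, and SMART answers directly.

Code:
# vendor/model/serial as the kernel sees them - passthrough shows the real drive,
# a RAID0/JBOD virtual disk usually shows up as a DELL/PERC device
lsblk -o NAME,VENDOR,MODEL,SERIAL,SIZE
# a passed-through disk answers SMART queries directly...
smartctl -i /dev/sdb
# ...a disk hidden behind a RAID volume normally needs the megaraid passthrough syntax
smartctl -i -d megaraid,0 /dev/sdb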
 
I'm battling this issue as well. I have two R630s; on one I swapped the H730 out for an HBA330. The performance is absolutely awful: massive IO delay and noticeable slowness in VMs. CrystalDiskMark results are a fraction of the H730 setup and way slower than my H200 IT-mode setup in my R610. Somehow that is the fastest controller and disk setup, and it blows my mind.

I'm currently about to wipe one system and set the H730 to HBA mode just to test.
 
We run Ceph storage on R630s with the HBA330 (not to be confused with the H330!) and enterprise SSDs, and we have no problems with performance. I also believe it is the consumer SSDs, not the server or the controller. Switching to HBA mode also deactivates all caches and the BBU, so I don't think the H730 offers more performance as an HBA than the real HBA does.
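If you want to see the consumer vs. enterprise difference directly, a sync-write test separates them much more clearly than the usual cached 4k random test, because drives without power-loss protection collapse on flushes. A rough sketch only (run it from a directory on the storage you want to test):

Code:
# 4k random writes with O_SYNC at queue depth 1 - roughly what ZFS/Ceph sync traffic looks like
fio --name=synctest --filename=synctest --ioengine=libaio --direct=1 --sync=1 --bs=4k --iodepth=1 --numjobs=1 --size=1G --runtime=60 --time_based --readwrite=randwrite
# remove the test file afterwards
rm -f synctest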
 
Not to hijack this thread, but in my case I had a common Windows VM that I restored on each machine with a different storage config.

A. PNY CS900
B. SK Hynix S31

Some testing results:
VM disk performance testing
 
Ugh, lemme know how it goes. A few people told me that the HBA330 performance would be trash compared to the H730 in HBA mode, but I didn't want to believe it. I haven't had a chance to do any testing yet. I'd also like to see raw ZFS performance of the two setups in vanilla Linux/BSD versus their performance in Proxmox.

What are some solid HBA options that would offer more horsepower in the $100-$300 range?
 
So far, I rebuilt the R630 with the H730, set it to HBA mode, and set up a single boot disk on LVM, with the other three disks running ZFS RAIDZ. I know that's not recommended... but this is just a test. What would my other storage options be? I restored my test VM and got the following results - much faster than anything else so far. The HBA330 matches that read speed but has awful write speed. These results are literally 10x what this identical VM/host does in HW RAID mode.

(Benchmark results screenshot attached.)
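Thinking out loud on my own question about other storage options: the layout I would probably compare against next is striped mirrors instead of RAIDZ, since that is the usual recommendation for VM-style random I/O. A rough sketch only, assuming I free up a fourth data disk (the pool name and by-id paths are placeholders):

Code:
# two mirrored vdevs striped together - better random IOPS than RAIDZ for VM workloads
zpool create -o ashift=12 vmpool mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4
zfs set compression=lz4 vmpool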
 
Are you getting similar results when you test with a tool like fio directly on the Proxmox host machine?
 
Could you check and post the complete results of the following two commands:

Random Read
Code:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=benchmark --filename=benchmark --bs=4k --iodepth=64 --size=4G --readwrite=randread

Random Write
Code:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=benchmark --filename=benchmark --bs=4k --iodepth=64 --size=4G --readwrite=randwrite

It would be great if you could test it on the other controller too. Then you would have comparable conditions and results.
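One note: with --filename=benchmark the test file is created in whatever directory you run fio from, so it only exercises a ZFS pool if you run it from a dataset on that pool. A rough sketch ("tank" is just a placeholder for your pool name):

Code:
# scratch dataset on the pool under test
zfs create tank/fio-test
cd /tank/fio-test
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=benchmark --filename=benchmark --bs=4k --iodepth=64 --size=4G --readwrite=randread
# clean up the 4G test file afterwards
cd / && zfs destroy tank/fio-test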
 
So here is my testing from the R630 with the H730 in HBA mode and Dell 15k disks. I was curious whether this only tests the boot disk, or whether I can point it at the ZFS pool somehow to test its performance. I am working on the other two machines now, which have consumer SSDs. Appreciate the commands! The first one maxed out my boot disk somehow.

Code:
####r630v2 - boot disk random read
root@pve630v2:/home# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=benchmark --filename=benchmark --bs=4k --iodepth=64 --size=4G --readwrite=randread
benchmark: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process
benchmark: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [r(1)][99.8%][r=5965KiB/s][r=1491 IOPS][eta 00m:02s]
benchmark: (groupid=0, jobs=1): err= 0: pid=18925: Wed Nov 29 12:20:16 2023
  read: IOPS=1030, BW=4120KiB/s (4219kB/s)(4096MiB/1017977msec)
   bw (  KiB/s): min= 2392, max= 7768, per=99.99%, avg=4120.42, stdev=246.98, samples=2035
   iops        : min=  598, max= 1942, avg=1030.07, stdev=61.79, samples=2035
  cpu          : usr=0.66%, sys=1.86%, ctx=1045203, majf=0, minf=105
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=4120KiB/s (4219kB/s), 4120KiB/s-4120KiB/s (4219kB/s-4219kB/s), io=4096MiB (4295MB), run=1017977-1017977msec

Disk stats (read/write):
    dm-1: ios=1048095/2987, merge=0/0, ticks=65179676/4284240, in_queue=69463916, util=100.00%, aggrios=1049020/2197, aggrmerge=0/920, aggrticks=65540226/2603172, aggrin_queue=68143397, aggrutil=100.00%
  sda: ios=1049020/2197, merge=0/920, ticks=65540226/2603172, in_queue=68143397, util=100.00%

r630v2 - boot disk random write
root@pve630v2:/home# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=benchmark --filename=benchmark --bs=4k --iodepth=64 --size=4G --readwrite=randwrite
benchmark: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [w(1)][99.9%][w=3987KiB/s][w=996 IOPS][eta 00m:01s]
benchmark: (groupid=0, jobs=1): err= 0: pid=22033: Wed Nov 29 12:50:57 2023
  write: IOPS=693, BW=2773KiB/s (2839kB/s)(4096MiB/1512812msec); 0 zone resets
   bw (  KiB/s): min= 1482, max= 4592, per=100.00%, avg=2774.57, stdev=343.90, samples=3025
   iops        : min=  370, max= 1148, avg=693.54, stdev=85.94, samples=3025
  cpu          : usr=0.40%, sys=1.08%, ctx=1027461, majf=0, minf=72
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,1048576,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=2773KiB/s (2839kB/s), 2773KiB/s-2773KiB/s (2839kB/s-2839kB/s), io=4096MiB (4295MB), run=1512812-1512812msec

Disk stats (read/write):
    dm-1: ios=5/1051970, merge=0/0, ticks=6392/98072420, in_queue=98078812, util=100.00%, aggrios=1547/1050636, aggrmerge=0/1404, aggrticks=384987/97794114, aggrin_queue=98179101, aggrutil=99.93%
  sda: ios=1547/1050636, merge=0/1404, ticks=384987/97794114, in_queue=98179101, util=99.93%

###RE-RAN THE TEST WITH A SINGLE RAID0 DRIVE.

Code:
root@pve630v2:~# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=benchmark --filename=benchmark --bs=4k --iodepth=64 --size=4G --readwrite=randread
benchmark: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process
benchmark: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [r(1)][100.0%][r=10.0MiB/s][r=2570 IOPS][eta 00m:00s]
benchmark: (groupid=0, jobs=1): err= 0: pid=18368: Fri Dec  1 18:06:29 2023
  read: IOPS=1213, BW=4854KiB/s (4970kB/s)(4096MiB/864114msec)
   bw (  KiB/s): min= 3096, max=12144, per=100.00%, avg=4857.24, stdev=392.19, samples=1728
   iops        : min=  774, max= 3036, avg=1214.19, stdev=98.02, samples=1728
  cpu          : usr=0.81%, sys=1.99%, ctx=1044910, majf=0, minf=105
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=4854KiB/s (4970kB/s), 4854KiB/s-4854KiB/s (4970kB/s-4970kB/s), io=4096MiB (4295MB), run=864114-864114msec

Disk stats (read/write):
    dm-1: ios=1048532/2564, merge=0/0, ticks=55281368/116, in_queue=55281484, util=100.00%, aggrios=1048958/1753, aggrmerge=0/811, aggrticks=55616052/110, aggrin_queue=55616163, aggrutil=100.00%
  sda: ios=1048958/1753, merge=0/811, ticks=55616052/110, in_queue=55616163, util=100.00%
root@pve630v2:~# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=benchmark --filename=benchmark --bs=4k --iodepth=64 --size=4G --readwrite=randwrite
benchmark: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [w(1)][99.9%][w=10.5MiB/s][w=2675 IOPS][eta 00m:01s]
benchmark: (groupid=0, jobs=1): err= 0: pid=26899: Fri Dec  1 19:09:33 2023
  write: IOPS=1269, BW=5076KiB/s (5198kB/s)(4096MiB/826298msec); 0 zone resets
   bw (  KiB/s): min= 2320, max=82664, per=100.00%, avg=5077.96, stdev=2086.18, samples=1652
   iops        : min=  580, max=20666, avg=1269.35, stdev=521.54, samples=1652
  cpu          : usr=0.46%, sys=1.54%, ctx=659218, majf=0, minf=15
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,1048576,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=5076KiB/s (5198kB/s), 5076KiB/s-5076KiB/s (5198kB/s-5198kB/s), io=4096MiB (4295MB), run=826298-826298msec

Disk stats (read/write):
    dm-1: ios=0/1049921, merge=0/0, ticks=0/52668424, in_queue=52668424, util=100.00%, aggrios=328/1049719, aggrmerge=0/747, aggrticks=139500/52639791, aggrin_queue=52779291, aggrutil=100.00%
  sda: ios=328/1049719, merge=0/747, ticks=139500/52639791, in_queue=52779291, util=100.00%

Here are the results from my other R630: HBA330, consumer SSDs, and ZFS.

Code:
#####R630v1
root@pve630v1:~# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=benchmark --filename=benchmark --bs=4k --iodepth=64 --size=4G --readwrite=randread
benchmark: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process
benchmark: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [r(1)][100.0%][r=361MiB/s][r=92.5k IOPS][eta 00m:00s]
benchmark: (groupid=0, jobs=1): err= 0: pid=1439777: Wed Nov 29 13:06:17 2023
  read: IOPS=52.2k, BW=204MiB/s (214MB/s)(4096MiB/20090msec)
   bw (  KiB/s): min=151448, max=493504, per=99.15%, avg=206995.80, stdev=56844.20, samples=40
   iops        : min=37862, max=123376, avg=51748.95, stdev=14211.04, samples=40
  cpu          : usr=8.30%, sys=91.69%, ctx=55, majf=4, minf=261
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=204MiB/s (214MB/s), 204MiB/s-204MiB/s (214MB/s-214MB/s), io=4096MiB (4295MB), run=20090-20090msec
root@pve630v1:~# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=benchmark --filename=benchmark --bs=4k --iodepth=64 --size=4G --readwrite=randwrite
benchmark: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process
Jobs: 1 (f=0): [f(1)][100.0%][w=13.1MiB/s][w=3361 IOPS][eta 00m:00s]
benchmark: (groupid=0, jobs=1): err= 0: pid=1444079: Wed Nov 29 13:27:44 2023
  write: IOPS=829, BW=3316KiB/s (3396kB/s)(4096MiB/1264835msec); 0 zone resets
   bw (  KiB/s): min=  736, max=110720, per=99.94%, avg=3314.78, stdev=4238.35, samples=2529
   iops        : min=  184, max=27680, avg=828.67, stdev=1059.59, samples=2529
  cpu          : usr=0.39%, sys=2.69%, ctx=1015131, majf=0, minf=1234
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,1048576,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=3316KiB/s (3396kB/s), 3316KiB/s-3316KiB/s (3396kB/s-3396kB/s), io=4096MiB (4295MB), run=1264835-1264835msec
 
These results are from my R610 with the H200 in IT mode - consumer SSDs in ZFS. The read test went fast, but I thought the write test was going to crash the box: I lost all web UI access and the test ETA was counting up, hahah. Not sure what to make of that.

Code:
####R610

root@pve:/home# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=benchmark --filename=benchmark --bs=4k --iodepth=64 --size=4G --readwrite=randread
benchmark: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process
benchmark: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [r(1)][90.0%][r=526MiB/s][r=135k IOPS][eta 00m:01s]
benchmark: (groupid=0, jobs=1): err= 0: pid=3216437: Wed Nov 29 12:53:32 2023
  read: IOPS=120k, BW=468MiB/s (491MB/s)(4096MiB/8748msec)
   bw (  KiB/s): min=159089, max=558608, per=99.40%, avg=476604.29, stdev=119030.81, samples=17
   iops        : min=39772, max=139652, avg=119151.18, stdev=29757.78, samples=17
  cpu          : usr=17.64%, sys=82.35%, ctx=42, majf=4, minf=142
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=468MiB/s (491MB/s), 468MiB/s-468MiB/s (491MB/s-491MB/s), io=4096MiB (4295MB), run=8748-8748msec
root@pve:/home#
root@pve:/home#
root@pve:/home#
root@pve:/home# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=benchmark --filename=benchmark --bs=4k --iodepth=64 --size=4G --readwrite=randwrite
benchmark: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process
^Cbs: 1 (f=1): [w(1)][5.4%][eta 41m:42s]
fio: terminating on signal 2
^C
fio: terminating on signal 2
Jobs: 1 (f=1): [w(1)][5.4%][eta 01h:06m:46s]
benchmark: (groupid=0, jobs=1): err= 0: pid=3234429: Wed Nov 29 13:00:42 2023
  write: IOPS=247, BW=989KiB/s (1013kB/s)(222MiB/229455msec); 0 zone resets
   bw (  KiB/s): min=  192, max=98240, per=100.00%, avg=13338.59, stdev=22076.24, samples=34
   iops        : min=   48, max=24560, avg=3334.65, stdev=5519.06, samples=34
  cpu          : usr=0.15%, sys=1.19%, ctx=28354, majf=0, minf=139
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,56753,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=989KiB/s (1013kB/s), 989KiB/s-989KiB/s (1013kB/s-1013kB/s), io=222MiB (232MB), run=229455-229455msec
 
I'm not running any H730 in HBA mode yet, but I am running an H330 in HBA mode in production. No issues with SAS drives. I don't use any SSDs, though.
 
I have now ordered an R730xd. Either an H730 or an H730P is installed; I'll see which one it is when it arrives. I'll also see if I can get hold of an HBA330 and an H330.

I will test all the controllers as Mini Mono cards (in the dedicated storage slot), not as PCIe cards. I will at least use them with the stock integrated functions; I don't know yet whether I will also flash them.

As for storage media, I have 120 GB enterprise Intel SSDs and Dell-certified 600 GB SAS disks on hand. I'll definitely test both of them. I may soon have a 240 GB Samsung PM863 as well.

Basically, the H330 and HBA330 both use the LSI SAS3008 and are each connected via PCIe Gen3 x8. The H730 and H730P both use the LSI SAS3108 and are also connected via PCIe Gen3 x8. All of them support SAS and SSD drives and speeds of up to 12 Gbps per port. So apparently the only difference is between the SAS3008 and the SAS3108, but let's see.
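If anyone wants to check which chip and driver their controller actually presents to the OS, lspci shows it (a rough sketch; the exact strings depend on firmware and PCI IDs):

Code:
# list RAID/SAS controllers with PCI IDs
lspci -nn | grep -iE 'raid|sas'
# show the bound kernel driver: megaraid_sas for the H330/H730*,
# mpt3sas for the HBA330 and other IT-mode HBAs
lspci -k | grep -iE -A3 'raid|sas'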
 
The first test is done. A quick summary of the test setup:

Dell PowerEdge R730xd:
- 2x Intel Xeon E5-2620 v3
- 2x 8 GB DDR4 2133 MHz @1866 MHz (HMA41GR7MFR8N-TF)
- PERC H730P Mini @HBA Mode (FW 25.5.9.0001)
- 1x OS Disk: 120 GB SSD (Intel SSDSC2BB120G7R) [Dell Certified]
- 4x SSD Data: 120 GB SSD (Intel SSDSC2BB120G7R) [Dell Certified]
- 4x SSD Data: 240 GB SSD (Samsung PM863)
- 4x SAS Data: 600 GB SAS (Toshiba AL14SXB60ENY) [Dell Certified]

OS: Proxmox 8.1.3 (6.5.11-7-pve)

root@pve:~# pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.11-7-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5: 6.5.11-7
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.3
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.4
pve-qemu-kvm: 8.1.2-4
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1


Configuration:

# /sbin/sgdisk -n1 -t1:8300 /dev/sdb
Creating new GPT entries in memory.
The operation has completed successfully.
# /sbin/mkfs -t ext4 /dev/sdb1
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 29304945 4k blocks and 7331840 inodes
Filesystem UUID: 07afe007-0d79-4d4f-a6bc-5aea0bb03e00
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: 0/895 done
Writing inode tables: 0/895 done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: 0/895 done

# /sbin/blkid /dev/sdb1 -o export
Created symlink /etc/systemd/system/multi-user.target.wants/mnt-pve-ssd.mount -> /etc/systemd/system/mnt-pve-ssd.mount.
TASK OK

# /sbin/sgdisk -n1 -t1:8300 /dev/sdf
Creating new GPT entries in memory.
The operation has completed successfully.
# /sbin/mkfs -t ext4 /dev/sdf1
mke2fs 1.47.0 (5-Feb-2023)
Creating filesystem with 146515185 4k blocks and 36634624 inodes
Filesystem UUID: 84a5f578-90f6-4c94-b6c0-76fd70e8b084
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000

Allocating group tables: 0/4472 done
Writing inode tables: 0/4472 done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: 0/4472 done

# /sbin/blkid /dev/sdf1 -o export
Created symlink /etc/systemd/system/multi-user.target.wants/mnt-pve-sas.mount -> /etc/systemd/system/mnt-pve-sas.mount.
TASK OK
# /sbin/zpool create -o ashift=12 ssd mirror /dev/disk/by-id/ata-SSDSC2BB120G7R_1 /dev/disk/by-id/ata-SSDSC2BB120G7R_2 mirror /dev/disk/by-id/ata-SSDSC2BB120G7R_3 /dev/disk/by-id/ata-SSDSC2BB120G7R_4
# /sbin/zfs set compression=on ssd
# systemctl enable zfs-import@ssd.service
Created symlink /etc/systemd/system/zfs-import.target.wants/zfs-import@ssd.service -> /lib/systemd/system/zfs-import@.service.
TASK OK

# /sbin/zpool create -o ashift=12 sas mirror /dev/disk/by-id/scsi-350000398f870ad01 /dev/disk/by-id/scsi-350000398d86bb3d9 mirror /dev/disk/by-id/scsi-350000398e8305a5d /dev/disk/by-id/scsi-350000398f8032851
# /sbin/zfs set compression=on sas
# systemctl enable zfs-import@sas.service
Created symlink /etc/systemd/system/zfs-import.target.wants/zfs-import@sas.service -> /lib/systemd/system/zfs-import@.service.
TASK OK
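A few sanity checks after pool creation, just to confirm the layout and properties used above:

Code:
# vdev layout and pool health
zpool status ssd sas
# the ashift that was actually applied
zpool get ashift ssd sas
# compression setting and achieved ratio
zfs get compression,compressratio ssd sas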


Benchmark:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=benchmark --filename=benchmark --bs=4k --iodepth=64 --size=4G --readwrite=randread

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=benchmark --filename=benchmark --bs=4k --iodepth=64 --size=4G --readwrite=randwrite

Random Write:
Code:
benchmark: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process

benchmark: (groupid=0, jobs=1): err= 0: pid=2612: Fri Dec  8 13:27:02 2023
  write: IOPS=38.4k, BW=150MiB/s (157MB/s)(4096MiB/27285msec); 0 zone resets
   bw (  KiB/s): min=100784, max=158048, per=100.00%, avg=153862.81, stdev=7779.42, samples=54
   iops        : min=25196, max=39512, avg=38465.70, stdev=1944.86, samples=54
  cpu          : usr=12.19%, sys=47.61%, ctx=510660, majf=0, minf=212
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,1048576,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=4096MiB (4295MB), run=27285-27285msec

Disk stats (read/write):
  sdb: ios=0/1042478, merge=0/498, ticks=0/1197845, in_queue=1197848, util=99.72%
Random Read:
Code:
benchmark: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process

benchmark: (groupid=0, jobs=1): err= 0: pid=2852: Fri Dec  8 13:28:10 2023
  read: IOPS=49.6k, BW=194MiB/s (203MB/s)(4096MiB/21152msec)
   bw (  KiB/s): min=192880, max=203552, per=100.00%, avg=198423.62, stdev=2913.57, samples=42
   iops        : min=48220, max=50890, avg=49605.95, stdev=728.58, samples=42
  cpu          : usr=11.40%, sys=40.63%, ctx=646939, majf=0, minf=82
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=194MiB/s (203MB/s), 194MiB/s-194MiB/s (203MB/s-203MB/s), io=4096MiB (4295MB), run=21152-21152msec

Disk stats (read/write):
  sdb: ios=1046085/3, merge=1668/1, ticks=1171500/2, in_queue=1171503, util=99.60%
Random Write:
Code:
benchmark: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process

benchmark: (groupid=0, jobs=1): err= 0: pid=7621: Fri Dec  8 14:12:39 2023
  write: IOPS=858, BW=3436KiB/s (3518kB/s)(4096MiB/1220776msec); 0 zone resets
   bw (  KiB/s): min= 1896, max= 5146, per=100.00%, avg=3437.82, stdev=254.14, samples=2441
   iops        : min=  474, max= 1286, avg=859.30, stdev=63.52, samples=2441
  cpu          : usr=1.17%, sys=3.45%, ctx=1032860, majf=0, minf=21
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,1048576,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=3436KiB/s (3518kB/s), 3436KiB/s-3436KiB/s (3518kB/s-3518kB/s), io=4096MiB (4295MB), run=1220776-1220776msec

Disk stats (read/write):
  sdf: ios=0/1048912, merge=0/243, ticks=0/76454719, in_queue=76454719, util=100.00%
Random Read:
Code:
benchmark: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process
benchmark: Laying out IO file (1 file / 4096MiB)

benchmark: (groupid=0, jobs=1): err= 0: pid=2992: Fri Dec  8 13:46:20 2023
  read: IOPS=988, BW=3954KiB/s (4049kB/s)(4096MiB/1060844msec)
   bw (  KiB/s): min= 2624, max= 9368, per=100.00%, avg=3954.95, stdev=171.88, samples=2121
   iops        : min=  656, max= 2342, avg=988.59, stdev=43.03, samples=2121
  cpu          : usr=1.55%, sys=4.97%, ctx=1041921, majf=0, minf=131
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=3954KiB/s (4049kB/s), 3954KiB/s-3954KiB/s (4049kB/s-4049kB/s), io=4096MiB (4295MB), run=1060844-1060844msec

Disk stats (read/write):
  sdf: ios=1047975/5, merge=0/1, ticks=67833724/3, in_queue=67833728, util=100.00%

Random Write:
Code:
benchmark: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process
benchmark: Laying out IO file (1 file / 4096MiB)

benchmark: (groupid=0, jobs=1): err= 0: pid=12219: Fri Dec  8 14:19:50 2023
  write: IOPS=7375, BW=28.8MiB/s (30.2MB/s)(4096MiB/142173msec); 0 zone resets
   bw (  KiB/s): min=18352, max=71784, per=99.88%, avg=29465.51, stdev=8572.78, samples=284
   iops        : min= 4588, max=17946, avg=7366.37, stdev=2143.19, samples=284
  cpu          : usr=4.29%, sys=64.72%, ctx=297375, majf=0, minf=688
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,1048576,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=28.8MiB/s (30.2MB/s), 28.8MiB/s-28.8MiB/s (30.2MB/s-30.2MB/s), io=4096MiB (4295MB), run=142173-142173msec
Random Read:
Code:
benchmark: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process

benchmark: (groupid=0, jobs=1): err= 0: pid=12710: Fri Dec  8 14:21:34 2023
  read: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(4096MiB/58592msec)
   bw (  KiB/s): min=42952, max=159080, per=99.81%, avg=71451.15, stdev=10019.79, samples=117
   iops        : min=10738, max=39770, avg=17862.79, stdev=2504.95, samples=117
  cpu          : usr=4.11%, sys=95.87%, ctx=140, majf=0, minf=79
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=4096MiB (4295MB), run=58592-58592msec
Random Write:
Code:
benchmark: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process
benchmark: Laying out IO file (1 file / 4096MiB)

benchmark: (groupid=0, jobs=1): err= 0: pid=12944: Fri Dec  8 14:24:39 2023
  write: IOPS=6607, BW=25.8MiB/s (27.1MB/s)(4096MiB/158704msec); 0 zone resets
   bw (  KiB/s): min= 3032, max=61018, per=99.81%, avg=26379.20, stdev=7821.27, samples=317
   iops        : min=  758, max=15254, avg=6594.80, stdev=1955.31, samples=317
  cpu          : usr=4.10%, sys=60.07%, ctx=271672, majf=0, minf=394
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,1048576,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=25.8MiB/s (27.1MB/s), 25.8MiB/s-25.8MiB/s (27.1MB/s-27.1MB/s), io=4096MiB (4295MB), run=158704-158704msec
Random Read:
Code:
benchmark: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process

benchmark: (groupid=0, jobs=1): err= 0: pid=13624: Fri Dec  8 14:27:21 2023
  read: IOPS=16.1k, BW=62.9MiB/s (65.9MB/s)(4096MiB/65142msec)
   bw (  KiB/s): min=12912, max=165192, per=99.63%, avg=64152.92, stdev=18847.80, samples=130
   iops        : min= 3228, max=41298, avg=16038.23, stdev=4711.95, samples=130
  cpu          : usr=3.46%, sys=90.54%, ctx=1045, majf=0, minf=85
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=62.9MiB/s (65.9MB/s), 62.9MiB/s-62.9MiB/s (65.9MB/s-65.9MB/s), io=4096MiB (4295MB), run=65142-65142msec
 
- PERC H730P Mini @HBA Mode (FW 25.5.9.0001)

The HBA330 and H730* are fundamentally different. The HBA330 is a true HBA only, which is what the ZFS documentation recommends (really, lists as a requirement). The H730* is a RAID controller that can be set to "HBA mode". In this mode it seems to pass devices through to Proxmox properly (I haven't tested extensively), and many people do this (and report never having had a problem), but experienced ZFS users have pointed out many reasons not to, and a lot of things that can go wrong.

When you ran those tests, did you have the H730 set to HBA-mode in the BIOS settings? Also, you should be able to update the H730 and HBA330 both to the latest official firmware versions using the iDRAC. From everything I've heard, the official Dell firmware isn't noticeably different from the LSI firmware out there. Someone please correct me if I'm wrong.
 
The HBA330 and H730* are fundamentally different.
I talked about that in my previous post, but I wouldn't call it fundamental. In terms of actual specs the controllers are technically on the same level, even though a different chip is installed.

When you ran those tests, did you have the H730 set to HBA-mode in the BIOS settings?
It was changed via the Lifecycle Controller (F10). It can also be done via Ctrl+R or the iDRAC. Strictly speaking, the BIOS itself was not involved.

What is particularly important to me is that the performance is there with the Dell firmware. Reflashing the firmware is less than ideal, especially in an enterprise environment, but it is necessary if you want to run Ceph etc. and don't have the corresponding HBA from Dell.
So far it looks good on Gen 13, although you can rightly ask why Dell removed the function from Gen 14 and apparently reinstated it in Gen 15.
Also, you should be able to update the H730 and HBA330 both to the latest official firmware versions using the iDRAC.
I haven't checked, but I think it is already the most current one. I can check again, though.
As a note again, I don't currently have an H330 or HBA330 on hand, just the H730P.
From everything I've heard, the official Dell firmware isn't noticeably different from the LSI firmware out there. Someone please correct me if I'm wrong.
That depends on what it means to you. One long-standing problem between the two firmwares is that the two SAS ports are swapped.
With the H310, for example, you had to swap port A and port B under the LSI firmware so that the server itself didn't complain, which of course made a mess of the bay mapping.

Of course, you also lose the integrated functions, such as creating RAID arrays via the iDRAC or updating firmware via the Lifecycle Controller (LCC).

What I picked up back then was that the version 16 LSI firmware was generally worse than the version 19 firmware.
 
In the meantime I have also found and ordered an H330, and I'm already working on getting an HBA330.
 
On our Dell hardware, I haven't had issues running the onboard or add-on PCIe card controllers in HBA mode. I did notice that on some controllers (I forget which) there was an option to disable drive caching in HBA mode, which I applied.
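For anyone who wants to verify that from the OS side, the per-drive write cache state can be read and toggled; a rough sketch (device names are placeholders, and sdparm may need to be installed first):

Code:
# SAS/SCSI disks: query the Write Cache Enable (WCE) bit
sdparm --get=WCE /dev/sdb
# SATA disks: hdparm reports the write-caching state
hdparm -W /dev/sdc
# toggle if needed (WCE=0 / -W0 disables the on-drive write cache)
sdparm --set=WCE=0 /dev/sdb
hdparm -W0 /dev/sdc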
 
