Advice on which HBA Card to buy

Hamstrangler

Jun 19, 2024
Hello everyone, I recently bought a Dell PowerEdge R730. I want to install Proxmox VE 8 on 2 SATA SSDs and use ZFS. The server has a PERC H730 RAID controller, and I know you should use an HBA controller with ZFS. I have already read a lot in the Proxmox forum and on the internet in general. In the forum it is recommended to use an LSI HBA controller from the 96xx, 95xx, or 94xx series. I also read that some people are using the HBA330 because it has a true HBA mode, but according to Dell's HBA card specifications it does not work with SATA.

I would appreciate it if someone could give me some advice on which HBA Card to purchase and why.
 
Some servers have a few SATA ports on the mainboard. If you just want to use two SSDs, you might be lucky and could simply use those.
Maybe you can also switch your existing RAID controller to HBA mode? HP allows this, so maybe Dell does too.
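If you want to check what the machine already exposes before buying anything, something like this from any live Linux environment should do (output varies per machine, so treat it as a quick sanity check, not a definitive answer):

# Show the storage controllers the system sees
lspci -nn | grep -iE 'sata|sas|raid'
# List disks with their transport type (sata/sas/nvme) to see which controller they hang off
lsblk -o NAME,SIZE,MODEL,TRAN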
 
Hi simonhoffmann, thanks for your answer. The overall plan is to use 2 SSDs mirrored for installing Proxmox, and 5 additional SSDs in a RAIDZ for the storage where the VMs live. That is why I unfortunately need an HBA card. I also considered putting the PERC H730 into HBA mode, but threads in the Proxmox forum and the ZFS documentation stress using an HBA controller rather than a RAID controller.
 
Would it be possible to use RAIDZ2 or RAIDZ3 for storing the VMs? Or are the IOPS there also that low?
RAIDz2 and RAIDz3 are very similar to RAIDz1 in this regard; sorry for not making that clear. If you want high IOPS, go for a stripe (and build it out of mirrors for redundancy), preferably using enterprise SSDs with PLP. Or use hardware RAID5 (or RAID6) with a BBU instead, if that worked for you before (on ESXi, for example). Maybe search the forum for previous discussions about RAIDz versus RAID5.
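To make the two layouts concrete, this is roughly what the commands would look like for your five SSDs (disk names are placeholders; in practice you would use the stable /dev/disk/by-id paths):

# RAIDZ2 over all five SSDs: best capacity, but roughly one disk's worth of write IOPS
zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde
# Stripe of two mirrors over four SSDs: less capacity, but IOPS scale with the number of mirrors
zpool create -o ashift=12 tank mirror sda sdb mirror sdc sdd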
 
If you want high IOPS, go for a stripe (and build it out of mirrors for redundancy), preferably using enterprise SSDs with PLP. Or use hardware RAID5 (or RAID6) with a BBU instead, if that worked for you before (on ESXi, for example).
I don't quite understand: do you mean that if I want to use ZFS I should build mirrors (2 SSDs per mirror) and then make a group out of these mirrors? Or do you mean something else?
 
I don't quite understand: do you mean that if I want to use ZFS I should build mirrors (2 SSDs per mirror) and then make a group out of these mirrors? Or do you mean something else?
Yes, I'm pretty sure that's what they meant.

zpool create pool mirror disk1 disk2 mirror disk3 disk4 mirror disk5 disk6 ...

This groups 2 disks together as a mirror and stripes the pool across all those mirrors. The advantage here: incoming writes to the pool will be split into several parts and each part will be written to one mirror pair (simplified). This means that data can be written in parallel, and it means that no complex parity needs to be calculated because the drives are "just" mirrors.
With a raidZ, incoming data cannot be written in parallel, as the whole raidZ is a single logical device, and since no two disks are directly linked to one another, parity information needs to be calculated to fulfill the redundancy requirements.

As a (simplified!) rule of thumb: a raidZ is always only as fast as the slowest disk in the entire raid and will always have "1 disk" of performance. With a striped mirror pool, you get "x disks" of performance, with x being the number of mirrors in the pool. See the benchmark sketch below if you want to verify that yourself.
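If you want to compare the two layouts on your own hardware, a quick fio run against the mounted pool gives comparable numbers (path and sizes below are just examples; note that the ZFS ARC can inflate results, so use a data set larger than RAM or read the numbers critically):

# 4k random writes against a hypothetical pool mounted at /tank
fio --name=randwrite-test --directory=/tank --rw=randwrite --bs=4k \
    --iodepth=32 --ioengine=libaio --numjobs=4 --size=4G \
    --runtime=60 --time_based --group_reporting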
 
Appendix: this is the recommendation if you go the ZFS way. If you don't need the special ZFS features, you can of course use the existing RAID controller with a RAID5 or RAID6 and put LVM or something else on top. There is no compulsion to use ZFS. IIRC, the built-in replication only works with ZFS, though (snapshots also work with LVM-thin, for example).
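As a rough sketch of that route (all device and volume names below are made up; the RAID controller exposes the whole array as a single disk, here /dev/sdb):

pvcreate /dev/sdb
vgcreate vmdata /dev/sdb
# Leave some headroom for the thin-pool metadata
lvcreate -l 90%FREE --thinpool vmstore vmdata
# Register it in Proxmox as LVM-thin storage, which supports snapshots
pvesm add lvmthin vmstore --vgname vmdata --thinpool vmstore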
 
Migrated a production 5-node 16-drive bay R730 VMware cluster over to Proxmox Ceph. Swapped out the PERC H730 for HBA330 for "true" HBA mode. Updated HBA330 to latest firmware.

First two drives are Intel DC S3710 SATA boot drives using ZFS RAID-1. Rest of drives are OSDs.

Workloads range from DBs to DHCP/PXE servers. Not hurting for IOPS. Since Ceph is a scale-out solution, more nodes = more OSDs = more IOPS.

All the servers I manage at work use ZFS RAID-1 for mirroring Proxmox.
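In case it helps anyone replicating this: on the Proxmox side the Ceph setup boils down to a few commands per node (the network and device names below are placeholders, not the ones from our cluster):

pveceph install                      # install the Ceph packages on the node
pveceph init --network 10.0.0.0/24   # hypothetical cluster network
pveceph mon create                   # one monitor per node; three total recommended
pveceph osd create /dev/sdc          # repeat for every drive that should become an OSD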
 
Migrated a production 5-node 16-drive bay R730 VMware cluster over to Proxmox Ceph. Swapped out the PERC H730 for HBA330 for "true" HBA mode. Updated HBA330 to latest firmware.

First two drives are Intel DC S3710 SATA boot drives using ZFS RAID-1. Rest of drives are OSDs.

Workloads range from DBs to DHCP/PXE servers. Not hurting for IOPS. Since Ceph is a scale-out solution, more nodes = more OSDs = more IOPS.

All the servers I manage at work use ZFS RAID-1 for mirroring Proxmox.
Thanks for the hint, jdancer. At work we face the same decision about which hypervisor to use in the future for our customers' environments, because we also used VMware. We are also thinking about switching over to Proxmox as a solution.

For my homelab I guess Ceph would not be an option, because I only have one server and therefore one node, and as I read, you need at least three nodes to set up shared storage with Ceph. But it sounds quite interesting and I will keep it in mind.
 
Yes, I'm pretty sure that's what they meant.

zpool create pool mirror disk1 disk2 mirror disk3 disk4 mirror disk5 disk6 ...

This groups 2 disks together as a mirror and stripes the pool across all those mirrors. The advantage here: incoming writes to the pool will be split into several parts and each part will be written to one mirror pair (simplified). This means that data can be written in parallel, and it means that no complex parity needs to be calculated because the drives are "just" mirrors.
With a raidZ, incoming data cannot be written in parallel, as the whole raidZ is a single logical device, and since no two disks are directly linked to one another, parity information needs to be calculated to fulfill the redundancy requirements.

As a (simplified!) rule of thumb: a raidZ is always only as fast as the slowest disk in the entire raid and will always have "1 disk" of performance. With a striped mirror pool, you get "x disks" of performance, with x being the number of mirrors in the pool.
Hi simonhoffmann, that sounds reasonable. I will have a look at the options, inform myself about the pros and cons, and decide what I will be using.

Thanks a lot to all of you for your help so far.
 
