[SOLVED] Install Proxmox 8.1 on Boss-N1 and using Dell PERC H965i Controller

John0212

New Member
Feb 2, 2024
We're looking to buy some Dell servers and plan to use Proxmox on them. The servers come with (2) 480GB M.2 drives on a BOSS-N1 controller. We'll have (8) 3.84TB U.2 NVMe drives attached to a PERC H965i controller.

  1. Any issues expected during the Proxmox 8.1 installation onto the BOSS-N1 controller drives?
  2. After Proxmox is installed, any issues that I can expect creating VMs (Windows Server and 11) on the 3.84TB drives attached to the H965i?
Our thought is that the PERC will handle RAID 6 and pass a huge volume to Proxmox. We'll then use slices of that volume to create our VMs.

Should we think about this differently or take a different approach?
 
  1. Any issues expected during the Proxmox 8.1 installation onto the BOSS-N1 controller drives?
Should be fine.

  2. After Proxmox is installed, any issues that I can expect creating VMs (Windows Server and 11) on the 3.84TB drives attached to the H965i?
None hardware-related. You should have a read here: https://pve.proxmox.com/wiki/Storage
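For what it's worth, once the storage is defined, a Windows 11 guest can be created from the CLI in one go. This is only a sketch: the storage name "vmdata", the VMID, and the ISO path are made up, so adjust them to your environment (Server variants are created the same way; only the ISO and OS type choice differ).

  # create a Windows 11 VM with UEFI + TPM, with its disks on a storage called "vmdata"
  qm create 100 \
    --name win11-test \
    --ostype win11 \
    --machine q35 --bios ovmf \
    --cores 4 --memory 16384 --cpu host \
    --scsihw virtio-scsi-single \
    --scsi0 vmdata:64,discard=on,ssd=1 \
    --efidisk0 vmdata:1,efitype=4m,pre-enrolled-keys=1 \
    --tpmstate0 vmdata:1,version=v2.0 \
    --net0 virtio,bridge=vmbr0 \
    --cdrom local:iso/Win11.iso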

Edit - one more:

Our thought is that the PERC will handle RAID 6 and pass a huge volume to Proxmox. We'll then use slices of that volume to create our VMs.

As a general rule, parity RAID is not recommended for virtual disk storage. You can create a RAID10 volume, or pass the disks through for use by ZFS. Again, have a read of the storage page above.
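To make the ZFS option concrete, here is a rough sketch of a RAID10-style layout across 8 NVMes once they are visible to the host as plain disks. The device names and the pool/storage name "vmdata" are placeholders; the GUI under the node's Disks -> ZFS panel does the same job.

  # striped mirrors (RAID10 equivalent) across the 8 U.2 drives
  zpool create -o ashift=12 vmdata \
    mirror /dev/disk/by-id/nvme-DRIVE1 /dev/disk/by-id/nvme-DRIVE2 \
    mirror /dev/disk/by-id/nvme-DRIVE3 /dev/disk/by-id/nvme-DRIVE4 \
    mirror /dev/disk/by-id/nvme-DRIVE5 /dev/disk/by-id/nvme-DRIVE6 \
    mirror /dev/disk/by-id/nvme-DRIVE7 /dev/disk/by-id/nvme-DRIVE8

  # common tuning for VM storage
  zfs set compression=lz4 vmdata
  zfs set atime=off vmdata

  # register the pool with PVE so it can hold VM disks and container volumes
  pvesm add zfspool vmdata --pool vmdata --content images,rootdir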
 
The "no parity raid for VM hosting" is a very old rule, I have been breaking it for over 10 years. If you have HDDs and a cacheless controller, don't break the rule.

If you have high performance SSDs and a 8 GB DDR4 write-back cache on your controller, there is no rule. The drives won't perform at their full potential, but with the acceleration from the controller, the aggregate performance will be pretty great.

I'm afraid on some systems these U.2 backplanes only give x2 PCIe lanes to each NVMe drive in order to economize on PCIe lanes. It is unfortunate but I bet the controller only has a x8 uplink.

There is only an unwritten rule about hardware RAID in general going against PVE cultural norms. Most will tell you to attach the NVMes directly to your CPU root complex and use ZFS.
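If you do end up with the drives attached directly (i.e. visible to the OS as native NVMe devices), you can check what each one actually negotiated once the hardware arrives. A sketch; exact sysfs output formatting can vary by kernel:

  # negotiated link width/speed per NVMe, straight from sysfs
  for dev in /sys/class/nvme/nvme*; do
    echo "$dev: x$(cat "$dev"/device/current_link_width) @ $(cat "$dev"/device/current_link_speed)"
  done

  # or compare capability vs. current status for all NVMe-class PCIe devices
  lspci -vv -d ::0108 | grep -E 'LnkCap:|LnkSta:'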
 
As a general rule, parity RAID is not recommended for virtual disk storage. You can create a RAID10 volume, or pass the disks through for use by ZFS. Again, have a read of the storage page above.

Thanks for all of the feedback, Alex. I'm feeling better about ordering this hardware.

There is only an unwritten rule: hardware RAID in general goes against PVE cultural norms. Most will tell you to attach the NVMes directly to your CPU root complex and use ZFS.

I've read about ZFS and it seems that it is generally the recommended approach. I'm not opposed to it. The only downside I've seen mentioned in forums is the hit to the CPU, which I'm not terribly worried about; this server will have two Xeon Gold 6444Y (16C, 3.6 GHz).

What would be the best way to pass these drives through to Proxmox? Put the controller in a specific mode?
 
Passthrough or not, a PERC card will bottleneck the individual drives.

8 NVMe drives at x4 each means you need 32 PCIe lanes to hook all these up to your system at their full width... but the PERC card only has x8.

To use the drives properly as a JBOD or with ZFS, you would need to give the PERC card back to Dell and get a different card that is not a RAID controller but simply connects your U.2 backplane directly to the PCIe lanes on the motherboard. They used to call them PCIe extenders.
 
Passthrough or not, a PERC card will bottleneck the individual drives.

8 NVMe drives at x4 each means you need 32 PCIe lanes to hook all these up to your system at their full width... but the PERC card only has x8.

To use the drives properly as a JBOD or with ZFS, you would need to give the PERC card back to Dell and get a different card that is not a RAID controller but simply connects your U.2 backplane directly to the PCIe lanes on the motherboard. They used to call them PCIe extenders.

I appreciate you. Calling Dell now.
 
Since Dell BOSS cards are used to mirror the OS, I can confirm that you can install Proxmox on the BOSS-S1 in a 13th-gen Dell. I used the CLI to configure a mirror, and it shows up as an install target during the Proxmox install. I used XFS as the file system.

Before I did that, I deleted any virtual disks, configured the PERC to HBA mode, and rebooted. Then I used Proxmox to create a ZFS pool on the rest of the drives.
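For anyone following along: once the controller is in HBA mode and the box has rebooted, the individual drives (rather than a single virtual disk) should be visible to the OS. A quick sanity check looks something like this:

  # each physical drive should appear on its own, with a stable by-id name
  lsblk -o NAME,MODEL,SIZE,SERIAL,TRAN
  ls -l /dev/disk/by-id/ | grep -v part   # use these names when building the ZFS pool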
 
The "no parity raid for VM hosting" is a very old rule,
Its not a rule, its math.

one queue means one op at a time. multiple queues in a compound raid means multiple ops at a time. if you wanted to achieve the same random performance with parity raid you'd need a raid 60 with the same number of subranks- but you'll end up with a whole lot of wasted space due to the large stripe width. VM disk workload produces lots of small parallel transactions.

8 NVMe drives at x4 each means you need 32 PCIe lanes to hook all these up to your system at their full width... but the PERC card only has x8.
I'm curious where you got this data; I couldn't find any. FWIW, the PCIe-connected version is mechanically x16. But even if true, this is largely academic: the RAID controller itself wouldn't get ANYWHERE CLOSE to what the drives are capable of, and neither will virtually any workload you put on the drives.
 
Just wanted to thank you all for your help. We worked with Dell yesterday to finalize the design. The PERC cards have been removed and the backplane has been swapped so it now interfaces directly with the board. We've kept the BOSS controller, as we don't envision an issue having it handle the RAID, and we'll just install Proxmox on that mirror without ZFS.

We've placed the order and we're expected to have them the first week of March.
 
How did things work out with your servers? How did you configure the drives?

Mat
Everything is working great. We have a memory issue that we haven't been able to pinpoint yet, but otherwise no problems.

Proxmox is installed on two 480GB SSDs in RAID 1 on a BOSS controller that Dell provided; the RAID is done at the controller level. ZFS is handled inside Proxmox using RAIDZ1 on six Dell Enterprise 3.84TB drives. These drives are just rebranded Kioxia CM6 drives with different firmware. We bought these servers during Dell's end of year, and our enterprise guy was able to get us really good pricing. If I had to buy again at the new pricing I've been given since, I would buy the servers with the BOSS controller and two M.2 SSDs installed and populate the server with my own Kioxia drives or something similar.
The ZFS ARC cache is set to 40GB.
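For anyone wondering how we cap it, it's just the zfs_arc_max module parameter. Roughly this (40 GiB expressed in bytes; adjust to your own RAM budget):

  # persist the ARC limit across reboots
  echo "options zfs zfs_arc_max=42949672960" > /etc/modprobe.d/zfs.conf
  update-initramfs -u -k all

  # apply it immediately without rebooting
  echo 42949672960 > /sys/module/zfs/parameters/zfs_arc_max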

As for the memory issue: no matter what we do or change, memory usage doesn't go above 70%, leaving almost 100GB on the table. Swap starts to be consumed and we've experienced slowdowns on VMs. This is rare enough and brief enough that, frankly, we've tabled the issue and will address it some other time.
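For reference, this is the sort of thing we plan to check when we get back to it (just a sketch of a starting point, nothing confirmed yet):

  sysctl vm.swappiness          # default is usually 60; lowering it makes the kernel prefer RAM
  free -h && swapon --show      # how much swap is actually in use and on what device
  arc_summary | head -n 40      # is ARC pinned at its 40G cap or being squeezed?

  # e.g. make the host far less eager to swap
  echo "vm.swappiness = 10" > /etc/sysctl.d/90-swappiness.conf
  sysctl -p /etc/sysctl.d/90-swappiness.conf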

If you need any other information, feel free to ask. Happy to help.

EDIT - We did have Dell remove the PERC controller, and ZFS is doing all of the RAID and parity. This also gave us more PCIe lanes from the backplane to the drives, as @alyarb pointed out earlier. They were absolutely correct and we're glad they provided this information.

 
Thank you for the quick response. I am looking at getting a couple of PowerEdge R6615s with NVMe drives. Originally, like you, I was planning on using the H965i and putting it in HBA mode; however, that seems like a waste of money. Knowing that I can bypass that and go straight to the PCIe lanes is good information. I assume once you do that you can only put NVMe drives in the drive bays, which is not a concern. Have you run any benchmarks using CrystalDiskMark? I am also planning on adding a BOSS card and just mirroring the two drives for the OS. Keeps things simple.

Mat
 
Thank you for the quick response. I am looking at getting a couple of PowerEdge R6615s with NVMe drives. Originally, like you, I was planning on using the H965i and putting it in HBA mode; however, that seems like a waste of money. Knowing that I can bypass that and go straight to the PCIe lanes is good information. I assume once you do that you can only put NVMe drives in the drive bays, which is not a concern. Have you run any benchmarks using CrystalDiskMark? I am also planning on adding a BOSS card and just mirroring the two drives for the OS. Keeps things simple.

Mat
I would take a look at the backplane. When we brought up removing the PERC card, they had to swap the backplane for one that interfaces directly with the board. Something had to change to support what we were doing. Here are screenshots from our Dell build that might help.



I think the reason some use a RAID controller is to take the load off the CPU. Since ZFS is software-based, the CPU has to do that processing, but frankly this is not something we've noticed in any capacity. Migrating a VM from one host to another puts a bit of a hit on the host, since we're not using any kind of shared storage, but that's a rare thing we built out so we can upgrade the hosts when we need to. We also do this in the middle of the night, so our staff don't even notice.
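For context, the overnight moves are just the stock live migration with local disks, something along these lines (node name and VMID are placeholders):

  # live-migrate a guest, copying its local ZFS disks to the target node
  qm migrate 101 pve-node2 --online --with-local-disks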

I've run CrystalDiskMark before and will do it again and provide screenshots; I'm at the airport right now. I think I was a little underwhelmed by the performance, which could be a configuration error on our end, but frankly everything performs great, so this wasn't something I wanted to spend a bunch of time on trying to squeeze more performance out of. We saturate the 25Gb links when transferring between hosts, so that's good enough for us.
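When I do re-test, I'll probably use fio on the host rather than CrystalDiskMark inside a guest. Something like this (a sketch; /vmdata is assumed to be the pool's mountpoint, and the size is kept well above the 40GB ARC so we're not just benchmarking RAM):

  # mixed 70/30 random read/write, 4k blocks, queue depth 32, 4 workers
  fio --name=randrw-test --directory=/vmdata --size=64G \
      --ioengine=libaio --rw=randrw --rwmixread=70 \
      --bs=4k --iodepth=32 --numjobs=4 \
      --runtime=60 --time_based --group_reporting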

This is one of the hosts with about 20 Windows 11 VMs, each assigned four cores and 16GB of RAM.

 
Funny, I'm at an airport as well. The reason I am looking at using ZFS is so I can do replication between the two servers; otherwise I would probably use the PERC controller. These days I find that the bottleneck on any system is the lack of IOPS, not CPU or memory, though of course this depends on the workload. Thanks again for your BOM, that is helpful.

Mat
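For reference, the replication I have in mind is PVE's built-in storage replication, which is what needs ZFS on both nodes (with a storage of the same name on each). From the CLI it's roughly this; the VMID, node name, schedule, and rate limit below are made up:

  # replicate VM 100 to node pve-node2 every 15 minutes, capped at 100 MB/s
  pvesr create-local-job 100-0 pve-node2 --schedule "*/15" --rate 100
  pvesr status    # shows last sync time and duration per job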
 
The drives were supplied by Dell. See screenshots:

Proxmox sees them as Dell Ent NVMe CM6 RI 3.84TB drives.
Proxmox itself is installed on the BOSS-N1 controller with (2) 480GB M.2 SSDs.

The Dell iDRAC shows them with Dell firmware and Dell branding, but they are Kioxia drives.

If I could do it again, I would just buy Kioxia drives and skip the price premium on the Dell drives. We were able to get them at a really decent price, so your mileage may vary on whether it's worth it.

 
but the PERC card only has x8.
The PERC H965i has a PCIe4 x16 interface (https://www.dell.com/support/manual...3c9cee-dcd8-4c71-b532-e5d1fd854dd1&lang=de-de), so 32GB/s of bus bandwidth; I measured 27 GB/s read with NVMes on the H965i, and write depends on the chosen RAID level. I don't think you get nearly there with ZFS.
As a general rule, parity RAID is not recommended for virtual disk storage.
I measured >13GB/s in RAID5 mode with NVMes; parity RAID is not really a problem with NVMe IOPS.
I think I was a little underwhelmed by the performance, which could be a configuration error on our end
It would have been faster if you hadn't taken out the H965i ... why not test it before giving it back ...
 
I have an R7615 with 2x H965i. One controller has 1x 960GB Dell-branded Samsung PM9A3 drive, which I was forced to buy from Dell, plus 2x non-Dell-branded identical drives (so 3x in a RAID5, just to make it up to something useful with parity RAID).

The other controller has 4x Intel P5520 3.84TB bought on the aftermarket (for £290 each, new, from a reputable supplier... compared to what, £2,000 each from the Dell website, maybe £1,000 each when you speak to them).

Same for RAM: ordered with a single 64GB stick, then bought the rest from Crucial. It ended up costing approx £10,000 + VAT, saving about £7,000 off Dell's original price, and still about £5k after negotiations, discounts, etc.

I could do some benchmarks on the H965i if it's of interest, but of course I only have the 4x NVMe on it at the moment.
 
