Supermicro hardware recommendation for NVMe drives

bsinha

May 5, 2022

Hi forum members,

We want to build a 3-node Proxmox cluster with Ceph. Each node will have five NVMe SSDs: three 3.2 TB drives for Ceph, and the remaining two in RAID-1 for a database server.

Additionally, two 240 GB SSDs per node will hold the operating system.

We have tried to contact several Supermicro vendors in India, but they only offer configurations where all five NVMe SSDs are either in HBA mode or in RAID mode.

We need a mixed mode (JBOD and RAID 5 together). How can we achieve that? Can anyone suggest a Supermicro server with such a configuration?
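
To make the intended layout easier to picture, here is a rough per-node sketch; the device names are placeholders, and the usable-capacity figure assumes Ceph's default 3x replication:

Code:
# per node (device names are placeholders)
/dev/nvme0n1  3.2 TB  -> Ceph OSD
/dev/nvme1n1  3.2 TB  -> Ceph OSD
/dev/nvme2n1  3.2 TB  -> Ceph OSD
/dev/nvme3n1          -> RAID-1 (database)
/dev/nvme4n1          -> RAID-1 (database)
2x 240 GB SSD         -> operating system
#
# Ceph raw capacity: 3 nodes x 3 OSDs x 3.2 TB = 28.8 TB
# usable with size=3 replication: roughly 28.8 / 3 = 9.6 TB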
 
Call me on +919141414242 - maybe I could suggest something. We have over 25 years of experience building servers.

Also, you want RAID 5? But you said Ceph. You need to be clear and specific about your requirements.
 
Why do you need RAID 5 at all?

For Ceph, the disks need to be in HBA mode. Since they are NVMe, connect the backplane directly to the board and not through an extra controller.
You can also do RAID-1 for the DB with ZFS; then you get the native performance of the NVMe drives without a controller slowing things down.
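
If you go the ZFS route, a minimal sketch could look like this (device names and dataset options are only examples, not a tuned recommendation):

Code:
# mirror the two database NVMe drives with ZFS (example device names)
zpool create -o ashift=12 dbpool mirror /dev/nvme3n1 /dev/nvme4n1
# a dataset with common database-friendly settings
zfs create -o compression=lz4 -o atime=off dbpool/db

ashift=12 corresponds to 4K sectors; check what your drives actually report before setting it.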
 
Our first goal is to set up Ceph. If we get sufficient IOPS from Ceph, we will definitely not go for RAID-5; RAID-5 is only a fallback.

We have experience with both ZFS and hardware RAID, and we found hardware RAID performing much better than ZFS.
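
Before deciding on the fallback, a quick way to see whether Ceph reaches the intended IOPS is a RADOS benchmark against a test pool (the pool name is a placeholder; a fio run inside a VM against an RBD disk would be the more realistic follow-up):

Code:
# 60-second 4K write benchmark; keep the objects for the read test
rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
# random read benchmark against the objects written above
rados bench -p testpool 60 rand -t 16
# remove the benchmark objects afterwards
rados -p testpool cleanup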
 
Call me on +919141414242 - maybe I could suggest something. We have over 25 years of experience building servers.

Also, you want RAID 5? But you said Ceph. You need to be clear and specific about your requirements.
I called you at that number, but it went unanswered. Please call me back at +91 8910894274. We only need RAID-5 if Ceph does not reach the IOPS we are aiming for.
 
I think your NVMe drives are fast enough.
Ceph performance is often limited by the network.
What kind of network cards and which network setup do you have for Ceph?
If the performance is not sufficient, you can tune it by creating 2 OSDs per NVMe.
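
For reference, a minimal sketch of that tuning step. On Proxmox VE, OSDs are normally created with pveceph or the GUI (one OSD per device), so splitting a device means calling ceph-volume directly; the device name below is only an example:

Code:
# create two OSDs on a single NVMe device (example device name)
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1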
 
We will use 25 Gbit ports. We will bond two 25 Gbit ports, so in total we should get 50 Gbit of throughput.

Additionally, there will not be any switch in between. The 3-node cluster will be connected in a full mesh, as suggested in this article: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server

Apart from all this, my question is: can we not keep some NVMe SSDs in RAID mode and others in non-RAID mode, as we usually can with SATA and SAS drives?

Can you please shed some light on the point of creating 2 OSDs on 1 NVMe? How would it perform better?
 
Why hardware RAID? You can use LVM, ZFS, or Ceph. An LSI RAID controller can "pass through" individual drives, but it will not be as good as having the drives connected directly to the bus. Intel has VROC, but that is also just software RAID.
 
We will use 25 Gbit ports. We will bond two 25 Gbit ports, so in total we should get 50 Gbit of throughput.

Additionally, there will not be any switch in between. The 3-node cluster will be connected in a full mesh, as suggested in this article: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
With a full mesh setup you will never reach 50 Gbit. It is better to plan with only 25 Gbit.
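For illustration, a sketch of roughly the broadcast-bond variant described in that article, using the two 25 Gbit ports of each node; interface names and the address are placeholders:

Code:
# /etc/network/interfaces fragment on one node (example names and address)
auto bond0
iface bond0 inet static
    address 10.15.15.50/24
    bond-slaves ens19 ens20
    bond-miimon 100
    bond-mode broadcast
# Ceph/cluster network over the full mesh

With bond-mode broadcast every frame goes out on both links, which is why the usable bandwidth per node stays around 25 Gbit rather than 50; the article also describes routed setups that avoid duplicating every frame.
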
Apart from all this, my question is: can we not keep some NVMe SSDs in RAID mode and others in non-RAID mode, as we usually can with SATA and SAS drives?
Depending on which chassis you have, you may be able to connect one drive cage to a RAID controller and another directly to the PCIe bus.
Can you please shed some light on the point of creating 2 OSDs on 1 NVMe? How would it perform better?
In high-performance setups, the NVMe is sometimes split into 2 partitions and two OSDs are created per NVMe to get more performance out of a single drive. You won't need to do this, as you only have 25 Gbit of bandwidth anyway; you can easily saturate that with 1 OSD per NVMe.
 
