Hi all,
We are looking into deploying a new hyper-converged (HCI) Proxmox cluster with Ceph, built on refurbished hardware with NVMe OSDs.
At this point we are looking at 7 nodes, each with 2 NVMe OSD drives and room to expand with 2 more NVMe OSDs per node.
Since we would quickly saturate a 25GbE link (two NVMe drives at roughly 3 GB/s sequential reads each already add up to around 48 Gbit/s), we should be looking into 40/50/100GbE links and switches.
This is where I'm in need of advice.
Our workload is 70 VMs with a lot of mixed use: light database usage, a lot of logging data, and plain old storage with lots of reads and only a few writes.
So we want the lowest latency and highest performance we can get for our money, together with the ease of use we get from Ceph.
We know that other solutions would yield far better performance, especially with regard to random writes and heavy I/O, but we have fallen in love with Ceph, so we'd rather give up a bit of performance and keep using it. It also lets us scale easily with more OSDs and nodes later on.
On eBay one can find several options for this: expensive native Ethernet gear (25/40/50/100GbE), Mellanox InfiniBand EDR/HDR, and Intel Omni-Path (100 Gbit/s).
I imagine two switches to keep the Ceph network redundant, with two NICs in each node. Each node already has two on-board 10GbE ports for VM access and Corosync.
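For what it's worth, this is roughly how I picture the bond over the two fast NICs on each node in /etc/network/interfaces (interface names and addresses are just placeholders, and active-backup is only one possible mode):

    auto bond0
    iface bond0 inet static
            address 10.10.10.11/24
            bond-slaves enp1s0f0 enp1s0f1
            bond-mode active-backup
            bond-miimon 100
            # dedicated Ceph network, one leg to each switch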
What I can't seem to get a clear answer on is whether it's possible to buy either Omni-Path or InfiniBand NICs and switches, put the Ceph traffic on those networks, and still achieve low latency.
Omni-Path seems to be quite cheap, and that could be a red flag, since Intel doesn't seem to be developing it anymore. But does it work for Ceph? Would it keep working with newer releases of Proxmox?
Otherwise, InfiniBand seems to allow an Ethernet mode (Mellanox VPI cards can apparently run their ports as either InfiniBand or Ethernet), so this could be a slightly more expensive alternative to Omni-Path.
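As far as I understand, Ceph itself only cares about having IP connectivity, so whether the subnets live on native Ethernet, IPoIB, or IP over Omni-Path should not change the configuration; in ceph.conf it would still just be something like this (subnets are only examples):

    [global]
            # front-side traffic from clients/VMs to the OSDs
            public_network = 10.10.10.0/24
            # back-side replication/recovery traffic between OSDs
            cluster_network = 10.10.20.0/24

What I don't know is how the latency of IPoIB or IP over Omni-Path compares to a native Ethernet fabric in practice.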
Can somebody argue for or against these options?
Thanks,
Lucas