Ceph hardware question

vRod

Renowned Member
Jan 11, 2012
Hi all,

I am looking to set up 3 Proxmox hosts and start hosting cloud services.

I have a Dell C6220 box with 3 nodes of:

2x Xeon E5-2680
256 GB quad-rank 800 MHz memory
Dual-port 10 GbE NIC

Each node has space for 4x 3.5" drives and a single low-profile PCIe 3.0 slot.

Now which drives and journal device would be ideal? I was thinking about a 280 GB Optane 900P for the journal and HGST Ultrastar drives for the HDDs. Is this a bad combo? Would there be a better combo for my situation? Would performance be good?

Also, is the 800 MHz memory an issue? I already have the DIMMs, hence why I want to use them.

Edit: I realized I can't fit a second 10G NIC in the host with the Optane SSD in place. Is it OK to mix Ceph traffic with VM traffic? I would separate them using VLANs. I have a couple of onboard 1 Gb NICs, so I could also use those for VM traffic. What would be best?

Any help would be appreciated!!

Thank you!
 
If the memory modules work in the Dell server, they should be OK, at least for a start. The servers will not run at their maximum memory speed; whether that is a problem depends on the workload you want to put on them. Of course you cannot expect the speeds of a modern system.

From the Intel data, the Optane 900P sounds like a very good choice for journaling.

For networking: Ceph is very latency-sensitive, but 10G is fine for it.
If 1G is enough for your VM traffic, separate the two.
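For example, a separated setup in /etc/network/interfaces could look roughly like this (just a sketch; interface names and addresses are placeholders for your environment, not taken from your setup):

```
auto eno1                       # 10G NIC, dedicated to Ceph
iface eno1 inet static
    address 10.10.10.11/24      # should match the Ceph public/cluster network

auto vmbr0                      # bridge on a 1G NIC for VM traffic
iface vmbr0 inet static
    address 192.168.1.11/24
    gateway 192.168.1.1
    bridge-ports eno2           # placeholder name for the onboard 1G NIC
    bridge-stp off
    bridge-fd 0
```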
 
Thank you for your response! I also have some 1600 MHz memory, but not enough to fully populate all the servers.

I have heard various people say that a 3-node cluster is not a good idea performance-wise. Is that so? I have a 4th node I could include too, but I would then have to get an extra Optane SSD and re-cable the hotswap bay for that particular node. Also, would the option with 12x 7.2K SATA drives as OSDs be an issue? Is that too few, or would it suffice?

Sorry about all these questions, I'm quite new to Ceph. :)

I don't think I will need the 10G for the VM network, so I might as well just use the 1G interfaces for that. Eventually I can bond them.
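If I go that route, bonding the two onboard 1G ports in /etc/network/interfaces might look roughly like this (a sketch; the interface names are placeholders, and 802.3ad/LACP needs a matching LAG configured on the switch):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno3 eno4     # onboard 1G NICs (placeholder names)
    bond-mode 802.3ad         # LACP; requires matching switch config
    bond-miimon 100
    # the VM bridge would then use "bridge-ports bond0"
```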
 
It depends on your workload and what you want to see as performance. Ceph will of course benefit from more OSDs and hosts, but I see acceptable performance from three nodes with 4 OSDs of 7.2K SATA each. In my case a Samsung 960 EVO NVMe is used as block.db.

So just try it with the three nodes.
 
Thank you for your help, I will give it a go! When you say it is used as block.db, do you mean as a journaling device?
 
They are configured as BlueStore. There are separate DB and WAL devices for a BlueStore OSD; specifying block.db is enough, since as I read in the docs, the WAL will then also be placed on that device.

For the partitioning of the NVMe disk: the default will create a 1 GB partition, which is quite small. Make partitions of, say, 20 GB with parted, which should be enough. You have to set the partition type GUID by hand with sgdisk, as parted currently has no matching type; otherwise the udev rules for Ceph will not work:

sgdisk -t 4:30CD0809-C2B2-499C-8879-2D6B78529876 /dev/nvme0n1

(the number before the ":" is the partition number)
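Put together, preparing two 20 GB block.db partitions on the NVMe might look like this (a sketch; the device name is a placeholder, and these commands wipe the disk, so double-check before running):

```shell
# DESTRUCTIVE: relabels the placeholder device /dev/nvme0n1
parted --script /dev/nvme0n1 mklabel gpt
parted --script /dev/nvme0n1 mkpart primary 1MiB 20GiB
parted --script /dev/nvme0n1 mkpart primary 20GiB 40GiB

# set the Ceph block.db partition type GUID so the udev rules match
sgdisk -t 1:30CD0809-C2B2-499C-8879-2D6B78529876 /dev/nvme0n1
sgdisk -t 2:30CD0809-C2B2-499C-8879-2D6B78529876 /dev/nvme0n1
```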
 
I see. I just read that with BlueStore I wouldn't actually need any journaling device? Is this true?
 
Ah I see, that makes sense then. :) I have a different question though: I also have 8x WD Red (not the Pro) and was wondering if these would perform okay as well? I would then get 4 more to fill up the chassis; the reason is that these drives are 6 TB, so I would get a bit more space in the end. I think the normal WD Reds are about 5400-5900 rpm. Would that be an issue?
 
