Advice needed on a new 7-node cluster with Ceph.

HiltonMundell

Member
Jan 10, 2021
Hi

I have been lurking for a while and reading as much as I can, and I am now planning a new cluster with Ceph storage. I have the following hardware and need some advice on the best way to set it up.

4 x Dell R720, 2 x 10-core CPUs, 128 GB RAM, 8 x 3.5" bays
3 x Dell R630, 2 x 8-core CPUs, 128 GB RAM, 8 x 2.5" bays

Each server will have dual 128 GB SSDs in RAID 1 for the OS and dual 10 Gbit Ethernet.

I have the following storage available:
7 x 1TB SSD
7 x 1TB HDD (2.5")
8 x 4TB HDD (3.5")
4 x 8TB HDD (3.5")

Cost is an issue and there is no real budget to add to what we have.

I will be running this in production for an SMB with various VMs, hosted NAS appliances (QNAP) and some hosted PBXes, and HA is important. The load is not heavy at all right now, but it is expected to grow quite fast over the next 12 to 24 months. I plan to have two pools, one with SSDs and one with HDDs, and to put the PBXes and some of the VMs on the SSD pool while the hosted NAS and the other VMs go on the HDD pool. (Current maximum pool needs: 2TB SSD and 12TB HDD.)
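For the two pools, my current thinking is to use Ceph device classes so that each pool only touches the matching drive type. A rough sketch of what I have in mind, pieced together from the docs (pool names and PG counts are placeholders, untested):

```
# CRUSH rules limited to one device class each
ceph osd crush rule create-replicated replicated-ssd default host ssd
ceph osd crush rule create-replicated replicated-hdd default host hdd

# Pools bound to those rules
ceph osd pool create ssd-pool 128 128 replicated replicated-ssd
ceph osd pool create hdd-pool 128 128 replicated replicated-hdd
```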

My questions are as follows:

1. Is it better to have separate storage nodes and compute nodes? My thinking is to use the 4 Dell R720s as storage nodes, put the storage drives there, and use the R630s as compute nodes.
2. The other option is to spread all the drives evenly amongst the seven servers in a hyper-converged setup.
3. Is it problematic to use different HDD sizes? I'm not sure the 1TB HDDs are even worth using.
4. Is there a better way than what I am thinking at the moment?

Thanks in advance for any replies.
 
First things first.

If you create a Ceph cluster, make sure the OSDs in the nodes are similar. If you have differently sized disks, spread them across the cluster so that each node has the same configuration. Otherwise (and even then, to a certain degree) the larger OSDs will store proportionally more data; as a result they see more IO and can become a bottleneck.
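To see how evenly the data actually lands, `ceph osd df tree` shows the utilization per OSD along the CRUSH hierarchy, and a disproportionately full OSD can have its CRUSH weight lowered by hand. A sketch (the OSD ID and weight are placeholders):

```
# Utilization and weight per OSD, grouped by the CRUSH hierarchy
ceph osd df tree

# Lower the CRUSH weight of one OSD (the default weight is
# derived from the raw capacity in TiB)
ceph osd crush reweight osd.12 3.5
```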

If cost is an issue and there is no budget to extend the hardware, but you expect considerable growth in the foreseeable future, then don't expect too much from the cluster: it could perform poorly once the load picks up.

Regarding the cluster design: the more nodes Ceph has, the easier it is for Ceph to recover from failures and to spread the load. But a 7-node hyperconverged setup would mean skipping the 8TB disks: there are only four of them and they only fit the 3.5" bays of the R720s, so they cannot be spread evenly across all seven nodes.

If you want to go down the 4-node hyperconverged route, you have two options:
- Two completely separate clusters: one that just manages Ceph (you don't necessarily need Proxmox VE for that one) and a compute cluster that connects to the Ceph cluster over the network (see the storage.cfg sketch below).
- One mixed, larger cluster, where the 4 nodes run a full Ceph installation and provide the storage, and the remaining nodes do not get any Ceph services. But even in this case, consider setting up the same Ceph repository on all nodes so that the Ceph clients get the same updates (repository example below).
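For the first option, the compute cluster talks to the external Ceph cluster through an RBD storage definition in /etc/pve/storage.cfg; a sketch with placeholder monitor IPs and pool name (the keyring additionally goes to /etc/pve/priv/ceph/<storeid>.keyring):

```
rbd: ceph-ssd
        content images,rootdir
        krbd 0
        monhost 10.10.10.11 10.10.10.12 10.10.10.13
        pool ssd-pool
        username admin
```

For the second option, matching the Ceph repository on the client nodes could look like this (adjust the Ceph and Debian release names to the versions you actually run):

```
# /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-octopus buster main
```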

And thinking about it, you could also run the OSDs on the four R720s and the MONs and MGRs on the remaining three nodes, spreading the load of the Ceph services a bit further.
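Creating the additional MONs and MGRs on those nodes is a one-liner each once the Ceph packages are installed there; a sketch using the pveceph tooling (PVE 6.x syntax):

```
# Run on each of the three R630s
pveceph mon create
pveceph mgr create
```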


If you run OSDs on only four nodes, you also have to think about what to do with the remaining three of the seven SSDs / 2.5" HDDs: use them somewhere else, or get one more of each so that every OSD node can have two.


Also keep in mind how you set up your networks. The 10 Gbit links should be used solely for Ceph traffic. Make sure to have at least one other physical network dedicated to the Proxmox VE cluster traffic (Corosync), especially if you plan to use the Proxmox VE HA stack! https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_requirements
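As a rough sketch, the Ceph networks end up in /etc/pve/ceph.conf (subnets are placeholders; cluster_network is optional and only worthwhile if you can physically separate it):

```
[global]
    public_network  = 10.10.10.0/24
    cluster_network = 10.10.11.0/24
```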
 
