Migration from 2-node shared storage to 3-node Ceph

paoloirons

May 5, 2022
Hi all,

I would like to upgrade my PVE cluster from 2 nodes + a QDevice with QNAP shared storage to a 3-node cluster with Ceph:
- what are the requirements to get 3-4 TB of storage for the VMs?
- are 10 Gbit network cards enough for Ceph traffic?
- are there preferred SSD models for Ceph?
- are there any particular best practices?
- how much RAM is recommended? The nodes currently have 48 GB
- each physical node will need a PCIe HBA card; are there preferred HBA models?

I haven't been able to find this information here or elsewhere on the internet.

Thanks, guys


Paolo
 
Hello,

I don't know what kind of applications you are planning to run on your cluster, but for a production (enterprise) environment I can give you a few pointers:

- what are the requirements to get 3-4 TB of storage for the VMs?

Ceph by default uses a 3/2 setting, i.e. size=3 and min_size=2 replicas (and it is NOT recommended to change this). This means that for every GB of data you need a total of 3 GB of disk space, spread over all your nodes. So to have e.g. 4 TB of usable space, each server needs 4 TB of disks, in theory. BUT:
  • Ceph really doesn't like running out of disk space, so you should keep at least 10% free space at all times.
  • Also: when an OSD or node goes down, Ceph will try to re-create the now-missing replicas on the remaining OSDs (wherever the CRUSH rules allow it). This recovery also requires spare storage. With only three nodes, each node should stay below about 66% used storage (considering point 1, better 60%).
So in order to get a resilient 4 TB Ceph pool, you have to calculate:
replicas: 4 TB × 3 = 12 TB of replicated data
failover headroom: 12 TB / 0.6 = 20 TB total

That makes it 20 TB / 3 ≈ 6.7 TB of raw storage per node. That could be e.g. four 1.67 TB disks (just as an example; I doubt disks come in exactly that size).
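The arithmetic above can be sketched as a tiny calculation. This is only a back-of-the-envelope helper using the assumptions from this post (size=3, 60% fill ceiling, 3 nodes); the numbers are not a hard rule:

```python
# Rough Ceph capacity planning with the assumptions from above:
# size=3 replicas, keep each node below ~60% utilization so
# recovery headroom remains, storage spread over 3 nodes.

REPLICAS = 3
MAX_FILL = 0.60   # target utilization ceiling per node
NODES = 3

def raw_per_node(usable_tb: float) -> float:
    """Raw TB of disk each node needs for `usable_tb` of usable pool space."""
    total_replicated = usable_tb * REPLICAS            # 4 TB -> 12 TB
    total_with_headroom = total_replicated / MAX_FILL  # 12 TB -> 20 TB
    return total_with_headroom / NODES                 # 20 TB -> ~6.7 TB/node

print(round(raw_per_node(4.0), 1))  # prints 6.7
```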

- are 10 Gbit network cards enough for Ceph traffic?

More speed is always better, but 10 Gbit is a good middle ground. I would not use 1 Gbit cards for Ceph. If you plan to stick with 3 nodes you could also consider a full-mesh network; that way you don't need switches and could use 25/40/100 Gbit cards: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
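For illustration, separating Ceph traffic onto its own network is done with the `public_network` and `cluster_network` options in /etc/pve/ceph.conf. A minimal sketch (the subnets here are placeholders, not a recommendation for your setup):

```ini
[global]
    # client/monitor traffic (placeholder subnet)
    public_network = 10.15.15.0/24
    # OSD replication/recovery traffic (placeholder subnet)
    cluster_network = 10.14.14.0/24
```

Putting the cluster network on the fastest links helps most, since replication and recovery traffic is what saturates the wire.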

- are there preferred SSD models for Ceph?

This can't be stressed enough: DON'T USE CONSUMER SSDs. A Samsung 850 EVO might be fine for a gaming PC, but it does not belong in a server.

At the very least, the SSDs need power-loss protection (PLP). Using SSDs without PLP is not only bad for data safety, it also has a severe impact on performance. There's a great Ceph disk benchmark here:
https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/

- are there any particular best practices?

Get the disks as close to Ceph as you can: use HBAs, not RAID controllers (and no, RAID0 does not count). There are also things you should do on top of the basic configuration done by the PVE wizard in the GUI, like adding more managers and monitors. It is extremely advisable to read the Ceph article in the PVE wiki:
https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster

- how much RAM is recommended? The nodes currently have 48 GB

Ceph definitely needs quite a bit of extra RAM. IIRC the rule of thumb is to budget 5 GB of RAM per OSD (i.e. per physical disk used by Ceph). Considering that a solid Ceph setup should have at least 4 OSDs per host, hosts with 48 GB of RAM will not leave you much for your VMs.
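As a quick sanity check (the 5 GB/OSD figure is only a rule of thumb, and the OS overhead below is my own hypothetical allowance, not a fixed number):

```python
# Rough RAM budget for a hyper-converged PVE + Ceph host.
RAM_PER_OSD_GB = 5    # rule-of-thumb figure from above
OSDS_PER_NODE = 4     # e.g. four Ceph disks per host
OS_OVERHEAD_GB = 4    # hypothetical allowance for PVE/OS itself

def ram_left_for_vms(total_ram_gb: int) -> int:
    """RAM remaining for VMs after Ceph and OS overhead."""
    ceph_ram = OSDS_PER_NODE * RAM_PER_OSD_GB  # 4 * 5 = 20 GB for OSDs
    return total_ram_gb - ceph_ram - OS_OVERHEAD_GB

print(ram_left_for_vms(48))  # prints 24
```

So of the 48 GB mentioned, roughly half would already be spoken for before any VM starts.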

Kind regards,
Benedikt
 
