New CEPH Project

netpana

New Member
Jun 5, 2024
Hi guys, I would like some advice about a completely new setup for Proxmox Ceph. I will appreciate as much help as you can give, and thanks in advance.

I would like to build the Ceph cluster with these servers:

The idea is to set up three servers like this one:

Dell R640 12SFF
CPU: 2 × Intel Xeon Gold 5217 (8C, 11 MB cache, 3.00 GHz)
RAM: 4 × 32 GB DDR4 RDIMM 2666 MHz
RAID: no RAID controller, or HBA355i adapter
LAN adapter: 4-port 10 Gb SFP+ NDC
Option 1:
SATA SSD: 4 × 4 TB disks
Option 2:
SATA SSD: 8 × 2 TB disks

For now it will host just a few machines:

4 × RADIUS servers (FreeRADIUS)
1 × GenieACS (around 7,000 OLTs)
4 × VoIP PBXs (Debian-based)
1 × Zabbix (around 3,000 devices)
1 × Graylog
1 × Zammad server
And some more will come soon.

And really, thanks in advance ... looking forward to getting into the Proxmox world.
 
You didn't tell us where the OS will live. With the disks listed I would opt for 2 × 2 TB in a ZFS mirror for PVE. All other disks would go to Ceph, one OSD each.
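Not a prescribed procedure, just a minimal sketch of how that layout could be scripted on each node. It assumes Ceph is already set up through Proxmox's pveceph tooling, and the device names are placeholders (/dev/sda and /dev/sdb would hold the ZFS mirror here):

```python
import subprocess

# Placeholder device names - adjust to the real disk layout of the node.
osd_disks = ["/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf"]

for disk in osd_disks:
    # "pveceph osd create <device>" is Proxmox VE's CLI for turning a
    # blank disk into a Ceph OSD.
    subprocess.run(["pveceph", "osd", "create", disk], check=True)
```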

With this small number of disks/OSDs you want the smaller ones. When (not if) one of these disks dies, the other OSDs on that node have to absorb the data of the dead disk. With only two disks per node for OSDs you could only fill each one to ~45%, because moving its content to the other OSD would fill that one up to 90%.
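A quick back-of-the-envelope sketch of that arithmetic. Assumptions: two disks per node go to the OS mirror, data is spread evenly, and 90% is the post-recovery ceiling from above:

```python
def max_safe_fill(osds_per_node: int, post_recovery_limit: float = 0.90) -> float:
    """Highest fill level per OSD such that, when one OSD on a node dies,
    the remaining OSDs on that node can absorb its data without exceeding
    the limit. A fill level f on n OSDs becomes f * n / (n - 1) after one
    failure, so f must stay below limit * (n - 1) / n."""
    n = osds_per_node
    return post_recovery_limit * (n - 1) / n

print(max_safe_fill(2))  # Option 1: 2 OSDs left per node -> 0.45 (~45%)
print(max_safe_fill(6))  # Option 2: 6 OSDs left per node -> 0.75 (75%)
```

That is the case for the 8 × 2 TB option: more, smaller OSDs per node mean each one can run much fuller before a single disk failure becomes a capacity problem.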

With only three nodes, and Ceph's default replication rule being "size=3, min_size=2", you already start at the lowest limit. As soon as one node dies, the pool is degraded, and it will stay degraded, as there is no other (fourth) node to migrate the Ceph data to, so self-healing is not possible. (The failure domain is "node", not "OSD".)
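A toy counting sketch (not Ceph code, just an illustration of the argument above): every placement group needs size=3 replicas on three distinct nodes, and after a node failure only two remain.

```python
from itertools import combinations

def valid_placements(nodes, size):
    # With the failure domain set to "node", a valid placement for a
    # placement group is any combination of `size` distinct nodes.
    return list(combinations(nodes, size))

nodes = ["node1", "node2", "node3"]
print(valid_placements(nodes, 3))  # [('node1', 'node2', 'node3')] - exactly one option
nodes.remove("node3")              # one node dies
print(valid_placements(nodes, 3))  # [] - no valid placement left, so no self-healing
```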


Disclaimer: I am not a "real" Ceph user!
 