What do you think of this Proxmox/Ceph cluster?

crashpb

So I have this cluster in mind:
PLEASE keep in mind that hardware availability here, whether new or used, server or workstation, is quite unorthodox and different from what's available in the US/EU.

For example, a single used EPYC 9654 costs 10x what a 9950X does, and that's just a single CPU; a single 7960X costs 4x a 9950X.
All of the "sorta" reasonably priced used server stuff is quite old Sandy Bridge (maybe Skylake) based Intel HP servers.

My main goal is very, very fast single-core performance, due to badly implemented accounting and finance software.

So here is what I have in mind for our main site (we have 4 sites; site 1 is going to be the primary):

6x:
Asus X670E-E
Ryzen 9 9950X
256GB ECC DDR5
dual-port 100Gb NIC (ConnectX-5 most likely)
3x Samsung PM9A3 15.36TB or Kioxia CD6-V 12.8TB (if I can get a reasonable price on the Kioxia); planning to use erasure-coded Ceph with K=4 and M=2 (see the capacity sketch below this list)
3x 20TB HDDs (mostly for bulk storage)
a rack-mount case (2U or 4U) with at least 3-5x hot-plug NVMe Gen4 storage bays and 4x SATA LFF bays
a couple of connectors/converters to connect the 3x M.2 NVMe ports (coming from the CPU) to the backplane
I'm also planning on putting a disk (NVMe, or 3 SATA disks in RAIDZ1 via an M.2-to-SATA card) in the Wi-Fi card's slot for Proxmox boot storage (or using two to three USB-to-M.2 disks).
A 100Gb/s switch (or maybe 2 if I can stretch the budget): a MikroTik CRS520, or a used Cisco 100Gb/s switch if I can find one for the same price or cheaper.
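
To sanity-check the NVMe pool sizing, here is a rough capacity sketch for the K=4/M=2 erasure-coded pool across the 6 hosts (assuming failure domain = host and the 15.36TB Samsung drives; this is raw math before Ceph overhead and full ratios):

```python
# Rough usable-capacity estimate for the EC NVMe pool.
# Assumptions: 6 hosts, 3x 15.36 TB NVMe per host, EC profile k=4 m=2,
# failure domain = host (so each host holds one chunk of every object).
hosts = 6
nvme_per_host = 3
nvme_tb = 15.36
k, m = 4, 2

raw_tb = hosts * nvme_per_host * nvme_tb   # ~276.5 TB raw
efficiency = k / (k + m)                   # 4/6 ~ 66.7%
usable_tb = raw_tb * efficiency            # ~184 TB before overhead

print(f"raw: {raw_tb:.1f} TB, efficiency: {efficiency:.1%}, usable: ~{usable_tb:.1f} TB")
```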

This cluster is going to host our VMs (AD DS, AD CS, accounting, VoIP, etc.) and Ceph is going to act as our storage backend for VM disks (NVMe disks) and VM backups (HDDs).

For the other 3 non-primary sites, I have a 3-node Proxmox/Ceph cluster in mind (same hardware, but 3 nodes with replicated Ceph).
* I might switch to 40Gb/s networking on the secondary sites to save on costs.

A few caveats:

* Utilizing the 3 CPU-sourced PCIe x4 NVMe slots on the X670E-E board, the primary x16 PCIe slot switches to x8 mode, which will somewhat limit my NIC performance. The NIC is a PCIe Gen4 x16 card and an x8 slot is about 125Gb/s max, so I can't saturate the card's theoretical 200Gb/s (2x 100Gb/s ports) bandwidth, which I think is going to be fine (rough math below the caveats).

* The board will still have a PCIe Gen4 x4 slot and an M.2 Gen4 slot available, which I might later use (maybe) for an HBA or some other hardware. Considering these ports come from the chipset, I don't plan on using them for NVMe (to prevent uneven drive latencies, although I'm not sure if Ceph cares too much about it).

Since my motherboard lacks IPMI, for management purposes I'm planning to use a NanoKVM or PiKVM.
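
On the x8 bandwidth caveat above, the back-of-the-envelope math (PCIe Gen4 runs at 16 GT/s per lane with 128b/130b encoding; real throughput will be a bit lower still due to protocol overhead):

```python
# Rough PCIe Gen4 x8 throughput vs. a dual-port 100Gb/s NIC.
gt_per_lane = 16.0        # PCIe Gen4: 16 GT/s per lane
encoding = 128 / 130      # 128b/130b line encoding
lanes = 8

lane_gbps = gt_per_lane * encoding   # ~15.75 Gb/s per lane
slot_gbps = lane_gbps * lanes        # ~126 Gb/s for the x8 slot
nic_gbps = 2 * 100                   # 200 Gb/s of NIC line rate

print(f"x8 slot: ~{slot_gbps:.0f} Gb/s vs NIC: {nic_gbps} Gb/s")
```

So the slot covers one port at line rate but not both ports at once, which is what I meant by "fine".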

My main concern is how much RAM Ceph is going to eat up.
Will I be able to have 192GB (or maybe more) available per host?
I am planning to set up HA on Proxmox.
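
On the RAM question, here is a rough per-host budget, assuming the BlueStore default osd_memory_target of 4 GiB per OSD and 6 OSDs per node (3 NVMe + 3 HDD); the MON/MGR and host-OS headroom numbers are guesses on my part, not measurements:

```python
# Rough per-host RAM budget with Ceph colocated on the PVE nodes.
total_gib = 256
osds_per_host = 6            # 3x NVMe + 3x HDD
osd_memory_target_gib = 4    # BlueStore default target; OSDs can spike above it during recovery
mon_mgr_gib = 2              # assumed headroom for a colocated MON/MGR
host_os_gib = 8              # assumed headroom for Proxmox itself

ceph_gib = osds_per_host * osd_memory_target_gib + mon_mgr_gib
vm_gib = total_gib - ceph_gib - host_os_gib

print(f"Ceph target: ~{ceph_gib} GiB, left for VMs: ~{vm_gib} GiB")
```

That leaves a bit over 220 GiB on paper, so 192GB for VMs looks doable, with the caveat that the OSD memory value is a target rather than a hard limit.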
 