Proxmox + Ceph build - Suggestions?

Mar 19, 2024
Hello,

We are migrating from XCP-NG over to Proxmox. At the same time, we will be purchasing new (new to us, but refurbished) equipment.
We've landed on 3x HP Proliant DL380 G10 servers, running these specs on each server:
2x Intel Xeon Platinum 8173M 28-Core 2.0 GHz (3.5GHz turbo)
8x 32GB 2666MHz Registered ECC DDR4
2x 2-port 10G NICs
2x Micron 7450 MAX U.3 (U.2) 1.6TB NVMe drives OS/boot
8x Micron 7450 MAX U.3 (U.2) 1.6TB NVMe drives for Ceph pool

We will see if we can run direct fiber between the hosts, but it might have to be switched (Cisco N5K-C5596UP). Will there be any performance difference if we go switched with a 20G LAG between all hosts?
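If we do go switched, I'm assuming something roughly like this in /etc/network/interfaces on each node for the Ceph bond (interface names and addresses below are placeholders, not our final layout):

auto bond0
iface bond0 inet static
    address 10.10.10.11/24
    bond-slaves enp65s0f0 enp65s0f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    # LACP bond of the 2x 10G ports towards the Nexus, carrying Ceph traffic

From what I understand, a single Ceph connection would still only use one 10G member of the LAG (LACP hashes per flow), so the 20G is aggregate across flows rather than per-stream.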

We will also be using Proxmox Backup for now, possibly Veeam in the future. We'll be going for Proxmox support for both VE and Backup Server.

Is there anything we haven't thought about here? Our current workload consists of a variety of Linux servers and a couple of Windows Servers, mostly database-heavy stuff (monitoring such as Grafana with InfluxDB, Zabbix, Prometheus, etc.).

I do believe this setup should be good enough and then some, but I'd like some external input. Ask me anything and I'll answer what I can.
 
2x Micron 7450 MAX U.3 (U.2) 1.6TB NVMe drives OS/boot
sounds a bit overkill for the OS drives. Half a TB is usually plenty already.

2x 2-port 10G NICs
They hopefully have more NICs, or you can add them afterwards. At least one dedicated 1 Gbit link is recommended for the Proxmox VE cluster communication via Corosync (configuring Corosync to use additional networks is also a good idea). And then you have the MGMT and production traffic, plus the Ceph traffic, which, with this number and type of disks, can easily saturate 10 Gbit.
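As a rough sketch (node names and addresses are made up), an additional Corosync link is just an extra ringX_addr per node plus a matching interface entry in /etc/pve/corosync.conf:

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    # link 0: dedicated corosync network
    ring0_addr: 10.0.0.11
    # link 1: second network, e.g. MGMT
    ring1_addr: 10.0.1.11
  }
  # pve2 and pve3 accordingly
}

totem {
  cluster_name: examplecluster
  config_version: 2
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}

Remember to bump config_version whenever you edit the file.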

Also, I don't know how the newer HP machines are, but make sure that you have an HBA or put the RAID controller into HBA mode.
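With NVMe drives like those, there should ideally be no RAID controller in the data path at all; a quick sanity check is that all of them show up as plain NVMe devices, for example:

# should list each 1.6TB drive individually, with TRAN = nvme
lsblk -d -o NAME,MODEL,SIZE,TRAN
# nvme-cli view with model, serial and firmware per drive
nvme list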

Keep in mind that each Ceph service will need CPU and memory resources. As a rule of thumb, count one full CPU core per Ceph service. With 1 MON, 1 MGR and 8 OSDs, that is 10 cores for one server. So it really depends on how many CPU resources you need for your VMs.
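A rough back-of-the-envelope for one of your nodes, assuming Ceph's default osd_memory_target of 4 GiB per OSD:

CPU:  1 MON + 1 MGR + 8 OSDs            -> ~10 cores reserved for Ceph
RAM:  8 OSDs x 4 GiB osd_memory_target  -> ~32 GiB
      + a few GiB for MON, MGR and OS   -> call it ~40 GiB
      256 GiB total - ~40 GiB           -> roughly 216 GiB left for VMs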
 
The drives will be running on an HBA.
We are using 1.6TB drives for the OS simply so we don't have to keep as many different disks as cold spares, and the price difference isn't that big.

I'm going to look into some different CPUs. Does each MON/OSD really use one full physical core, no matter what CPU?
 
I've edited the post; we're now planning to use Platinum 8173M processors instead, gaining us an additional 24 cores per server.
We do sacrifice some clock speed, though.
 
