Advice on Ceph for Homelab

gr0ebi

New Member
Aug 15, 2022
Hey guys,

I'm currently running a single ten-year-old 4c/8t ESXi host with a local consumer SATA SSD RAID 5 in my homelab to host my VMs.
My current VMs are: 5x Debian servers (TeamSpeak, Docker host, chat server, home automation, DLNA server) and 3x Windows servers (2x AD DC, print server).
I want to switch over to a 3-host Proxmox full-mesh configuration, ideally with Ceph for my VMs.
Since I'm new to Ceph and only have minor experience with Proxmox (only single-host setups so far), I could use some input on storage and network.
I only want to host my VMs on Ceph (my data lives on an external NAS), and I'm not sure if my planned setup is feasible.

I was looking into buying three AsrockRack 1U4LW-X570/2L2T RPSU servers with 6c/12t CPUs, meshing them together over their dual onboard 10GbE, and adding a dual-port 10Gb PCIe NIC to each server for the uplink to my core switch; a sketch of the mesh configuration I have in mind is below. I want 2 TB of usable capacity (max. 4x SATA SSD per host) for my VMs on Ceph, and I'm not sure whether I need to buy enterprise/prosumer SSDs to get enough performance. I'd also like to know whether the single 10Gb uplink per host or the 6c/12t CPU might become a bottleneck.
These servers also have 2x 1GbE; maybe those can be used for something useful that I might have forgotten.
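
For the mesh itself, I was thinking of something along the lines of the routed setup described in the Proxmox "Full Mesh Network for Ceph Server" wiki article. The interface names and addresses here are just placeholders for what node 1 could look like:

# /etc/network/interfaces (excerpt, node 1 = 10.15.15.50)
# ens19 -> direct link to node 2 (10.15.15.51)
auto ens19
iface ens19 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.51/32 dev ens19
        down ip route del 10.15.15.51/32

# ens20 -> direct link to node 3 (10.15.15.52)
auto ens20
iface ens20 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.52/32 dev ens20
        down ip route del 10.15.15.52/32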

I don't want to cheap out on hardware; I just want some input from people who have experience with Ceph/Proxmox HCI.

Thanks a lot
 
I have a 3-node Ceph cluster running on 13-year-old server hardware. It uses full-mesh broadcast bonded 1GbE networking and works just fine. That's right: the Ceph public, private, and Corosync traffic all run over 1GbE without problems. There is even a 45Drives blog post about using 1GbE networking in their test environment, and it works fine for them too.

Of course, best practice dictates that the Ceph public, Ceph private, and Corosync networks should be on separate physical networks, but I also have a 5-node Ceph cluster with 10GbE networking carrying all of that traffic together, and it is working just fine too.
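
If you do want to separate them, that comes down to the public_network and cluster_network settings in /etc/pve/ceph.conf (easiest to get right at setup time); the subnets below are just examples:

[global]
    public_network = 192.168.10.0/24    # monitor and client (VM) traffic
    cluster_network = 10.15.15.0/24     # OSD replication and heartbeat traffic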

If you want the best IOPS for your VMs, do the following (example commands are below the list):

Set write cache enable (WCE) to 1 on SAS drives (SATA drives should already have WCE enabled)
Set the VM cache to none
Use the VirtIO SCSI single controller and enable the IO thread and discard options
Set the VM CPU type to 'host'
Set the IO scheduler in Linux VMs to none/noop
Run the latest version of Ceph, which is Quincy at this point in time (Ceph performance improves with each release)
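
For the write cache bit, something like this should do on the hosts; sdparm covers SAS, hdparm covers SATA, and /dev/sdX is a placeholder for your OSD disk:

sdparm --get=WCE /dev/sdX    # check the write cache enable bit (SAS)
sdparm --set=WCE /dev/sdX    # enable write cache (SAS)
hdparm -W /dev/sdX           # check write caching (SATA)
hdparm -W1 /dev/sdX          # enable write caching (SATA)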
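
The VM settings can be done in the GUI or with qm, roughly like this; VM ID 100 and the disk volume name are placeholders for your own setup:

qm set 100 --scsihw virtio-scsi-single --cpu host
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none,iothread=1,discard=on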
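
Inside the Linux guests you can check and change the scheduler via sysfs, or pin it with a small udev rule; device names depend on whether the disks show up as sdX (SCSI) or vdX (VirtIO block):

cat /sys/block/sda/queue/scheduler            # shows e.g. [mq-deadline] none
echo none > /sys/block/sda/queue/scheduler    # change at runtime

# persistent, e.g. in /etc/udev/rules.d/60-ioscheduler.rules:
ACTION=="add|change", KERNEL=="sd[a-z]|vd[a-z]", ATTR{queue/scheduler}="none"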
 
