Storage / Ceph questions for a basic 3 node homelab

Nicoloks

New Member
Apr 26, 2024
Hi All,

Currently playing with a 3 node cluster in my home lab, trying to build some skills before I migrate my GCP hosted services back in house. In no way expecting cutting edge performance from this setup, just looking for some advice on my options / best practices. Each node is an old HP EliteDesk 800 G3 (i5-6500) with 32GB memory, an LSI 9200-8E HBA connected to 8 x 1TB 7200 RPM 2.5" SATA HDDs, and a Mellanox ConnectX-4 providing 10Gbit for Ceph over a DAC ring IPv4 network. On the internal SATA ports are a 256GB SSD for Proxmox and 2 x 1.6TB Intel S3510 enterprise SSDs.

Would appreciate a steer on the following.

1. I'm currently in full lab mode, especially for poking around Ceph. Wondering if anyone knew of any scripts to spin up / tear down Ceph cluster components? I found an older bash script to remove all OSDs on a host, which mostly seems to work, however it doesn't wipe the drives ready for re-use. Wondering if there are already some community scripts for this before I go re-inventing the wheel?
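To be clear about what I'm after, it's basically something along these lines wrapped up properly (sketch only; the OSD IDs and /dev/sd* names are placeholders for my drives):

```
# Sketch: tear down all OSDs on this host, then wipe the backing disks
# so they can be re-used. OSD_IDS and device names are placeholders.
OSD_IDS="0 1 2 3 4 5 6 7"

for id in $OSD_IDS; do
    ceph osd out "$id"
    systemctl stop "ceph-osd@$id"
    ceph osd purge "$id" --yes-i-really-mean-it
done

# Zap the data disks so they show as available for new OSDs again
for dev in /dev/sd[b-i]; do
    ceph-volume lvm zap "$dev" --destroy
done
```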

2. In terms of Ceph performance, I've been happy with the results when using my enterprise SSDs as DB & WAL devices for the 1TB HDD OSDs. Where I'm struggling is redundancy; I must be using some bad search terms or just looking in completely the wrong direction. It seems to me that the most resilient way to configure this would be (at a minimum) to have the 2 x enterprise SSDs in a mirror serving the DB and WAL to my low-performance spinning OSDs, but as far as I can see Ceph does not support any form of hardware or software mirror for this purpose. Unless I'm reading this wrong, having a single drive (even an enterprise SSD) serve the DB and WAL for multiple OSDs effectively pushes the failure domain up towards the host level, since losing that one SSD takes out every OSD it backs, which doesn't seem particularly desirable to me. What is the best way to configure a mixed HDD / SSD pool so that it gets the performance benefit of having the DB & WAL on the SSDs?
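To illustrate what I mean by the current layout, it's roughly the following (sketch only; device names and DB sizes are placeholders for my hardware, and as far as I can tell each call carves a DB/WAL LV for that OSD out of the shared SSD):

```
# Sketch: HDD OSDs with DB (and WAL) on a shared enterprise SSD.
# First SSD backs four HDDs, second SSD backs the other four.
for dev in /dev/sd[b-e]; do
    pveceph osd create "$dev" --db_dev /dev/sdj --db_size 100
done

for dev in /dev/sd[f-i]; do
    pveceph osd create "$dev" --db_dev /dev/sdk --db_size 100
done
```

So if /dev/sdj dies, my understanding is all four OSDs behind it go with it, hence the question about mirroring.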