Best storage solution for a 3 node cluster in a homelab?

lucas2000

New Member
Jul 8, 2021
Currently trying to define the storage solution for my homelab, mostly for VMs running Home Assistant, Radarr, Sonarr, Overseerr, Grafana, UniFi, nginx, etc. The idea is to keep the VMs, especially the home automation one, running even if there is a hardware failure. I want to be able to take down one of the servers without any interruption, and to run the cluster for a few days while I get a replacement part/server.

I currently have 3 Dell R420 servers with the following specs:

  • 96G of RAM
  • dual Xeon CPUs
  • 2-port 10G NIC
  • 2-port 1G NIC
  • 3x 120G SSDs (have 4 drive slots)
So my initial plan was to install Proxmox on a RAIDZ using the 3x 120G SSDs, get an NVMe PCIe adapter and an NVMe disk (500G), and use Ceph. That means 3 nodes and 3 OSDs, using both 10G ports, one for the public network and one for the sync/cluster network. I also have an UNRAID server that I was going to use to store backups. But I'm not sure this is the best solution; I was just reading this article https://www.servethehome.com/building-a-proxmox-ve-lab-part-2-deploying/ and it got me thinking whether GlusterFS is a better plan.

Because I have limited drive slots, I could instead install Proxmox on a 2-disk RAID1 array and get 2 more SSDs (instead of the NVMe PCIe adapter and the 500G NVMe disk), create a RAID0 for speed, use that for ZFS, and run GlusterFS on top of that. Or just go straight ZFS with 2 SSDs and set up HA.
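
To sanity-check the capacity side of the initial plan, here is a minimal sketch. Assumptions (mine, not from the thread): Ceph's default replicated pool size of 3, the common rule of thumb of keeping OSDs below roughly 80% full, and the 500G NVMe size mentioned above.
Code:
# Rough usable-capacity estimate for the initial plan: 3 nodes, one 500G NVMe OSD each,
# using a replicated Ceph pool with size=3 (one copy per node).
nodes = 3
osds_per_node = 1
osd_size_gb = 500          # hypothetical NVMe size from the plan above
replica_size = 3           # Ceph default for replicated pools
fill_ratio = 0.8           # common rule of thumb to leave headroom for rebalancing

raw_gb = nodes * osds_per_node * osd_size_gb
usable_gb = raw_gb / replica_size * fill_ratio

print(f"Raw capacity:    {raw_gb} GB")
print(f"Usable (approx): {usable_gb:.0f} GB with size={replica_size}")
# -> Raw capacity:    1500 GB
# -> Usable (approx): 400 GB with size=3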
 
But I'm not sure this is the best solution; I was just reading this article https://www.servethehome.com/building-a-proxmox-ve-lab-part-2-deploying/ and it got me thinking whether GlusterFS is a better plan.
To be honest, I did not read that article in detail, but personally I'd not go for GlusterFS over Ceph. Sure, GlusterFS will be easier to set up at first, but Ceph isn't too hard either with Proxmox VE's integrated Ceph tooling and management, and Ceph is orders of magnitude more resilient and scalable compared to GlusterFS, at least in my experience.
 
To be honest, I did not read that article in detail, but personally I'd not go for GlusterFS over Ceph. Sure, GlusterFS will be easier to set up at first, but Ceph isn't too hard either with Proxmox VE's integrated Ceph tooling and management, and Ceph is orders of magnitude more resilient and scalable compared to GlusterFS, at least in my experience.
Will it work fine with just 3 nodes and 1 OSD each (3 in total)???
 
Yes, with three nodes Ceph can work, and we know of quite a few three-node Ceph cluster setups.

One OSD per node isn't much, but at least from Ceph's POV you can always add more later on, and it will work in general.
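
For reference, a small sketch of why one OSD per node is enough for the pool to stay available when a single node goes down. It assumes the usual replicated pool defaults (size=3, min_size=2) and "host" as the CRUSH failure domain, so each replica lands on a different node.
Code:
# Availability check for a 3-node / 1-OSD-per-node cluster with a replicated pool.
size = 3        # replicas per object (Ceph default for replicated pools)
min_size = 2    # minimum replicas required for I/O to continue (default)
nodes = 3

for failed_nodes in range(nodes + 1):
    surviving_replicas = min(size, nodes - failed_nodes)
    status = "I/O continues" if surviving_replicas >= min_size else "pool blocks I/O"
    print(f"{failed_nodes} node(s) down -> {surviving_replicas} replica(s) left: {status}")

# Output:
# 0 node(s) down -> 3 replica(s) left: I/O continues
# 1 node(s) down -> 2 replica(s) left: I/O continues
# 2 node(s) down -> 1 replica(s) left: pool blocks I/O
# 3 node(s) down -> 0 replica(s) left: pool blocks I/O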
 
Yes, with three nodes Ceph can work, and we know of quite a few three-node Ceph cluster setups.

One OSD per node isn't much, but at least from Ceph's POV you can always add more later on, and it will work in general.

Thanks for your replies!!! Very helpful.

So that's kinda the problem with the servers I got :( they are R420s, so expansion is very limited, hence I'm planning ahead to avoid extra cost later.

So I got 4 drive bays and 2 PCIe slots (one of the PCIe slots will be taken by a 10G NIC), so that leaves me with 4 drive bays and 1 PCIe slot...

So here are a few options (note: my servers are so old that they won't boot from a PCIe bus):
  1. Use 3 SSD bays to install Proxmox in a RAIDZ config and get a PCIe NVMe drive
  2. Use 2 SSD bays to install Proxmox in RAID1 and get 2 regular SSDs for OSDs
  3. Use 1 SSD bay to install Proxmox and get 3 regular SSDs for OSDs
I guess with options 2 and 3 I can add another SSD using the PCIe slot for another OSD. So that would be something like:

2a. Use 2 SSD bays to install Proxmox in RAID1, get 2 regular SSDs for OSDs and one NVMe PCIe drive, for a total of 3 OSDs per server.
3a. Use 1 SSD bay to install Proxmox, get 3 regular SSDs for OSDs and one NVMe PCIe drive, for a total of 4 OSDs per server.

From there my only scaling option will be to get more servers.

So @t.lamprecht, based on your experience, which option would you go with? 1? 2? 3? 2a? 3a?
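
To compare the options side by side, here is a small sketch of how the OSD count and capacity work out per layout. The drive sizes are placeholders I picked purely for illustration (500 GB SATA SSDs and a 500 GB PCIe NVMe), and usable space assumes a replicated pool with size=3 across 3 identical nodes, i.e. roughly one node's worth of OSD capacity.
Code:
# Per-node layout comparison for options 1, 2, 3, 2a, 3a (sizes are hypothetical).
options = {
    "1":  {"os_disks": 3, "ssd_osds": 0, "nvme_osds": 1},
    "2":  {"os_disks": 2, "ssd_osds": 2, "nvme_osds": 0},
    "3":  {"os_disks": 1, "ssd_osds": 3, "nvme_osds": 0},
    "2a": {"os_disks": 2, "ssd_osds": 2, "nvme_osds": 1},
    "3a": {"os_disks": 1, "ssd_osds": 3, "nvme_osds": 1},
}

SSD_GB, NVME_GB = 500, 500   # assumed sizes, not anything decided in this thread

for name, o in options.items():
    osds = o["ssd_osds"] + o["nvme_osds"]
    per_node_gb = o["ssd_osds"] * SSD_GB + o["nvme_osds"] * NVME_GB
    print(f"Option {name:>2}: {osds} OSD(s)/node, ~{per_node_gb} GB raw per node, "
          f"~{per_node_gb} GB usable cluster-wide with size=3")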
 
2a. Use 2 SSD bays to install Proxmox in RAID1, get 2 regular SSDs for OSDs and one NVMe PCIe drive, for a total of 3 OSDs per server.
3 OSDs per server does not sound too bad. Also note that you can replace the existing OSDs with bigger ones on the fly with Ceph (naturally one by one, not all at once ;)) in the future, if you ever think you'd need more space.

Personally, I'd go for 3: a small enterprise SSD for the Proxmox VE base system, and back up everything relevant off-site; you'd need some real bad luck for an enterprise SSD that's only used for the base system to go up in smoke.
But, since it is possible, keep a backup of (at least) /etc and a ready Proxmox VE ISO to dd onto a USB pen drive, so you can just rebuild on the day the enterprise SSD breaks, if it ever comes.
Disclaimer: I play around with Proxmox VE basically daily and have cleaned up quite some mess already, so I may have a bit of an unhealthy risk assessment for this stuff; it may not be worth the (possible) hassle for you. RAID1 for the OS is surely the safer, peace-of-mind way to go.
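
As a rough illustration of that recovery plan, a minimal sketch of the /etc backup part: just a dated tarball of /etc that you then copy off-site however you like. The paths and naming are mine, not a Proxmox feature.
Code:
# Minimal sketch: snapshot /etc into a dated tarball so a broken OS disk only costs
# a reinstall plus restoring the config. Note: /etc/pve is a FUSE mount backed by the
# cluster-wide config database, so the copy of it here is only a convenience copy.
import tarfile
from datetime import date
from pathlib import Path

backup_dir = Path("/root/etc-backups")   # hypothetical target; copy it off-site afterwards
backup_dir.mkdir(parents=True, exist_ok=True)

archive = backup_dir / f"etc-{date.today().isoformat()}.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add("/etc", arcname="etc")

print(f"Wrote {archive}")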
 
3 OSDs per server does not sound too bad. Also note that you can replace the existing OSDs with bigger ones on the fly with Ceph (naturally one by one, not all at once ;)) in the future, if you ever think you'd need more space.

Personally, I'd go for 3: a small enterprise SSD for the Proxmox VE base system, and back up everything relevant off-site; you'd need some real bad luck for an enterprise SSD that's only used for the base system to go up in smoke.
But, since it is possible, keep a backup of (at least) /etc and a ready Proxmox VE ISO to dd onto a USB pen drive, so you can just rebuild on the day the enterprise SSD breaks, if it ever comes.
Disclaimer: I play around with Proxmox VE basically daily and have cleaned up quite some mess already, so I may have a bit of an unhealthy risk assessment for this stuff; it may not be worth the (possible) hassle for you. RAID1 for the OS is surely the safer, peace-of-mind way to go.

I'm sure I can probably google this, but how well or badly does Ceph handle SSDs of different speeds and sizes? Like, I know it's probably bad practice, but can I combine, let's say, a 500GB NVMe PCIe disk with a 128GB SSD?
 
What you can do in general is create multiple "CRUSH rules" (CRUSH is the name of the main algorithm that makes Ceph Ceph), where one rule filters by SSDs, another by spinners, another by NVMes. You can then create different Ceph pools and assign each one a different CRUSH rule; that way Ceph only stores a pool's data on the device class you told it to.

That's often used to separate a Ceph setup into a bigger but slower pool (e.g., consisting of spinners) for cold/warm data and a smaller but faster pool for hot data like VM OS disks or the like.
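
As an illustration, a minimal sketch of what that could look like on the CLI, wrapped in Python. The rule and pool names ("fast", "slow", "vm-disks", "bulk") are placeholders; the underlying ceph commands (crush rule create-replicated with a device class, pool set crush_rule) are the standard ones, but double-check the documentation linked below before running anything.
Code:
# Sketch: create one CRUSH rule per device class and pin an existing pool to each.
import subprocess

def ceph(*args):
    """Run a ceph CLI command and raise if it fails."""
    subprocess.run(["ceph", *args], check=True)

# One replicated rule per device class, with "host" as the failure domain.
ceph("osd", "crush", "rule", "create-replicated", "fast", "default", "host", "nvme")
ceph("osd", "crush", "rule", "create-replicated", "slow", "default", "host", "hdd")

# Point existing pools at the matching rule; Ceph rebalances the data accordingly.
ceph("osd", "pool", "set", "vm-disks", "crush_rule", "fast")
ceph("osd", "pool", "set", "bulk", "crush_rule", "slow")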

There's also cache tiering, or just using the mixed disks as is, which can also be OK, especially for smaller setups.

In general I'd recommend giving our and Ceph's documentation a read regarding this topic:

https://pve.proxmox.com/pve-docs/chapter-pveceph.html
https://docs.ceph.com/en/latest/rados/
 
How about one server with more OSDs than the others? For example, if I replace one of the R420s with an R620 that has 8 drive slots and fill them with SSDs, will there be any benefit? Basically:

Node 1: 3 OSD
Node 2: 3 OSD
Node 3: 7 OSD

Any benefit to doing that??
 
Any benefit to doing that??
I mean, the obvious one: you have more OSDs and thus more storage space and more drives to access in parallel.

But if "node 3" fails as a whole, you're in trouble, as more than half of the total OSDs have failed at once.

So, with Ceph in smaller setups it's generally good to have somewhat homogeneous disk configurations, and maybe even hardware.
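
A small sketch of why the extra OSDs on one node don't buy much capacity in a 3-node cluster. It assumes a replicated pool with size=3 and "host" as the failure domain, so with exactly three hosts every host must hold one full copy of the data; the OSD size is again purely illustrative.
Code:
# With size=3 and failure domain "host" on exactly 3 hosts, every object gets one
# replica on each host, so usable capacity is capped by the *smallest* host.
osd_gb = 500   # assumed per-OSD size, for illustration only
osds_per_node = {"node1": 3, "node2": 3, "node3": 7}

per_host_raw = {n: c * osd_gb for n, c in osds_per_node.items()}
usable_gb = min(per_host_raw.values())   # each host must store a full replica

print(per_host_raw)            # {'node1': 1500, 'node2': 1500, 'node3': 3500}
print(f"Usable ~{usable_gb} GB; node3's extra 2000 GB mostly sits idle")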
 
@t.lamprecht Thanks for your help... One last question for now: for the Proxmox OS I will use an enterprise SSD, but what about the OSDs? Is it worth it (for a homelab) to spend the extra money on enterprise-grade SSDs, or should I just get regular SSDs from Amazon?
 
