Home Lab Clustering: Node Hardware Age?

Daphoid

New Member
May 24, 2024
Hi folks,

I'm building my first Proxmox cluster at home on some Intel NUCs I saved from e-waste. I was planning to expand the cluster with more nodes and have two options: I can buy more of the same model off eBay for a pretty decent price, or I can buy some newer mini PCs with Intel's N100 chip in them.

I know the architecture is recommended to be the same, but what's the best approach here?

- Multiple nodes, all the same hardware (or as close as possible)?
- Multiple nodes, all Intel, but different ages?
- Multiple clusters (keeping the older machines together and the newer ones together)?

They'll all be on the same network, same switch ideally, with separate NICs for cluster traffic (I may use USB 3.0 adapters for this; need to test).

Thanks for all the help!

D
 
It's recommended for cluster hardware to be homogeneous, but you can probably get away with slightly different hardware in a homelab.
You still probably don't want to mix AMD and Intel CPUs, though.

Don't use "host" for the VM CPU type; use a common denominator. If you're going with USB 3 NICs, you might as well try 2.5 GbE - although be aware, most of them are based on flaky Realtek chips. Use an Intel-based PCIe 2.5 GbE NIC if at all possible.
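In Proxmox this is the "Type" field under the VM's Processors settings, or the `--cpu` option of `qm` on the CLI. A quick sketch (VMID 100 is just an example; `x86-64-v2-AES` is the generic common-denominator model on recent PVE releases):

```shell
# Set a common-denominator vCPU model instead of "host" (VMID 100 is a placeholder)
qm set 100 --cpu x86-64-v2-AES

# Or fall back to an older lowest-common model if mixing very old nodes:
qm set 100 --cpu kvm64
```

Either way the guest sees the same CPU features on every node, so live migration doesn't surprise it.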

As far as your cluster config goes, only you can really tell what's best for your needs - you'll have to do some testing and reconfiguring, and track the results.

As you're probably aware, your electric bill will go up when you have multiple servers running. If you only power on the extra machines over the weekend, it shouldn't be too much of an increase.

If you plan on running a main cluster 24/7, use either the "biggest" hardware or the most power-efficient. A 2-node cluster with a QDevice (like a Raspberry Pi, or a VM running on your always-on desktop) should be fine for most home needs.
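For reference, wiring up a QDevice is only a few commands - this is a sketch based on the standard Proxmox procedure, assuming Debian-style packages on the external host:

```shell
# On the QDevice host (e.g. the Raspberry Pi):
apt install corosync-qnetd

# On every cluster node:
apt install corosync-qdevice

# Then, from any one cluster node, point the cluster at the QDevice:
pvecm qdevice setup <QDEVICE-IP>

# Verify - the qdevice should appear with a vote in the membership info:
pvecm status
```

With its extra vote, either node can go down and the survivor still has quorum.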

Just make sure you put everything on UPS power, and set up NUT.
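A minimal NUT sketch for a USB-connected UPS, assuming a Debian-based host; the UPS name "homeups" is made up, and the `usbhid-ups` driver is a guess - check your model against the NUT hardware compatibility list:

```shell
apt install nut

# /etc/nut/ups.conf - declare the UPS:
#   [homeups]
#     driver = usbhid-ups
#     port = auto

# /etc/nut/nut.conf - set MODE=standalone for a single-host setup.

# Start the driver and query the UPS:
upsdrvctl start
upsc homeups@localhost    # lists battery.charge, ups.status, etc.
```

Once that works, `upsmon` can be configured to shut the nodes down cleanly on low battery.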
 

Thanks for the helpful reply! I ended up going the identical route, and based on the shelving I bought (and a bit of OCD, probably) I'll soon have a 9-node cluster of Intel NUCs.

I did end up going with USB 3 NICs out of necessity - the NUCs don't have PCIe expansion slots. There may have been a third-party lid that would've helped here, but I'm trying to stay (somewhat) cost-effective at present. The same goes for 2.5 Gbps: while I'd like to explore it, I can't buy any switches with fans due to the location of this stuff (our shared living room / WFH area), so a fanless 24-port Netgear will have to do. I'll at least have LAGs to the QNAP shared storage and over to the core switch out to the Internet.

Can you elaborate on the "don't use host for VM CPU" line? I'm confused there.

As for the electricity concern, I'm fortunate that I don't pay for it. Housing where I live is insane (average homes are $800K-$1M), so we rent, but we've been here long enough that we don't pay for hydro separately :).

As for UPS power, done - I sized a UPS with 20% headroom and bought accordingly. It's the 7th UPS in the place, but who's counting? :)
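The headroom math is simple enough to sanity-check in a shell - the wattage below is a made-up example, not the actual measured load:

```shell
# Rough UPS sizing with a 20% headroom margin (example numbers)
load_w=350          # measured draw of nodes + switch + NAS, in watts
headroom_pct=20     # desired headroom
required_w=$(( load_w * (100 + headroom_pct) / 100 ))
echo "Size the UPS for at least ${required_w} W"
```

Measure the real load at the wall (e.g. with a Kill A Watt) rather than summing nameplate ratings, which overstate actual draw.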

Thanks!

- D
 
LMGTFY: "ups nut"
https://networkupstools.org/

Unless you have the exact same CPU across all nodes, you don't want to use "host" for the virtual CPU, because the VM can crash or misbehave if HA migrates it over to another node whose CPU lacks features the guest was using.
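A quick way to check whether nodes really are identical for this purpose is to fingerprint each one's CPU feature flags and compare - this is just a convenience sketch, not a Proxmox tool:

```shell
# Fingerprint this node's CPU feature flags; run on every node and compare.
# Identical hashes mean "host" CPU type is safe for migration between them.
grep -m1 '^flags' /proc/cpuinfo \
  | tr ' ' '\n' | sort \
  | md5sum | cut -d' ' -f1
```

Even the same CPU model can differ in flags across microcode or kernel versions, so it's worth re-checking after updates.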
The email notification was too fast for me! It sent out the original text. I did google it pretty much right after I hit post and stealth-edited, but here we are :)

Each node actually does have the same CPU, but good advice either way. I wasn't aware you could select CPU types (or virtual types, I suppose?) in Proxmox; I'm used to just specifying core counts in VMware. I'll have a look once I get things spun up (still waiting for some of the hardware to be delivered).

EDIT: Nice article for anyone else who reads this: https://www.techaddressed.com/tutorials/proxmox-improve-vm-cpu-perf/#what-is-the-cpu-type-setting

Thanks!
 
As for the electricity concern, I'm fortunate that I don't pay for it.
Oh wow. For that situation you could have picked up some older branded towers on eBay (e.g. a Dell T620).

The main downside of those is their power draw compared to modern gear. It's not eye-watering, but they can hover around ~100 W when idling, and go up to 200-300 W when both CPUs are fully working (depending on the exact CPUs, and they're upgradable too).

Their upside is more CPU cores, more memory (cheaply), availability of cheap SAS SSDs (from eBay), and cheap (it's a theme!) expansion cards (e.g. https://www.ebay.com/itm/384097637844).
 
Dang, I totally forgot about Dell's tower servers. That T620 is one dense tower; three of those for cheap to get quorum in Proxmox would've been pretty sweet. The only downside is that shipping something that size up to Canada, where I am, isn't cheap, so that would've burned a bit.

Definitely a better performance option, though. Well, at least I got a bunch of the NUCs I have for free, so that softens the blow a bit.

I guess I'll take the note at present and maybe upgrade in the future :)

- D
 
Indeed :). Those are about $500 CAD, so say $1,500 if I wanted a trio for a cluster. Add in some storage and whatnot and it's doable. Heck, even a single host to start would be neat. Good stuff!
 
