Cheap Homelab NUC Cluster with Ceph.

Hey guys,
my mum needs a server; in short, I need a cheap redundant solution where I can run OPNsense and Pi-hole + UniFi :)

So nothing special, it's all very basic.
I'm thinking of buying Intel NUCs for this task, specifically 3x NUC13ANHI3.

I'm already using a NUC13ANHI3 as a backup Proxmox server at home, in a cluster with my big server, just in case I have to shut down/update the big server and need something where OPNsense keeps running for the internet connection...
OPNsense simply runs in HA, 1x on the big server and 1x on the NUC13ANHI3, which works wonderfully.

However, after seeing how freaking awesome the NUC13 runs Proxmox, I thought I need more of them xD
Especially since they're super cheap.

- The only problem with the NUC is storage... With only one SATA port and one NVMe port, it's impossible to make it redundant locally.
I thought about iSCSI storage, but that would blow my budget and the power consumption.

Now I'm thinking of Ceph?
Ceph would let me combine the disks on all 3 NUC SATA ports into one shared pool over a Thunderbolt 3 network.
Since all the NUCs have 2x USB4/TB3 ports, capable of 20 or 40 Gbit/s, they would actually make a great backbone for Ceph?
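(For reference, roughly what I have in mind for one TB link, just a sketch: the interface name en05 and the 10.10.10.x address are made up and depend on how thunderbolt-net names things on the box, and a full 3-node ring would also need some routing, e.g. FRR, which I'm leaving out here.)

Code:
# load the Thunderbolt networking driver at boot
echo thunderbolt-net >> /etc/modules

# /etc/network/interfaces on node 1 (example name/address, one TB link only)
auto en05
iface en05 inet static
        address 10.10.10.1/24
        # thunderbolt-net is supposed to allow a large MTU, exact value may differ
        mtu 65520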

For every NUC I would buy a NUCIOALUWS (that's the expansion lid with an extra i226-LM 2.5 GbE NIC), which lets me pass the additional NIC directly through to OPNsense.
(I'm already doing this at home, works wonderfully.)
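(The passthrough itself would be the usual PCI passthrough, roughly like this; the VM ID and PCI address are placeholders you'd look up yourself, and IOMMU obviously has to be enabled.)

Code:
# find the PCI address of the extra i226 NIC
lspci | grep -i ethernet

# hand it to the OPNsense VM (VM ID 100 and the address are just examples)
qm set 100 --hostpci0 0000:02:00.0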

However, I've never done Ceph and never done TB3 networking.
So I need someone who has already done Ceph and can tell me whether my idea would work, whether it's generally a good idea, and whether it will be performant.
Or whether we could even run ZFS on top of Ceph to increase performance.

Anyway, the basic idea behind it is: NUCs are extremely power efficient and cheap, and 3 of them in a Ceph TB3 cluster would let me lose one node completely + let me update them one by one without losing anything...
And they would even provide a proper HA cluster, since the storage is shared?

The only downside I see is no ECC memory and a maximum of 64 GB RAM, but 64 GB is enough; only the missing ECC hurts me a little.

Cheers
 
Let me expand the question a bit:
- What if I put a 2 TB NVMe SSD into each node, partition only ~100 GB for Proxmox itself, and use the remaining ~1.8 TB for Ceph as NVMe shared storage? (rough sketch below)
- And use the additional 2.5-inch SSD as a separate SSD shared storage pool?
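(Roughly what I picture per node, no idea yet if it's the proper way: the partition/device names are examples, and since the Proxmox tooling normally wants a whole disk for an OSD, the NVMe partition would probably have to go through ceph-volume directly.)

Code:
# ~1.8 TB leftover NVMe partition as an OSD (example device name)
ceph-volume lvm create --data /dev/nvme0n1p4

# the 2.5" SATA SSD as a second, whole-disk OSD
# (there seems to be a --crush-device-class option to keep NVMe and SATA
#  in separate pools, would need to check)
pveceph osd create /dev/sda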

- Would this even work with attached USB drives, let's say for media content?

- Can we use keepalived (a CARP-style virtual IP) to have one virtual IP for accessing the cluster? Would the Proxmox GUI work on that virtual IP?
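(What I'm picturing is a keepalived instance on every node, roughly like this; interface, password, router ID and the VIP are made-up examples, and the GUI would then be reachable on https://<VIP>:8006 of whichever node currently holds the address.)

Code:
# /etc/keepalived/keepalived.conf (example values only)
vrrp_instance PVE_GUI {
    state BACKUP
    interface vmbr0
    virtual_router_id 51
    priority 100          # give each node a different priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.1.250/24
    }
}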

- Do you define a private Ceph network? I simply mean: could I use the TB3 network exclusively for Ceph? (Sorry, no clue about Ceph yet.)
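(From what I've read so far it should boil down to pointing the Ceph cluster network at the TB subnet in /etc/pve/ceph.conf, roughly like this; the subnets are examples.)

Code:
# /etc/pve/ceph.conf (excerpt, example subnets)
[global]
    public_network  = 192.168.1.0/24   # LAN, clients and monitors
    cluster_network = 10.10.10.0/24    # TB3 mesh, OSD replication traffic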
 
I would like to know if you have already done some setup for this project? I have more or less the same idea:

My idea is to set up 3 or 4 Asus mini PCs (Core i3, 8 GB RAM) in one Proxmox cluster with some virtual machines on it, and use external HDD or SSD drives to build shared storage with Ceph. The mini PCs only have a 1 Gbit network port on board, which could be a little bit tricky...
Maybe I can add a USB/Ethernet adapter to all 3 devices, just for the Ceph cluster.

Another option is to migrate my ESXi server (Fujitsu server, Xeon processor, 32 GB RAM) to Proxmox. I just need to find a way to export my VMs (but that should normally work). Then add a 2nd server (Dell R210 mk I with a Xeon and 16 GB RAM) and find a 3rd device (I found a Supermicro with a Xeon processor and 16 GB RAM too, I just need to buy it). The problem here is 3 different machines, but more powerful. Some say it's best to use nodes that are equal...
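(I assume the import boils down to copying the exported VMDKs over and importing them with qm, roughly like below; the VM ID, path and storage name are placeholders.)

Code:
# import the exported ESXi disk into an existing (empty) Proxmox VM
qm importdisk 120 /mnt/esxi-export/myvm.vmdk local-lvm
# afterwards attach the imported disk as scsi0 in the GUI or with qm set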

So I'm not sure which option is best. Of course, with the 3 Asus PCs the workload can't be as big as on the real servers, but I need a stable setup to run a project on (for school), and I'll keep it alive in my homelab when it works fine, of course :)
 
I have been running a 3-node Proxmox/Ceph cluster for several years now. I use cheap refurbished Dell desktops with cheap consumer-grade NVMe drives and Solarflare 10 GbE adapters (though my preference now is the Intel X520). It works fine as far as the cluster itself is concerned: live-migrating the VMs and doing host maintenance without disruption to the VMs.
The storage performance is not great at all. It's quite all right for my VMs, which are not very storage-intensive. But when I tried to build a decent-sized Java project inside a VM, it was very noticeably slower (maybe 10 times or more) than on a PC with a directly connected NVMe...

Each host also has 1 HDD, used for CephFS in a 2+1 configuration. I use clustered Samba to access it, and the Proxmox GUI works quite fine via the virtual IP (which is provided by CTDB). The performance of that is about 15-20 MB/s for large sequential reads, which is OK for my use case (hosting video).
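(For reference, the virtual-IP part is just CTDB's public address list, along these lines; the IP and interface are of course specific to my setup.)

Code:
# /etc/ctdb/public_addresses (one line per floating IP)
192.168.1.240/24 vmbr0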
 
