Micro server cluster with Ceph

sapphiron

Well-Known Member
Nov 2, 2012
Hi All

I am working on a proof of concept for a micro-hosting requirement. The client needs hosting for many small VMs, about 30-50 of them, with 2GB of RAM and 30GB of storage each. The CPU and disk load generated by these VMs is very low.

They are looking to retire their two old Dell R540 servers due to very high datacentre power costs.
Rather than buying a single replacement server and keeping that single point of failure, we are thinking of going a homelab-ish route: using micro desktops as servers in a cluster.

We are investigating the option of setting up a cluster of Intel NUCs like this one: https://www.intel.com/content/www/u...nuc-12-pro-kit-nuc12wshi5/specifications.html

We like that it has a 2.5GbE NIC with vPro. We will likely put public traffic on a VLAN interface set up in Proxmox. We are also considering using one of the Thunderbolt ports for a 10GbE NIC if we find the 2.5GbE NIC is too slow.
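
For the VLAN part, something like this in /etc/network/interfaces is what we have in mind. Just a sketch; the interface name (eno1), VLAN ID (10) and addresses are placeholders for whatever the site ends up using:

Code:
# Physical port, no address of its own
auto eno1
iface eno1 inet manual

# VLAN-aware bridge: per-VM tags are then set on each guest's virtual NIC
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# Node management address on its own VLAN
auto vmbr0.10
iface vmbr0.10 inet static
        address 10.0.10.11/24
        gateway 10.0.10.1

Public-facing VMs would then just get their VLAN tag set on the virtual NIC in the guest's hardware settings.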

We are looking at 3 or 5 nodes initially, up to a maximum of about 15 if the concept works out well.

We plan on using a SATA SSD for boot and an M.2 SSD for VM data. We know that there is no disk redundancy, but the requirement can tolerate 5 minutes of downtime and a minute or two of data loss in the case of a node failure. We are wondering what would work best for storage.
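
Incidentally, that tolerance is about what Proxmox's built-in storage replication would give us as a fallback if shared storage proves too heavy. A minimal sketch, assuming ZFS-backed local disks on each node; the VM ID and node name are placeholders:

Code:
# Asynchronously replicate VM 100's disks to node pve2 every minute
# (storage replication requires ZFS-backed volumes on both nodes)
pvesr create-local-job 100-0 pve2 --schedule "*/1"

# Inspect job state and last sync time
pvesr status

Shared storage is still the preference, though, which brings us to Ceph.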

We are wondering if it is viable to set up the M.2 SSDs in a Ceph cluster with 1 OSD per node. We will be using something decent for the M.2 SSD, and at 2.5GbE or 10GbE networking, I don't see the SSD being the performance bottleneck. The shared-storage nature should allow for migration and HA in the case of node failures or maintenance. I know general practice is to use at least 4 OSDs per node, but I am not certain of the thinking behind that. I have seen people using single-OSD nodes in their lab environments.
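
To make the question concrete, the per-node setup we are imagining is roughly the following. A sketch only; the device name, pool name and VM ID are placeholders:

Code:
# One OSD per node on the M.2 drive
pveceph osd create /dev/nvme0n1

# Once all OSDs are in: replicated pool, 3 copies, keep serving I/O
# with 2, and register it as a Proxmox storage in one go
pveceph pool create vmdata --size 3 --min_size 2 --add_storages

# Let HA restart a VM on a surviving node after a failure
ha-manager add vm:100 --state started

With one OSD per host and Ceph's default failure domain of host, the three replicas should land on three different nodes, so a dead node costs no data. The part we are unsure about is recovery headroom: with exactly 3 nodes, Ceph has nowhere to re-create the third copy after a failure and the pool runs degraded until the node returns, which may be part of the thinking behind the 4-OSDs-per-node advice.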

Is there anything less obvious that we may be missing? Or is anyone using hardware other than Intel NUCs for a similar purpose?
 
You need about 4GB of RAM and two CPU threads per OSD. I'd say such a setup would be feasible, but you should test it thoroughly before going into production.
We are planning to max them out at 64GB of RAM, and the i5-1240P has 4 performance cores and 8 efficiency cores, so that should not be a bottleneck.
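
For reference, our back-of-envelope numbers, assuming 5 nodes, 2TB M.2 drives and 3x replication (all of those are still open choices):

Code:
# RAM:  50 VMs x 2GB               = 100GB for guests
#       + ~4GB per OSD x 5 nodes   =  20GB for Ceph OSDs
#       + a few GB per node for Proxmox itself and the monitors
#       -> comfortably inside 5 x 64GB = 320GB
# Disk: 50 VMs x 30GB x 3 replicas = 4.5TB raw
#       -> ~0.9TB used per node across 5 nodes; 2TB drives keep the
#          OSDs well below Ceph's default 85% near-full warning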
 
Best bang for the buck is used enterprise servers.

Take a look at used Dell 13th-gen servers. You can optionally upgrade the internal NICs to 10GbE (fiber, copper, or both). The PERC H330 can be configured for HBA/IT mode, or get the PERC HBA330 instead.

I like the R730xd, where the rear drive bays can hold RAID-1 OS boot drives.

Curated list at lapgopher.com
 
Older servers' power draw is just too high, unfortunately.

Ideally we are looking for a NUC-type device that supports 10GbE.
 
