Bill of material - Proxmox HA solution

leovinci81

Jun 7, 2017
Hi,

After some good feedback on Proxmox, I'm trying to build a semi-professional, low-power Proxmox high-availability environment.
Are there any recommendations on motherboards (preferably Supermicro) with good, power-efficient CPUs?

The main feature I'm looking for is compatibility with good fencing hardware. Can people share their BOMs for HA clusters, focusing on power efficiency?

Thanks!

Thomas
 
e.g.

Use an NVMe SSD for the installation and 4 x enterprise-class SSDs for the Ceph OSDs. The integrated 10 GBit NIC is perfect for Ceph.
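For concreteness, the layout described above could be set up roughly like this on a Proxmox node. This is a sketch only: the device names are placeholders, the network subnet is an example, and the `pveceph` subcommand names vary between releases (e.g. `pveceph createosd` on older versions versus `pveceph osd create` on current ones).

```shell
# Proxmox VE itself goes on the NVMe during installation, so only the
# four SATA SSDs need to become Ceph OSDs. Check `lsblk` first --
# /dev/sd[a-d] below are placeholders for your actual SSDs.
pveceph install                       # install the Ceph packages on this node
pveceph init --network 10.10.10.0/24  # example Ceph cluster network
pveceph osd create /dev/sda
pveceph osd create /dev/sdb
pveceph osd create /dev/sdc
pveceph osd create /dev/sdd
```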

Damn, that's a quick reply! :)
I couldn't figure out from that link which motherboard they use. Do you have real experience with this device? Two of those in a cluster?
 
We have 3 of them in an HA Ceph cluster in our testlab.

Hi Tom,

Many thanks for your feedback again, highly appreciated.
Any chance you can share a bit more info? How many hosts do you run? Memory? CPU?
How long has this been up and running? Sorry for bombarding you with questions, but it's a bit hard to find real-life experiences.
 
Hi, a small note for what it's worth: HA config in Proxmox 4.x is an entirely different beast from 3.x in my experience. In testing I did last year with a 3-node Supermicro setup at a remote hosting centre (OVH), it just worked: no special fencing hardware needed, no fussing about how to configure fence events, just the defaults. Three nodes were enough to pass the 'sanity test' of avoiding split-brain. A manual failure event (i.e., using the remote IPMI web console to kill power on one node) cleanly triggered a fail event, which led to the other two nodes taking over the VMs that were under HA management (i.e., those VMs started up on the other two nodes once "a node has failed, HA must intervene!" was confirmed).

So, once I realized how easy it was, I was quite amazed and happy. Not sure if you have tested a basic 3-node Proxmox 4.x HA config yet, but I can recommend doing so and comparing it to the old way. Just forget everything you had to learn for the "old hard method" :) and you will be fine.

Tim
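The "it just worked" behaviour described above boils down to very few commands once a 3-node cluster with shared storage exists. A hedged sketch (VM ID 100 is an example):

```shell
# Put a VM under HA control; the cluster resource manager then decides
# where it runs (newly added resources default to the started state).
ha-manager add vm:100
ha-manager status            # watch resource and node state
# Killing power on the node hosting VM 100 (e.g. via IPMI) should,
# after the self-fencing timeout, restart it on a surviving node.
```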
 
Hi Tom,

Thanks for your kind reply. Any chance you could share the Supermicro motherboard model?
I just don't want to run into incompatibilities with drivers, etc.
 
This hardware is part of our testlab, so there is no constant workload on it.
I already posted it above.
 
You mean, install only the Proxmox operating system on the NVMe SSD?

That is how we use it here: Proxmox VE is installed on the NVMe, and the 4 x SATA SSDs are used for Ceph OSDs.
As the NVMe is also used for the Ceph monitors, we use only enterprise-class SSDs and NVMe drives (on these boxes, from Samsung).
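Since the monitor data lives under /var/lib/ceph on the root disk — the NVMe in this setup — creating the monitors is one command per node. A sketch; older releases call it `pveceph createmon` instead:

```shell
# Run on each of the three nodes after the Ceph cluster is initialized:
pveceph mon create
# Then verify that all monitors have formed a quorum:
ceph -s
```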
 
Hi, I just looked: the current nodes I've got at OVH use the X10SDV-TLN4F motherboard.
The others were a bigger config I used last year, but I don't appear to have the motherboard documented, and I no longer have access.
Broadly speaking, I think any standard server motherboard (e.g., Supermicro) should work, since they tend to use pretty standard chipsets.

Tim
 
For very low power consumption (and also low compute power), you can use 3 APU2 boxes with one SSD each. Don't expect miracles, but it works quite nicely for moderate workloads.

https://www.pcengines.ch/apu2.htm

You get a 3-node Ceph cluster with approximately half a TB of storage for about 1000 euros.
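Forming the 3-node cluster on such boxes works the same as on bigger hardware. A sketch with example names and addresses:

```shell
# On the first APU2 box, create the cluster (name is an example):
pvecm create mycluster
# On each of the other two boxes, join it (IP of the first node):
pvecm add 192.168.1.10
# From any node, check that all three nodes have quorum:
pvecm status
```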
 
Is a RAID controller needed for the Ceph disks?

No, not needed. The whole technology is software-based. You should use enterprise-grade disks because of their better failure detection and shorter response times to the higher layers on errors. This is crucial for I/O response times.
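With no RAID controller in the path, disk health monitoring is up to you. One common way to keep an eye on the OSD disks is smartmontools (assumed installed; /dev/sda is a placeholder):

```shell
smartctl -H /dev/sda   # quick overall health verdict
smartctl -A /dev/sda   # attribute table: wear level, reallocated sectors
# Ceph itself reports failing OSDs as down/out:
ceph osd tree
```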
 
Hi Tom,

Many thanks for all your feedback.
I actually just bought exactly the same Supermicro motherboards.
Is there any chance you could share more information about your setup? You didn't make a step-by-step guide by any chance?
I'm a bit new to Ceph, so I was wondering if I could replicate your setup.

Thomas
 
