[SOLVED] Proxmox Cluster For Data Centre Hosting Setup

void307
Need a bit of help/advice/tips with the scenario below. We've read the documentation thoroughly but feel like we're a bit in a black hole with all the info.

We're currently learning and trialling Proxmox after, well... VMware.

Our aim is to set up a 3-node cluster that connects to a Fortinet, which will handle all the networking side of things (VLANs, DHCP, 2x WANs, etc.).

Now, we're a bit new to the Proxmox networking side of things. We all have quite a good understanding of on-prem single-server hosting with Proxmox, but not of clustering and advanced networking yet.

We would like to create failover in case the primary link goes down, and I saw that one can add multiple links during the initial cluster config, with the lowest number as the highest priority. We have 3x identical servers.

What is the best setup here? Bridge vs. bond? Linux vs. OVS?

See the attached screenshot of the ports on one server, pre-setup and pre-cluster-join. All 3 nodes are identical so far in this regard.

Quick rundown of the intended setup:

VLAN trunks via 10 Gbps ports (server ports 1 and 2), LAG/bond etc.? 2 per server (see the sketch below)
IPMI management: 1 port per server
Corosync via 20 Gbps ports: 2 ports per server (Serv1 P1 to Serv2 P1, Serv1 P2 to Serv3 P1, Serv2 P2 to Serv3 P2)
Ceph via 25 Gbps ports: 4 ports per server, same concept


Fortinet
2x WAN (different ISPs for redundancy)
3 ports bridged for the VLAN-trunked server connections
3 ports bridged for IPMI management
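For the trunked ports, a common pattern is an LACP bond feeding a VLAN-aware Linux bridge. A minimal /etc/network/interfaces sketch follows; the NIC names and addresses are placeholders we made up, and 802.3ad assumes the Fortinet side is configured as a matching LACP LAG:

# /etc/network/interfaces (excerpt) -- NIC names and addresses are hypothetical
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1    # the two 10 Gbps trunk ports
    bond-miimon 100
    bond-mode 802.3ad                # needs a matching LACP LAG on the Fortinet
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.11/24            # placeholder management address
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes            # guests can then tag their own VLANs
    bridge-vids 2-4094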


shanreich

Generally we recommend using Linux bridges over OVS, unless you know why you want to use OVS (usually some special feature that isn't otherwise available).

If I understand correctly, you want to do a Corosync / Ceph full mesh? 20 Gbps might be a bit overkill for Corosync, since it is very sensitive to latency but doesn't really need a lot of bandwidth. Usually we recommend having one completely separate, dedicated link for Corosync, and then using another network (usually the storage network) as a backup link. Since Corosync supports multiple links out of the box, we strongly discourage any form of bond on the Corosync link: Corosync itself handles failover much better than bonds, and bonding can even lead to issues with Corosync in some cases.
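To make the multi-link idea concrete: each node simply gets one address per link in corosync.conf, and Corosync (knet) handles failover between them on its own. A sketch with placeholder node names and subnets, assuming a dedicated switched Corosync network (link 0) and the storage network as backup (link 1):

# corosync.conf (nodelist excerpt) -- node names and addresses are placeholders
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1   # link 0: dedicated Corosync network
    ring1_addr: 10.10.20.1   # link 1: backup via the storage network
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }
  node {
    name: pve3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
    ring1_addr: 10.10.20.3
  }
}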

It might make sense to use the 20G ports for the trunked VLANs and the 10G for Corosync instead.
 
void307

@shanreich

Your suggestions work. Thank you.

We do have a bit of hassle with the connections for Corosync (yellow on the diagram) when completing the link, i.e. Serv1 to Serv3, which seems to form a loop and break quorum and the network. Typical network loop 101, I take it. If I understand the docs correctly, during cluster creation, when adding Link 0, Link 1, etc. (bridges), you need vmbr1 (Serv1, Serv2 and Serv3) and vmbr2 (Serv1 and Serv3). Is this the correct idea, instead of throwing all Corosync (yellow) interfaces under vmbr1, which is the current setup but is causing the loop?


See below the current working diagram we made to give an idea. We're purchasing Proxmox support once we have the entire thing up and running.

shanreich
You don't even need to bridge that port (this is only needed if you want to share the connection with guests), so configuring the IP directly on the physical interface should suffice.
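As a sketch of that idea: each point-to-point leg of the mesh carries its IP directly on the NIC, with its own small subnet per leg so no loop can form. Interface names and addresses below are placeholders:

# /etc/network/interfaces (excerpt) on Serv1 -- names/addresses are hypothetical
# Direct link Serv1 <-> Serv2, its own /30, no bridge involved
auto enp2s0f0
iface enp2s0f0 inet static
    address 10.15.15.1/30    # Serv2 uses 10.15.15.2/30 on its end

# Direct link Serv1 <-> Serv3, a separate /30
auto enp2s0f1
iface enp2s0f1 inet static
    address 10.15.15.5/30    # Serv3 uses 10.15.15.6/30 on its end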
 
