Multi-Subnet, Multi-Tenant, VLANned, Bonded & Bridged Network Config - Am I Doing It Right?

SamboNZ

New Member
Feb 10, 2021
Hey guys,

I'm an IT veteran but a Proxmox & Linux noob.
I'm configuring a set of Proxmox 6.3-2 servers for use in a relatively complex web hosting environment, and I'd like to confirm that the way I'm configuring the networking is correct.

The network will use a bridged model: public IPs are assigned to the main (physical) firewall, with ports forwarded to VMs on internal IPs as appropriate.
There will be VLANs in use on the network, including with the VMs.
There will be 3 clustered Proxmox servers.

Basically the IP network environment is:
10.0.10.x/24 - Public (Connectivity to the Internet via the main (physical) firewall)
10.0.20.x/24 - Management (Internal LAN / Backups etc)
10.0.30.x/24 - Cluster (Proxmox Cluster Traffic)
10.0.40.x/24 - Storage (Dedicated Host IP Connected Storage)

The existing physical switching setup is fully redundant with failover between switches.

Based on this I have created bonded NIC pairs for all 4 IP subnets and bridges for both the Public and Management networks as follows:

[Screenshot: network configuration showing the bonded NIC pairs and bridges]
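For reference, a bond-plus-bridge setup along those lines would look roughly like this in /etc/network/interfaces. This is only a sketch: the NIC names, IP addresses, and bond mode are assumptions, not taken from the screenshot.

```
# Bonded pair for the Public network (NIC names are assumed)
auto bond0
iface bond0 inet manual
    bond-slaves ens1f0 ens1f1
    bond-miimon 100
    bond-mode active-backup

# Bridge for the Public network, carrying the host IP and guest traffic
auto vmbr0
iface vmbr0 inet static
    address 10.0.10.103/24
    gateway 10.0.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
```

The same bond+bridge pattern would repeat for the Management network, while the Cluster and Storage bonds would carry their host IPs directly, without a bridge.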

So, how'd I do? :D
Anyone see any problems / mistakes?

I want to get this right before I configure clustering!

Any feedback greatly appreciated!
Thanks!
 
10.0.30.x/24 - Cluster (Proxmox Cluster Traffic)
This is the Corosync network right? Configured in /etc/pve/corosync.conf?

If so, don't use a bond for it. Corosync can handle up to 8 links/rings and will switch between them by itself if one becomes unavailable or problematic.
It will do so far faster than the default bond: a few milliseconds instead of the 100 ms that an active-backup bond will wait by default.

So I would remove that bond, configure 2 separate networks on the individual interfaces, and let Corosync use both directly. See the docs on how to configure more links for Corosync.
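As a rough illustration of where the extra link ends up (node names and addresses here are made up; see the Proxmox docs for the authoritative format), the redundant links appear as ringX_addr entries in the nodelist of /etc/pve/corosync.conf:

```
nodelist {
  node {
    name pve1
    nodeid 1
    quorum_votes 1
    ring0_addr 10.0.30.101
    ring1_addr 10.0.31.101
  }
  node {
    name pve2
    nodeid 2
    quorum_votes 1
    ring0_addr 10.0.30.102
    ring1_addr 10.0.31.102
  }
}
```

With two links defined, Corosync 3 (kronosnet) monitors both and fails over between them on its own, with no bond involved.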

vmbr interfaces are needed if the guests should be able to use the network. Think of them as virtual switches. If no guest needs access to the management network, you could configure the mgmt IP directly on that bond and remove vmbr1.
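For example, if no guest ever needs the management network, the mgmt IP could sit directly on the bond, with no bridge at all (a sketch; interface names and the address are assumptions):

```
# Management IP directly on the bond - no vmbr needed
auto bond1
iface bond1 inet static
    address 10.0.20.103/24
    bond-slaves ens1f2 ens1f3
    bond-miimon 100
    bond-mode active-backup
```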
 
Hi Aaron,

Thanks for that info!

This is the Corosync network right?

Correct, and thanks for highlighting that failover lag with bonded interfaces!

I've read through the documentation you linked, but to confirm I understand it correctly: to reconfigure the Corosync / cluster networking per your suggestions I could do something like:

link0:
ens2f0
10.0.30.103/24
Connect to Switch 1

link1:
ens2f1
10.0.31.103/24
Connect to Switch 2

Is that right?
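In /etc/network/interfaces that would come out as two plain interfaces, with no bond and no bridge (a sketch based on the names and addresses above):

```
# Corosync link 0 - Switch 1
auto ens2f0
iface ens2f0 inet static
    address 10.0.30.103/24

# Corosync link 1 - Switch 2
auto ens2f1
iface ens2f1 inet static
    address 10.0.31.103/24
```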

vmbr interfaces are needed if the guests should be able to use the network. Think of them as virtual switches. If no guest needs access to the management network, you could configure the mgmt IP directly on that bond and remove vmbr1.

In my case, some guests would need to be connected to the management network as these hosts will contain both public-facing and some back-end management, backup etc servers.

There's also the possibility that we will use the management network for high volume traffic such as guest OS level backups (heavily firewalled of course) in order to leave the public subnet network capacity free, but this has yet to be decided.
 
Is that right?
Correct.

If you want to get a better idea of how it will behave, I can only recommend setting up a PVE node and then creating a few nested PVE VMs inside it. With those nested PVEs you can create a cluster and observe different situations.

For the network you can create multiple vmbr interfaces that don't have a bridge port or IP assigned to them -> virtual switch.
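Such an isolated virtual switch is simply a bridge with no ports and no address, e.g. (name chosen arbitrarily):

```
# Internal-only "virtual switch" for nested lab VMs
auto vmbr10
iface vmbr10 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```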

If you then create a similar network for Corosync and set one of the VM's NICs to "disconnect" in the VM NIC settings (advanced), you can observe in the syslog how Corosync behaves: tail -f /var/log/syslog | grep corosync
 
It's an old post, but what if we add Ceph to this installation?
Which NICs or vmbrs should we use for the Ceph public and cluster networks?
Thanks in advance
 
