1x NIC to 2x NICs (Keep VMs/WAN on 1st & Move PVE/Corosync/etc to 2nd)

Hi there,

I'm looking to update the networking setup for Proxmox VE now that the switches have been swapped out for more capable models.

At the moment, each host has 1x NIC connected, which is serving VMs (WAN) and PVE (Web GUI, SSH, Corosync, etc). Static addressing for PVE is set against vmbr0 (NIC #1).

CURRENTLY:

NIC #1 - Public network
Everything (vmbr0)

The plan is to move Proxmox/Web/Corosync over to NIC #2, which has now been brought online (new vmbr created, host rebooted, etc.), while keeping VMs on NIC #1.

PROPOSED:

NIC #1 - Public network
WAN for VMs (vmbr0 - no change)

NIC #2 - Private network (vmbr1)
Proxmox Management (Web/SSH/etc)
Corosync, etc (Cluster, migrations, etc)

What's the best method to go about these changes for Proxmox and its accessory services?

I've read up on the ring0_addr property for Corosync, though there's a fair bit to change at once, so the order of execution seems important. The VMs don't need updating, as vmbr0 (NIC #1) remains valid for them. Proxmox management (i.e. the Web GUI IP address) needs to be moved, and the other services need moving too (i.e. Corosync). PVE first, then Corosync?
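For reference, my rough understanding is that the relevant part of /etc/pve/corosync.conf would end up looking something like the below once ring0_addr points at the new private network (node names and the 10.10.10.x addresses are just made up for illustration), with config_version in the totem section bumped so the change gets synced out:

    nodelist {
      node {
        name: pve1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.10.11
      }
      node {
        name: pve2
        nodeid: 2
        quorum_votes: 1
        ring0_addr: 10.10.10.12
      }
    }

    totem {
      cluster_name: mycluster
      config_version: 4
      version: 2
    }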

Does this thread reply have any relevance here? Is the Proxmox/Management change as simple as updating /etc/hosts and restarting the network? (After NICs are tested OK)
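For context, my read is that the /etc/hosts part is just repointing each node's own hostname at whichever address management should live on, something like (hostnames/addresses made up):

    127.0.0.1 localhost.localdomain localhost
    10.10.10.11 pve1.example.local pve1

i.e. the entry that currently resolves the node name to the public address on vmbr0 would be changed to the new vmbr1 address.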

From what I can work out, we'd need to remove the static addressing for vmbr0 in /etc/network/interfaces and assign the relevant static addressing to vmbr1 before we could move Proxmox/Corosync/etc. across to it. Would that be correct, or is that needed in addition to the /etc/hosts updates? Do services then need restarting?
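To make that concrete, this is roughly the end state I'm picturing for /etc/network/interfaces (eno1/eno2 and the addressing are placeholders, and I realise where the default gateway sits depends on which network should carry the host's own outbound traffic):

    auto lo
    iface lo inet loopback

    iface eno1 inet manual
    iface eno2 inet manual

    # NIC #1 - public network, VM/WAN bridge only, static address removed
    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

    # NIC #2 - private network, PVE management / Corosync
    auto vmbr1
    iface vmbr1 inet static
        address 10.10.10.11/24
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0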

Sorry for the questions - it's not something that I'm looking to stuff up!

Thank you!
 
Honestly, I would rather move VMs and WAN to another NIC than Corosync ...

We did this in reverse, since it makes no difference and it kept our (colour-based) cabling layout intact: we moved everything except WAN traffic to the new NICs and bridges.

I have to disagree on that, though. Just bridge the other NIC with vmbr0 and you should be fine.

I've replied properly in my other thread over here - we ended up adding a second bridge as it keeps configurability nice and fine-grained.

Appreciate both of your replies, thanks a bunch.
 
We now have an active-backup bond and an LACP bond. The former is fibre-first, failing over to the LACP bond if the fibre drops; the latter is the LAG for 2x 1G RJ45. So the copper LAG bond is used as the backup component of the fibre bond and, as below, it doubles as the full-time PVE interface for cluster syncing.

This way we have two bridges: #1 points at the fibre-first active-backup bond (handling WAN/VMs), and #2 points at the copper LAG for PVE/Corosync.

This gives us a diet version of load-balanced fibre feeds, since failover is at least to 2G, while keeping the VLAN and medium separation for PVE at all times. Corosync was simple to reconfigure back in 2021; the key thing with Proxmox, as with anything Linux, is to spend most of your time planning! :)
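For anyone finding this later, roughly how that looks in /etc/network/interfaces terms - interface names and addresses swapped for placeholders, and I've left out the detail of the copper LAG doubling as the backup slave of the fibre bond, since that chaining depends on the exact hardware/VLAN layout:

    # 2x 1G copper LACP bond - full-time PVE / Corosync path
    auto bond1
    iface bond1 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
        bond-miimon 100

    # Fibre-first active-backup bond - WAN / VM path
    auto bond0
    iface bond0 inet manual
        bond-slaves enp65s0f0 enp65s0f1
        bond-mode active-backup
        bond-primary enp65s0f0
        bond-miimon 100

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

    auto vmbr1
    iface vmbr1 inet static
        address 10.10.10.11/24
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0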
 
