Dual-NIC enablement with VLANs (VMs untagged) (Migrations/Backups/PVE tagged)

G'day,

We have 2x switches available, each with 4x 10G ports and 48x 1G ports. They're stackable, though we'd rather avoid it.

At the moment, all hypervisors connect through one of the two switches, on the untagged handoff VLAN with tagged VLANs running on top.

Those tagged VLANs are running on the same ports as the untagged handoff VLAN.
Untagged = WAN access. Tagged = private networking with separate uplinks (& routers).
Configuration across the 2x switches is identical, so the VLANs and bound ports do match up.

We're trying to work out the best way to move from single-NIC to dual-NIC on all of our PVE hypervisors.
All hosts have only dual 1G NICs installed, and with a single NIC in use today each hypervisor is effectively capped at 1G of throughput.
We're hoping to achieve a true 2x1G per host (usable by VMs), and to split migrations, backups & the PVE GUI (management) out onto a tagged VLAN.

Different ways we're looking at this:
  1. NIC #1 for WAN traffic (VM traffic with outside world)
    NIC #2 for LAN traffic (migrations, backups & PVE)
  2. NIC #1 & #2 for WAN & LAN (via VLANs) together
    Load-balance the interfaces via a bond/LAG/etc
Option 1 may have the benefit of "complete" traffic separation, while Option 2 provides the desired throughput per-host & lets us split the traffic.
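To make Option 1 concrete, here's roughly how we picture a host's /etc/network/interfaces ending up. NIC names, the VLAN ID and the addresses are placeholders, so please treat it as a sketch rather than a known-good config:

  # NIC #1: untagged WAN handoff, bridged so VMs can sit on their public IPs directly
  auto eno1
  iface eno1 inet manual

  auto vmbr0
  iface vmbr0 inet static
      address 203.0.113.10/24   # placeholder; the host may not even need a public address here
      gateway 203.0.113.1
      bridge-ports eno1
      bridge-stp off
      bridge-fd 0

  # NIC #2: tagged private VLAN (placeholder ID 20) for migrations, backups & the PVE GUI
  auto eno2
  iface eno2 inet manual

  auto vmbr1
  iface vmbr1 inet static
      address 10.20.0.10/24
      bridge-ports eno2.20
      bridge-stp off
      bridge-fd 0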

Due to the added complexity and our 2021 network plans, we're hesitant to stack our switches - though it's possible and the modules are installed.

Reading into the different bond options:
  1. 802.3ad would require the LAGs to be configured across both switches, which I think means stacking them? (Rough config sketch after this list.)
    Does this provide enough benefit to be worth the additional complexity for 3-6 months? It does address both of our key desires though.
  2. balance-rr should provide the traffic throughput that we're hoping for, but do all switches support it? We're also concerned about VLAN compatibility.
  3. broadcast is hard to research (the name collides with the general networking term), but it likely isn't suitable for us?
  4. Other options listed here don't sound like they'd be overly helpful - I could well be entirely wrong there (as well as on most of this post!).
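For Option 2, our (possibly wrong) understanding is that the host side would look roughly like the below - again every name/ID is a placeholder, and the 802.3ad mode assumes the switch side presents a matching LAG, which is exactly where the stacking question comes in:

  auto eno1
  iface eno1 inet manual

  auto eno2
  iface eno2 inet manual

  # LACP (802.3ad) bond across both NICs - needs a matching LAG on the switch side
  auto bond0
  iface bond0 inet manual
      bond-slaves eno1 eno2
      bond-mode 802.3ad
      bond-miimon 100
      bond-xmit-hash-policy layer2+3

  # Single VLAN-aware bridge: VMs stay untagged, host traffic rides a tagged VLAN
  auto vmbr0
  iface vmbr0 inet manual
      bridge-ports bond0
      bridge-stp off
      bridge-fd 0
      bridge-vlan-aware yes
      bridge-vids 2-4094

  # Host address on the tagged management VLAN (placeholder ID 20)
  auto vmbr0.20
  iface vmbr0.20 inet static
      address 10.20.0.10/24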
Key info & questions about how to sort this:
  • Our upstream provider currently has no network loop protection (as silly as that is), so we'd like to be as confident as possible in the chosen route and implementation method before going ahead. We'll perform the changes on-site for extra safety.
  • VMs will be routing out via the default/untagged VLAN. It's the traffic around the VMs that we're looking to move to tagged VLANs (i.e. migrations, backups & the PVE GUI - see the datacenter.cfg note after the questions below).
  • NAT isn't involved in our environment. Public IPs (untagged) go straight out via the handoff/uplink, & private is (will be) handled via the newly-created VLANs (tagged).
  1. What would be the most logical way to move forward with these changes, considering our desires?
    1. Stemming from that, as we're doing things "backwards" compared to most guides (tagged VLANs for host traffic rather than for VMs), does anything change?
  2. If there are multiple ways to go about this that accomplish our goals, what would you do & why?
  3. Are there other resources (beyond those listed below) that'd be useful ahead of implementing the changes?
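One specific bit we think we've understood (happy to be corrected): once the tagged management subnet exists, live-migration traffic can be pinned to it in /etc/pve/datacenter.cfg, along these lines (the CIDR is a placeholder for whatever we address the private VLAN with):

  migration: secure,network=10.20.0.0/24

The PVE GUI and backups would then just need their addresses/storage definitions to point at the same subnet.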
Pages/threads that we've found helpful thus far:
  1. https://forum.proxmox.com/threads/understanding-vlans-in-proxmox.68474
  2. https://engineerworkshop.com/blog/configuring-vlans-on-proxmox-an-introductory-guide
  3. https://bobcares.com/blog/proxmox-vlan-bridge (not incredibly useful, but gives a different perspective)
Thanks so much for reading this far, and apologies if this is long-winded.
Hopefully I've made our position clear enough, but I'm happy to share more info if it'd help!

As for what our switches support re: protocols (LACP etc) - https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c03007561#N10D0B

Cheers,
LinuxOz
 
You didn't really go into detail of how many ports are being used on each switch etc.

I am a big believer in separating any and all LAN traffic away from any and all public traffic. That could easily be accomplished with a second NIC on a private subnet, completely separating the traffic in a different VLAN - or even on separate switches if your port density requires it, or if that's easier to manage.

Also, having the management of both switches on that same subnet/vlan might make things easier to manage as well.

If the VLANs are separated you can even run them untagged, unless there is a need to keep them tagged. And if your switches support 10G LAGs, I am sure there is no bandwidth issue on the switch backplane.

I look forward to additional posts in this thread, especially hearing the other side of the story and the why behind the answer.
 
You didn't really go into detail of how many ports are being used on each switch etc.

A fair few, but not over half of the ports on each switch if they were compacted down to remove spare/empty ports, etc. We're adding hardware quickly though.

I am a big believer in separating any and all LAN traffic away from any and all public traffic. That could easily be accomplished with a second NIC on a private subnet, completely separating the traffic in a different VLAN - or even on separate switches if your port density requires it, or if that's easier to manage.

That's what we ended up doing. We left WAN/internet traffic for VMs on the default/initial NIC/bridge, and brought up a 2nd NIC and bridge on each host, privately addressed and isolated in a separate VLAN. A few spots needed changing too, as Proxmox references these settings in a few different locations (one example below).

Order of execution was important, as well as ensuring that the finer details of the process flow were followed - all went really smoothly. Nice & easy!
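As one example of those spots (assuming an NFS backup target - the name, export and address below are purely illustrative): backups only use the new NIC once the storage in /etc/pve/storage.cfg points at its private-VLAN address, something like:

  nfs: backup-store
      server 10.20.0.50
      export /srv/backups
      path /mnt/pve/backup-store
      content backup

(Migration traffic gets pinned to the same subnet via the datacenter.cfg setting noted in the first post.)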

Also, having the management of both switches on that same subnet/vlan might make things easier to manage as well.

Once the new switches were installed, we connected both the RJ45/HTTP management access (on its own dedicated VLAN, shared by both switches) and the serial ports, which are hard-wired into an on-site NUC. That NUC skips the 2x switches entirely and goes straight to the upstream device, adding resilience.

If the VLANs are separated you can even run them untagged, unless there is a need to keep them tagged. And if your switches support 10G LAGs, I am sure there is no bandwidth issue on the switch backplane.

We've enjoyed embracing untagged on the WAN NICs, and have everything tagged on the private side now. This has opened up configurability for us, and we're now able to dial in (VPN) to any private network that we need to, knowing that they're all isolated from each other.

I look forward to additional posts in this thread, especially hearing the other side of the story and the why behind the answer.

Previously we had a simplistic network switching configuration, with VLANs/etc having only a minor focus. This revamp has improved our control, security & reporting abilities. The switching hardware also got a generational leap as part of it, not to mention opening up the ability to move over to 10G.

There's another round of switching updates due in the near-ish future, so we didn't go down the LAG route, as we'd have had to stack the switches and introduce another potential point of failure there. For now we're running from a better place, and have a road-map to bolster it further.

Appreciate your interest! My apologies for leaving this thread (as well as this one) neglected for a while. I've posted there too to link back here.
 
