G'day,
We have 2x switches available, each with 4x 10G ports and 48x 1G ports. They're stackable, though we'd rather avoid it.
At the moment, all hypervisors are connected through just 1 of the 2 switches. They sit on the untagged handoff VLAN, with tagged VLANs carried on top of it.
Those tagged VLANs run on the same ports as the untagged handoff VLAN.
Untagged = WAN access. Tagged = private networking with separate uplinks (& routers).
Configuration across the 2x switches is identical, so the VLANs and bound ports do match up.
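For reference, each hypervisor's /etc/network/interfaces currently looks roughly like the below (interface names and addresses are placeholders, not our real values):

    auto lo
    iface lo inet loopback

    iface eno1 inet manual

    # Single 1G NIC in use: the host and the VMs both sit on the
    # untagged handoff VLAN via this bridge
    auto vmbr0
    iface vmbr0 inet static
            address 203.0.113.10/24
            gateway 203.0.113.1
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0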
We're trying to work out the best way to move from single-NIC to dual-NIC on all of our PVE hypervisors.
All hosts have only dual 1G NICs installed, and with a single NIC in use today, each hypervisor is capped at 1G of throughput.
We're hoping to achieve a true 2x1G per host (usable by VMs), and to split out migrations, backups & the PVE GUI (management) onto a tagged VLAN.
Different ways we're looking at this:
- NIC #1 for WAN traffic (VM traffic with the outside world), NIC #2 for LAN traffic (migrations, backups & PVE) - a rough sketch of this split is just below this list
- NIC #1 & #2 for WAN & LAN (via VLANs) together, load-balancing the interfaces via a bond/LAG/etc
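For the first option, I'd assume the per-host config would end up something along these lines (interface names, addresses and VLAN ID 20 are placeholders):

    auto lo
    iface lo inet loopback

    iface eno1 inet manual
    iface eno2 inet manual

    # NIC #1: untagged handoff VLAN, bridged for VM/WAN traffic
    auto vmbr0
    iface vmbr0 inet static
            address 203.0.113.10/24
            gateway 203.0.113.1
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0

    # NIC #2: tagged VLAN for migrations, backups & the PVE GUI
    auto eno2.20
    iface eno2.20 inet static
            address 10.20.0.10/24

That keeps VM traffic and host traffic physically separate, but neither side ever gets more than 1G.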
Due to the added complexity and our 2021 network plans, we're hesitant to stack our switches - though it's possible and the modules are installed.
Reading into the different bond options:
- 802.3ad would, I think, require us to stack the switches so that the LAGs can be configured across both of them. Does it provide enough benefit to be worth the additional complexity for 3-6 months? It does address both of our key desires, though. (A rough sketch of how the bond might look on the PVE side is just after this list.)
- balance-rr should provide the throughput that we're hoping for, but do all switches support it? We're also concerned about VLAN compatibility.
- broadcast is hard to read up on, probably because the mode name matches the general networking term. It likely isn't suitable for us?
- Other options listed here don't sound like they'd be overly helpful - I could well be entirely wrong there (as well as on most of this post!).
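If we did go the bond route, my (possibly wrong) understanding is that the PVE side would look something like the below, assuming ifupdown2 and a VLAN-aware bridge - 802.3ad shown, and balance-rr would just change the bond-mode line (names, addresses and VLAN ID 20 are placeholders again):

    auto lo
    iface lo inet loopback

    iface eno1 inet manual
    iface eno2 inet manual

    # Both 1G NICs in an LACP bond (the switch side needs a matching
    # LAG, which for us would mean stacking the two switches)
    auto bond0
    iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-mode 802.3ad
            bond-miimon 100
            bond-xmit-hash-policy layer2+3

    # VLAN-aware bridge on top of the bond; VMs stay on the
    # untagged/default VLAN
    auto vmbr0
    iface vmbr0 inet static
            address 203.0.113.10/24
            gateway 203.0.113.1
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094

    # Host address on the tagged VLAN for migrations, backups & the GUI
    auto vmbr0.20
    iface vmbr0.20 inet static
            address 10.20.0.10/24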
- Our upstream provider doesn't have any network loop protection at the moment, so we'd like to be as confident as possible in the chosen route and implementation method before going ahead, to minimise the risk of anything happening there (as silly as their lack of protection is). We'll perform the changes on-site for extra safety.
- VMs will be routing out via the default/untagged VLAN. It's the traffic around the VMs that we're looking to move to tagged VLANs (ie. migrations, backups & the PVE GUI) - see the datacenter.cfg snippet after this list.
- NAT isn't involved in our environment. Public IPs (untagged) go straight out via the handoff/uplink, & private is (will be) handled via the newly-created VLANs (tagged).
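On the migration traffic specifically, my understanding is that once the hosts have an address on that tagged VLAN, we'd pin migrations to it in /etc/pve/datacenter.cfg (if that's the right mechanism) - something like this, using the placeholder 10.20.0.0/24 network from above:

    # /etc/pve/datacenter.cfg (cluster-wide)
    # keep migration traffic on the tagged "LAN" VLAN rather than the handoff
    migration: secure,network=10.20.0.0/24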
- What would be the most logical way to move forward with these changes, considering our desires?
- Stemming from that, as we're going "backwards" (tagged VLANs for host traffic rather than for VMs), is anything different?
- If there are multiple ways to go about this that accomplish our goals, what would you do & why?
- Are there other resources (see below) that'd be useful ahead of implementing the changes?
- https://forum.proxmox.com/threads/understanding-vlans-in-proxmox.68474
- https://engineerworkshop.com/blog/configuring-vlans-on-proxmox-an-introductory-guide
- https://bobcares.com/blog/proxmox-vlan-bridge (not incredibly useful, but gives a different perspective)
Hopefully I've made our position clear enough, but I'm happy to share more info if it'd help!
As for what our switches support re: protocols (LACP etc) - https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c03007561#N10D0B
Cheers,
LinuxOz