2 NICs on same subnet - 1 for management, 1 for VMs?

hp1

New Member
Jun 26, 2024
Hello,

I have a server with 3 interfaces: one is a 1G onboard port (enp0s31f6), and two are 10G ports (enp1s0f0/f1) on a single adapter. I just got the 10G card, so I've been using the 1G interface for both management and VM access. My /etc/network/interfaces is below.

I set up a new bridge with the 10G adapter, but whenever I change the VMs to use vmbr1, they lose network connectivity. I tried adding an IP address to the second bridge, and I can ping that address from other machines, but the VMs still can't communicate out. It feels like a simple issue, but I've been searching and haven't found an answer yet. Hoping someone is configured the same way and can offer suggestions.


Code:
auto lo
iface lo inet loopback

iface enp0s31f6 inet manual
iface enp1s0f0 inet manual
iface enp1s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.1
        bridge-ports enp0s31f6
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.168.0.4/24
        bridge-ports enp1s0f0
        bridge-stp off
        bridge-fd 0

#10GbE
 
So your VMs are also in that same subnet?
How should the kernel know which interface to use for packets destined to a host on that subnet?
It might work if you explicitly tell the kernel on which interface it can find each host, e.g. route add -host 192.168.0.23 dev vmbr1
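Such a per-host route can also be made persistent in /etc/network/interfaces (a sketch based on the vmbr1 stanza above; 192.168.0.23 stands in for a VM's address):

Code:
auto vmbr1
iface vmbr1 inet static
        address 192.168.0.4/24
        bridge-ports enp1s0f0
        bridge-stp off
        bridge-fd 0
        # pin this host to the 10G bridge; without it the kernel may
        # pick vmbr0, since both bridges sit on 192.168.0.0/24
        post-up ip route add 192.168.0.23/32 dev vmbr1
        pre-down ip route del 192.168.0.23/32 dev vmbr1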
 
Thank you for the reply, and that is a great question. I was hoping to use the 10G connection to provide additional bandwidth to the VMs, while keeping the 1G bridge for low-bandwidth items and for access to Proxmox if the 10G switch breaks; I only have one of those, but plenty of 1G switches.

Does it make more sense to add the 10G adapter into the vmbr1 bridge? I just want to create a high-bandwidth connection for the VMs in the simplest, best-supported way possible that doesn't involve VLANs.
 
Sorry I wasn't clear.

I have a server with an onboard gig port and a separate dual-port 10G PCIe card. I've been using the onboard port for management and VM access; now that I have a 10G port, I'd like to use its bandwidth for the VMs, while keeping the 1G active for management access (and potentially VM access as a backup path).

Would a bond be a better approach, with the 10G as the primary, but the secondary available in case my switch with the single 10G port breaks and I need to access the proxmox UI?

Just want to be able to add the bandwidth of my new 10G capability.
 
Would a bond be a better approach, with the 10G as the primary, but the secondary available in case my switch with the single 10G port breaks and I need to access the proxmox UI?
Asymmetric bond is a bad idea. Might work with active-backup, though.
Using the two 10G ports for bonding is not an option?
 
I only have a single 10G port right now, I thought I might use the other 10G in the future if I wanted a separate storage network.

I saw another post about a similar asymmetric bond, setting the 10G as the primary. Ideally, having two bridges would give me access to the Proxmox UI on either bridge address, and I could manually move VMs to either bridge if I had a problem with the 10G port on the server or the switch.
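For reference, an active-backup bond along those lines could look like this in /etc/network/interfaces (a sketch, untested; interface names taken from the config earlier in the thread, with the 10G port as primary):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp0s31f6
        bond-mode active-backup
        bond-primary enp1s0f0
        bond-miimon 100

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

active-backup is the one bonding mode that tolerates mismatched link speeds, since only one slave carries traffic at a time; failover to the 1G port is automatic when the 10G link goes down.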
 
I have it running OK using two NICs on the same subnet. The kernel knows exactly where to send the traffic, since each VM/LXC only uses one of the NICs.
I'm using one NIC dedicated to my NAS on one of the VMs. All other traffic goes to the other NIC, no problemo...
 
I have it running OK using two NICs on the same subnet. The kernel knows exactly where to send the traffic, since each VM/LXC only uses one of the NICs.
I'm using one NIC dedicated to my NAS on one of the VMs. All other traffic goes to the other NIC, no problemo...
Do you use two bridges? Can you share your configuration?
 
Yes, using vmbr0 and vmbr1

Code:
auto vmbr0
iface vmbr0 inet dhcp
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet dhcp
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0

I'm using dhcp from my router!

Both NICs are on the 192.168.1.0/24 subnet.
 
The default route is via 192.168.1.2 dev vmbr0, which is my router.

Make sure to disable the firewall on the VMs when testing, as it is probably the reason the VMs have no connection!
 
Yes, using vmbr0 and vmbr1

Code:
auto vmbr0
iface vmbr0 inet dhcp
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet dhcp
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0

I'm using dhcp from my router!

Both NICs are on the 192.168.1.0/24 subnet.
Interesting. You don't have iface enp1s0 or iface enp2s0 lines in your config?
So since both addresses are fetched via DHCP, both bridges would have a gateway? Proxmox doesn't let me add the same gateway to vmbr1, so I don't know what would happen if I added it. My Proxmox server is currently headless, so I can't make significant network changes until I move a few things around.
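If the single-gateway restriction ever becomes the blocker, one workaround is a second default route on vmbr1 with a worse metric, so it only takes over when the vmbr0 route disappears (a sketch; unverified on PVE):

Code:
auto vmbr1
iface vmbr1 inet static
        address 192.168.0.4/24
        bridge-ports enp1s0f0
        bridge-stp off
        bridge-fd 0
        # backup default route; the higher metric loses to vmbr0's route
        post-up ip route add default via 192.168.0.1 dev vmbr1 metric 200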
 
Of course I have, since the 'real' interfaces are connected to the bridges; they're just not shown here (this is only the bridge part of my /etc/network/interfaces).
The gateway is set automatically by the router's DHCP server.
What's the output of nmcli conn show?
 
Of course I have, since the 'real' interfaces are connected to the bridges; they're just not shown here (this is only the bridge part of my /etc/network/interfaces).
The gateway is set automatically by the router's DHCP server.
What's the output of nmcli conn show?
On my PVE (8.2.4) I don't have nmcli installed, and given the current headless setup, I don't want to introduce additional packages without a way to access the console.

There is no firewall on the VMs. If I ping the router from a VM on vmbr0, I get a response; when I change the VM's bridge, I get no connection. Changing back to vmbr0, things connect again, so from what I can see it's not a VM configuration issue.
 
Your /etc/network/interfaces file looks fine to me, although I'm not sure about the f1, f0 and f6 suffixes at the end of the interface names?

The bridges can definitely have the same gateway. BTW, if you change the bridge connection you have to restart your network services for it to take effect!

Try traceroute to diagnose your network routing.
 
Your /etc/network/interfaces file looks fine to me, although I'm not sure about the f1, f0 and f6 suffixes at the end of the interface names?

The bridges can definitely have the same gateway. BTW, if you change the bridge connection you have to restart your network services for it to take effect!

Try traceroute to diagnose your network routing.
I'm not sure either. enp1s0 is the 10G adapter, and the fN suffix is the PCI function number, so f0 is the first port and f1 the second. enp0s31f6 is actually the onboard NIC, which should be named eno* according to the systemd naming rules, but it's what comes up at boot, so it's fine.

I'll try adding the gateway to the bridge through the file and restarting networking once I can get a monitor and keyboard to the physical server.
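For diagnosing which path traffic actually takes, a few read-only commands on the PVE host go a long way (192.168.0.23 again stands in for a VM address):

```shell
# Which interface and source address the kernel would pick for a destination
ip route get 192.168.0.23

# List bridges and their member ports; a VM's tap device should
# appear under the bridge you expect
ip -br link show type bridge
bridge link show

# Full routing table, to spot two connected routes covering the same subnet
ip route show
```

None of these change any state, so they are safe to run on a headless box over SSH.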
 
