Remote access through Linux bridge with no IP/gateway

nickk

Aug 27, 2020
I have a single Proxmox host with two interfaces. The first is the onboard gigabit management interface, bridged as vmbr0.

To move the Proxmox host and VMs into their own subnet, I created a Linux VLAN interface called vmbr0.2, putting the host into VLAN 2. This interface has an IP and gateway defined.

I added a 10 GbE card and created a Linux Bridge interface, vmbr1.

I've created a Linux Mint VM and assigned it to vmbr1 with the vlan tag of 2.

vmbr1 does not have an IP or gateway defined.
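In case it helps, this is roughly how I understand the /etc/network/interfaces layout described above would look. Interface names (eno1 for the onboard NIC, enp1s0 for the 10 GbE card) and the addresses are placeholders, not my actual values:

```
# Sketch only - eno1 / enp1s0 / addresses are placeholder values

auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Host management address lives on the VLAN 2 sub-interface
auto vmbr0.2
iface vmbr0.2 inet static
    address 192.168.2.10/24
    gateway 192.168.2.1

# 10 GbE bridge for VMs - deliberately no IP or gateway
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```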

(please see image below)

On my client I've installed NoMachine, which uses port 4000 for remote desktop.

I am not able to connect to the Mint VM from a device in VLAN 1 (where my two Macs and Mint NUC are). All attempts return a time-out error.

The Mint VM is able to ping the gateway defined 'inside' the VM. It can telnet to port 4000 on my Mac and my Mac can telnet to its port 4000. A traceroute between the two is one hop, both ways. There are no firewall rules keeping the VLANs apart - it is simply a logical separation.
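For anyone debugging similar timeouts, these are the sort of checks that can show where the traffic dies on the Proxmox host. The tap interface name depends on the VM ID (tap100i0 here is a placeholder), so adjust to suit:

```
# Does NoMachine traffic arrive on the bridge from the physical side?
tcpdump -ni vmbr1 port 4000

# Does it make it through the bridge to the VM's tap interface?
# (tap<VMID>i0 - placeholder name, check with 'ip link')
tcpdump -ni tap100i0 port 4000
```

If packets appear on vmbr1 but never on the tap interface, the bridge's VLAN filtering is the likely culprit rather than routing.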

If I put the Mint VM on vmbr0 instead of vmbr1, I can connect to it. I assume, then, that I have not properly configured vmbr1 and that the Proxmox interface needs a default gateway, as well as the VM. I wasn't able to define an IP in the same range (2.X/24) or the same default gateway as vmbr0.2 has defined - I assume this would cause some sort of routing loop.

I'd be very grateful if folk could (gently) tell me why the setup isn't working as I imagined, and what I should do to allow access to the virtual machines.

Many thanks.

[Attached image: NIC config screenshot, 2025-09-15]
 
Try this:

ip link set eno1 down
sleep 2
ip link set eno1 up
sleep 5

This brings the link up again; I had comparable issues.
 
Wouldn't that sort of 'restart' the host interface?

Is that the problem I'm having? That the 10G card doesn't know how to 'route' out through the slower host interface?

I'll give the box a reboot - that should have the same effect?
 
I'm afraid the reboot didn't resolve the access problem. I think there's an interaction, as regards the gateway, between the VM and the bridge interface that sits above it.

I thought the VM would just route for itself, using the 10 GbE card as a virtual switch. It doesn't.
 
I don't know if it's useful but when I try to SSH in I receive this error *after* authenticating:

client_loop: send disconnect: Broken pipe

Is this problem obvious and I'm just not understanding something really basic? I don't remember needing to set an IP on a host adaptor when using a bodge USB 2.5 GbE adaptor.
 
Just a minor update - I wondered if having the VLAN port set to 1 ('allow all') was the problem, so I changed it to VLAN 2, the VLAN the Proxmox host and VMs are on. Initially I thought this had resolved the issue; alas, it didn't. I reverted that change, so *untagged* traffic from the Mellanox CX3 is still assigned to VLAN 1.
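For what it's worth, the untagged-traffic assignment mentioned above is the port's PVID on a VLAN-aware bridge, and it can be inspected and changed from the host. The commands below are a sketch; enp1s0 is a placeholder for the Mellanox CX3 port name:

```
# Show per-port VLAN membership and which VLAN is the PVID
# (the VLAN that untagged frames are assigned to)
bridge vlan show

# Example: make untagged frames on the physical port land in VLAN 2
# (enp1s0 is a placeholder - substitute the real port name)
bridge vlan add dev enp1s0 vid 2 pvid untagged
```

Note that changes made with `bridge vlan add` directly don't survive a reboot; the persistent place for this is /etc/network/interfaces (e.g. `bridge-pvid`).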

I thought, in my ignorance - oh, perhaps you need to create a VLAN for vmbr1? Which I duly did. That returned the error:
command 'ifreload -a' failed: exit code 1.

A bit of Googling suggested reinstalling ifupdown2 on the host (apt install ifupdown2 --reinstall).

I did this, and now both NoMachine remote desktop and SSH work.

I've a lot of learning to do - my bodged collection of devices doesn't help me much.