Proxmox host unable to reach the internet when a gateway for a second VLAN is added.

Enotsz

New Member
Mar 6, 2025
Currently I have my Proxmox host up and running with both untagged and tagged networks configured. When I add the gateway for the last VLAN in the setup, the Proxmox host stops behaving correctly and can't reach the internet.

This is how the configuration looks without the gateway:

/etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

iface enx04bf1b359209 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.142/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 20 30

auto vmbr0.20
iface vmbr0.20 inet static
        address 192.168.20.142/24
        gateway 192.168.20.1

auto vmbr0.30
iface vmbr0.30 inet static
        address 192.168.30.142/24

With the configuration shown, everything seems to work. However, if I add "gateway 192.168.30.1" to vmbr0.30, I lose internet access from the Proxmox host.
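For clarity, this is the exact change that triggers it (the same vmbr0.30 stanza, just with the gateway line added):

Code:
auto vmbr0.30
iface vmbr0.30 inet static
        address 192.168.30.142/24
        gateway 192.168.30.1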

Any ideas of what the problem might be?
 
Any ideas of what the problem might be?
You CAN NOT have two DEFAULT gateways! Remove one. (You can have any number of networks, each with a specific route into them.)

To connect several separate networks you usually use a router ;-)
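If the host itself really needs to reach something that sits behind one of the other gateways, you can keep a single default gateway and add a specific route instead. A minimal sketch for /etc/network/interfaces (the 192.168.40.0/24 subnet below is only a made-up example of a remote network behind the VLAN 20 router):

Code:
# no "gateway" line in this stanza: the host keeps its single default gateway on vmbr0
# the 192.168.40.0/24 route is a made-up example of a remote subnet behind the VLAN 20 router
auto vmbr0.20
iface vmbr0.20 inet static
        address 192.168.20.142/24
        post-up ip route add 192.168.40.0/24 via 192.168.20.1 dev vmbr0.20
        pre-down ip route del 192.168.40.0/24 via 192.168.20.1 dev vmbr0.20

Note that in your case the host already has an address directly in 192.168.20.0/24 and 192.168.30.0/24, so it can reach those two networks without any extra gateway at all.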
 
You CAN NOT have two DEFAULT gateways! Remove one. (You can have any number of networks, each with a specific route into them.)

To connect several separate networks you usually use a router ;-)
Thanks for your reply.

While I am still trying to fully digest your reply, there is something more I need to understand.

I have one main (untagged) LAN and two VLANs on my router. The VLANs can't reach each other's network gateway, so each has its own gateway on its own network (VLAN 20 has x.x.20.1 and VLAN 30 has x.x.30.1).
On the Proxmox host I have an LXC that serves as a multicast relay between the VLANs. This relay LXC needs interfaces configured on all VLANs so it can listen for multicast.
To configure the VLAN interfaces on the LXC for multicast, I need to configure Linux VLANs on the Proxmox host so that I can bridge them into the LXC.
Now, to configure those Linux VLANs, I need to give them their specific gateways, otherwise they won't be able to send messages, since the gateways on the router are only reachable from their own VLAN.
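To make it concrete, what I am after is the relay container having one interface per VLAN, something like this in its config (the VMID and addresses below are only placeholders):

Code:
# /etc/pve/lxc/<vmid>.conf -- placeholder values, one NIC per network
net0: name=eth0,bridge=vmbr0,ip=192.168.1.150/24,gw=192.168.1.1
net1: name=eth1,bridge=vmbr0,tag=20,ip=192.168.20.150/24
net2: name=eth2,bridge=vmbr0,tag=30,ip=192.168.30.150/24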

Does this make sense, or am I mixing up (not understanding) what Proxmox VLANs are? Should they not just be an interface declaration for the existing networks available on the physical port?

Disclaimer: I started using Proxmox yesterday for the first time, so I might be wrong about a multitude of things.
 
Well..., there are multiple ways to work with separated networks and/or VLANs in PVE. At least: a) "everything is trunked, pick your VLAN by setting the tag yourself", b) "there is no VLAN, you only get untagged traffic on this (your VM) interface", and c) SDN.

Maybe SDN is mis-listed here. I am not sure because I do not use it :-)

From a) and b) I chose b) because it looks simpler to me and fits my brain better. For c) my clusters are too small to be tempting and I am too old ;-)

But I have one or two dozen networks. And one or two dozen VLANs. Remember those two things represent two independent layers of the network stack. Usually I use a one-to-one relationship to glue them together. But this is a choice of mine, not a requirement.

My approach is to give a VM (or a container) one (or more) NICs, each of which delivers exactly one network to it. VM NICs are connected to PVE "classic" bridges. So I need one bridge per IP network. Or one bridge per VLAN - which in my case is exactly the same.

The following is an excerpt of my /etc/network/interfaces and it establishes two bridges with an IP address and a third bridge without an assigned IP address. You only need an IP address on a bridge if PVE itself needs to access that network directly. All of my traffic goes through a router and there is only one default gateway known by this PVE. (The other bridges do not have an IP address. I picked the listed subset for the sake of this demonstration.)

Code:
iface eno1 inet manual
iface enp4s0 inet manual

auto vmbr3
iface vmbr3 inet static
        address 10.3.16.8/16
        gateway 10.3.12.254
        bridge-ports enp4s0
        bridge-stp off
        bridge-fd 0
# san

auto vmbr2
iface vmbr2 inet manual
        bridge-ports eno1.2
        bridge-stp off
        bridge-fd 0
#dmz

auto vmbr11
iface vmbr11 inet static
        address 10.11.16.8/16
        bridge-ports eno1.11
        bridge-stp off
        bridge-fd 0
#adm

Note that vmbr3 is not bound to a VLAN but to a separate NIC. Untagged. I want this traffic physically separated from the other main trunk, which contains several networks. (Side note: corosync is configured to use one ring on each of these two physical NICs.)

The two other bridges use the same physical NIC, separated by VLAN tags. vmbr2 is called "dmz" and PVE is NOT reachable directly from there. On the other hand vmbr11 has an IP address and this means each VM inside that network may talk to this PVE. Without routing!

PVE can talk to each network it has a bridge with an IP address in. Without routing.

For everything else data is sent to the default gateway. That's it. At least for this topology, which is as simple as it can be.

vmbr2 may carry (IP) traffic of any kind. PVE does not see it. PVE cannot be accessed or attacked from there. For VMs connected to that bridge there is no exit through PVE. There must be (and of course there is) another router inside that network which handles traffic in and out.
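For completeness, on the VM side this just means pointing each NIC at the right bridge; no VLAN tag is needed on the VM itself because the bridge is already bound to one VLAN. A sketch (the VMID and MAC addresses below are only placeholders):

Code:
# /etc/pve/qemu-server/<vmid>.conf -- placeholder values
# one NIC in "adm" (vmbr11) and one in "dmz" (vmbr2); no tag= needed,
# since the tagging already happens on the bridge ports (eno1.11 / eno1.2)
net0: virtio=BC:24:11:11:11:11,bridge=vmbr11
net1: virtio=BC:24:11:22:22:22,bridge=vmbr2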


Probably one could write books about the different ways to connect PVE and some VMs to the Internet ;-)

Compare also: https://pve.proxmox.com/wiki/Network_Configuration#_default_configuration_using_a_bridge ff.
 
Got back here to share my experience.

In Proxmox you don't define gateways for VLANs on the host. Instead, you define the gateway on the interface that needs it, inside the containers or the virtual machines.
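For anyone finding this later, a minimal sketch of what that means for a container NIC (the VMID and addresses below are only placeholders): the gw= option sits on the guest's interface, while the host's vmbr0.20/vmbr0.30 keep no gateway line.

Code:
# /etc/pve/lxc/<vmid>.conf -- placeholder values
net1: name=eth1,bridge=vmbr0,tag=30,ip=192.168.30.150/24,gw=192.168.30.1

(Just like on the host, a guest itself normally still wants only a single default gateway across all of its interfaces.)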

Thanks to everyone//