Can't get VLAN tagged traffic across Linux Bridge

jsalas424

Hello,

I can't seem to get tagged VM traffic to my router. I am successfully routing VLAN traffic on another Proxmox node using an OVS config, so it's a node-specific issue. 192.168.1.1 is my pfsense router. I have rebooted the node multiple times in between network changes.

I initially attempted it with the following OVS setup:

[Screenshot: initial OVS network configuration]

This is my current setup with a VLAN-aware Linux bridge:

[Screenshot: current VLAN-aware Linux bridge network configuration]

Here's what the networking configuration file looks like:
Code:
root@TracheNodeA:~# cat /etc/network/interfaces

auto lo
iface lo inet loopback

auto enp2s0
iface enp2s0 inet static
        address 10.0.0.3/24
#cluster NIC

auto eno1
iface eno1 inet manual
#LAN NIC

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.24/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#LAN Bridge

Any help would be greatly appreciated!!

PVE 7.1-6
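
A quick sanity check at this point (assuming ifupdown2, which PVE 7 normally uses) is to confirm that the running interfaces actually match this file:

Code:
# reload /etc/network/interfaces and verify every attribute was applied
ifreload -a
ifquery --check -a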
 
Where and how do you configure the VLAN tag?
If it is inside the VM, please provide its network configuration.
 
I configure the VLAN tag in the VM hardware pane for both the OVS and Linux bridge scenarios:

[Screenshot: VM network device with the VLAN tag set in the hardware pane]

This is the same way I'm doing it on my other PVE node.
 
Could you watch the traffic with `tcpdump` to see how it arrives on the bridge and how it is forwarded to the underlying `eno1`?
For this, run the following commands:
Code:
tcpdump -envi vmbr0 -w vmbr0.pcap
tcpdump -envi eno1 -w eno1.pcap
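
To see whether the tag survives each hop, the captures can then be filtered on the VLAN ID when reading them back, roughly like this (assuming VLAN 3 and the file names above):

Code:
# show only 802.1Q frames carrying VLAN ID 3; -e prints the link-level header with the tag
tcpdump -env -r vmbr0.pcap vlan 3
tcpdump -env -r eno1.pcap vlan 3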
 
Thanks Mira,

I turned off all the other VMs on the host and ran the commands and was able to get clean data for vmbr0. I then tagged the VM with VLAN 3 and rebooted. I stopped the tcpdump at that point.

I was able to run the tcpdump for eno1 once, but then I got a permission denied error. Sorry, the file is really messy; this one wasn't captured with the other VMs shut off.

Code:
root@TracheNodeA:~# tcpdump -envi eno1 -w eno1.pcacp
tcpdump: eno1.pcacp: Permission denied

Here are the files: https://ufile.io/f/yuhhm
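
If that "Permission denied" comes from tcpdump dropping privileges (or from an AppArmor-style profile) after it opens the interface, which is common on Debian-based builds, writing the file to a world-writable path usually sidesteps it, e.g.:

Code:
# assumption: the error is caused by tcpdump's privilege drop, not by the capture itself
tcpdump -envi eno1 -w /tmp/eno1.pcap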
 
What IP is configured in the VM?
Did anything run that would send packets to your router, i.e. packets that should carry VLAN tag 3?
 
All my VMs are configured to grab IPs via DHCP; my pfsense router runs a DHCP server for each VLAN. Here is the working config I have on my other PVE node, as well as the associated VM config where traffic is successfully being passed to VLAN 3.

[Screenshots: working network config on the other PVE node and the associated VM network device tagged with VLAN 3]

The node whose config you're seeing here is the one running the pfsense VM at 192.168.1.1.
 

As I was writing up this issue, I upgraded my other node to PVE 7. Now my VMs can't reach their VLAN gateway, and I suspect that the PVE 6-to-7 upgrade broke openvswitch.

Edit: I am now certain that my VLAN tagging issues are related to my upgrade from PVE 6 to 7. I had a setup working perfectly fine, and now VMs can't access their VLAN gateway. How can I further investigate this?
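
When a dist-upgrade changes VLAN behaviour like this, one quick thing to compare between the upgraded and the still-working node is which networking packages actually ended up installed and what the networking service logged at boot, for instance:

Code:
# compare the network stack packages between nodes
dpkg -l | grep -E 'ifupdown|openvswitch|bridge-utils'
# and check what was applied to the interfaces at boot
journalctl -b -u networking --no-pager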
 
I cut openvswitch out of the picture and still none of my tagged VMs are working correctly after the 6 to 7 upgrade:

Here is the current networking with VLAN aware Linux Bridges:
[Screenshot: current networking with VLAN-aware Linux bridges]

The VM can't access its gateway:

Code:
jon@zoneminder:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 0e:3f:ea:81:f7:09 brd ff:ff:ff:ff:ff:ff
inet 192.168.3.10/24 brd 192.168.3.255 scope global dynamic ens18
valid_lft 6974sec preferred_lft 6974sec
inet6 fe80::c3f:eaff:fe81:f709/64 scope link
valid_lft forever preferred_lft forever
jon@zoneminder:~$ ping 192.168.3.1
PING 192.168.3.1 (192.168.3.1) 56(84) bytes of data.
^C
--- 192.168.3.1 ping statistics ---
9 packets transmitted, 0 received, 100% packet loss, time 8110ms
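
A capture on the host while the VM pings 192.168.3.1 should show whether the ARP request even leaves the VM's port and whether it is tagged on the uplink. A rough sketch (tapXXXi0 is a placeholder for the VM's tap device, and the uplink NIC name may differ on that node):

Code:
# VM-facing side: does the ARP for the gateway arrive at all?
tcpdump -eni tapXXXi0 'arp or icmp'
# uplink side (only relevant if the gateway/pfsense is on another host): is it tagged with VLAN 3?
tcpdump -eni eno1 'vlan 3 and (arp or icmp)'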
 
Last update for the day:

Digging into the logs, I found this when I assign a VLAN tag to the VM:

Code:
Dec 28 15:24:53 TracheNodeA pvedaemon[1116]: <root@pam> update VM 42069: -net0 virtio=7E:EA:78:9D:EF:1F,bridge=vmbr0,tag=3
Dec 28 15:24:53 TracheNodeA kernel: vmbr0: port 2(tap42069i0) entered disabled state
Dec 28 15:24:53 TracheNodeA ovs-vsctl[39996]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln42069i0
Dec 28 15:24:53 TracheNodeA ovs-vsctl[39996]: ovs|00002|db_ctl_base|ERR|no port named fwln42069i0
Dec 28 15:24:53 TracheNodeA ovs-vsctl[39997]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap42069i0
Dec 28 15:24:53 TracheNodeA ovs-vsctl[39997]: ovs|00002|db_ctl_base|ERR|no port named tap42069i0
Dec 28 15:24:53 TracheNodeA ovs-vsctl[39998]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap42069i0
Dec 28 15:24:53 TracheNodeA ovs-vsctl[39998]: ovs|00002|db_ctl_base|ERR|no port named tap42069i0
Dec 28 15:24:53 TracheNodeA ovs-vsctl[39999]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln42069i0
Dec 28 15:24:53 TracheNodeA ovs-vsctl[39999]: ovs|00002|db_ctl_base|ERR|no port named fwln42069i0
Dec 28 15:24:53 TracheNodeA kernel: vmbr0: port 2(tap42069i0) entered blocking state
Dec 28 15:24:53 TracheNodeA kernel: vmbr0: port 2(tap42069i0) entered disabled state
Dec 28 15:24:53 TracheNodeA kernel: vmbr0: port 2(tap42069i0) entered blocking state
Dec 28 15:24:53 TracheNodeA kernel: vmbr0: port 2(tap42069i0) entered forwarding state


Even though I'm NOT using OVS in my network config, it's getting called when I try to assign the VLAN tag?
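
Those failing ovs-vsctl del-port calls ("no port named ...") look like harmless cleanup attempts during the NIC hot-plug rather than OVS actually being used, but it may be worth confirming that no leftover OVS state is still active after switching to Linux bridges, e.g.:

Code:
# is openvswitch still installed and running, and does it still own any bridges or ports?
dpkg -l | grep openvswitch
systemctl status openvswitch-switch --no-pager
ovs-vsctl show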
 
Did you reboot after changing from OVS to Linux bridges? If not, that might explain it.

Please provide the output of `ip -details a`.
 
Did you reboot after changing from OVS to Linux bridges? If not, that might explain it.

Please provide the output of `ip -details a`.
I have rebooted each machine a couple of times. Here are the requested details:

Code:
root@TracheNodeA:~# ip -details a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether f8:b1:56:d2:5b:9e brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 9000
    bridge_slave state forwarding priority 32 cost 4 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.f8:b1:56:d2:5b:9e designated_root 8000.f8:b1:56:d2:5b:9e hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    altname enp0s25
3: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 68:05:ca:9b:30:bf brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9212 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 10.0.0.3/24 scope global enp2s0
       valid_lft forever preferred_lft forever
    inet6 fe80::6a05:caff:fe9b:30bf/64 scope link
       valid_lft forever preferred_lft forever
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f8:b1:56:d2:5b:9e brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535
    bridge forward_delay 0 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 1 vlan_protocol 802.1Q bridge_id 8000.f8:b1:56:d2:5b:9e designated_root 8000.f8:b1:56:d2:5b:9e root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer   42.94 vlan_default_pvid 1 vlan_stats_enabled 0 vlan_stats_per_port 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3124 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 192.168.1.24/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::fab1:56ff:fed2:5b9e/64 scope link
       valid_lft forever preferred_lft forever
14: veth100i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
    link/ether fe:89:f7:14:e0:a4 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1 minmtu 68 maxmtu 65535
    veth
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8002 port_no 0x2 designated_port 32770 designated_cost 0 designated_bridge 8000.4e:6f:c9:63:53:b4 designated_root 8000.4e:6f:c9:63:53:b4 hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
15: fwbr100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4e:6f:c9:63:53:b4 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535
    bridge forward_delay 0 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.4e:6f:c9:63:53:b4 designated_root 8000.4e:6f:c9:63:53:b4 root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer   29.12 vlan_default_pvid 1 vlan_stats_enabled 0 vlan_stats_per_port 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3124 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
16: fwpr100p0@fwln100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 9a:86:e3:2b:75:f5 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
    veth
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8003 port_no 0x3 designated_port 32771 designated_cost 0 designated_bridge 8000.f8:b1:56:d2:5b:9e designated_root 8000.f8:b1:56:d2:5b:9e hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
17: fwln100i0@fwpr100p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
    link/ether fa:d0:2f:e8:02:61 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
    veth
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.4e:6f:c9:63:53:b4 designated_root 8000.4e:6f:c9:63:53:b4 hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
19: tap500i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 0e:c0:35:66:78:4a brd ff:ff:ff:ff:ff:ff promiscuity 2 minmtu 68 maxmtu 65521
    tun type tap pi off vnet_hdr on persist off
    bridge_slave state forwarding priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8005 port_no 0x5 designated_port 32773 designated_cost 0 designated_bridge 8000.f8:b1:56:d2:5b:9e designated_root 8000.f8:b1:56:d2:5b:9e hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
20: tap800i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 0e:cc:e7:16:e0:d7 brd ff:ff:ff:ff:ff:ff promiscuity 2 minmtu 68 maxmtu 65521
    tun type tap pi off vnet_hdr on persist off
    bridge_slave state forwarding priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8006 port_no 0x6 designated_port 32774 designated_cost 0 designated_bridge 8000.f8:b1:56:d2:5b:9e designated_root 8000.f8:b1:56:d2:5b:9e hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
21: tap42069i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 8a:d4:75:0c:b8:dc brd ff:ff:ff:ff:ff:ff promiscuity 2 minmtu 68 maxmtu 65521
    tun type tap pi off vnet_hdr on persist off
    bridge_slave state forwarding priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8002 port_no 0x2 designated_port 32770 designated_cost 0 designated_bridge 8000.f8:b1:56:d2:5b:9e designated_root 8000.f8:b1:56:d2:5b:9e hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
22: tap400i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether a6:0c:e4:31:2e:8b brd ff:ff:ff:ff:ff:ff promiscuity 2 minmtu 68 maxmtu 65521
    tun type tap pi off vnet_hdr on persist off
    bridge_slave state forwarding priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8004 port_no 0x4 designated_port 32772 designated_cost 0 designated_bridge 8000.f8:b1:56:d2:5b:9e designated_root 8000.f8:b1:56:d2:5b:9e hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
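
ip -details shows vlan_filtering 1 on vmbr0 but not the per-port VLAN table; that part can be checked with the bridge tool, for example:

Code:
# which VLANs each bridge port actually carries: eno1 should list the 2-4094 range,
# and the tagged VM's tap should show VLAN 3 as its PVID
bridge vlan show
bridge vlan show dev tap42069i0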
 
Can 2 VMs on the same host communicate in that VLAN?
 
Can 2 VMs on the same host communicate in that VLAN?
I tagged two VMs to VLAN 3 and then assigned static IPs in the /24 subnet. I was able to get them to ping each other!

[Screenshots: the two VMs' network devices tagged with VLAN 3 and the successful pings between them]

So it seems that my VLAN traffic isn't getting to my pfsense VM anymore. I can't ping the gateway for VLAN 3, but intra-VLAN 3 traffic works. I also removed the VLAN tag from one of the VMs and confirmed that I could no longer ping across VLAN segments.

pfsense is still routing VLAN traffic properly outside the Proxmox boxes. VMs on the untagged VLAN 1 can also reach other VLANs properly.

The problem seems isolated to getting tagged VM traffic to the pfsense VM on the same Proxmox host.
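
One more thing worth comparing on that host (the VMIDs below are placeholders): how the pfsense VM's vNIC and a test VM's vNIC are attached, i.e. whether pfsense gets an untagged trunk port on vmbr0 while the test VM carries the tag, and whether both taps actually have VLAN 3 on the bridge:

Code:
# <pfsense-vmid> and <test-vmid> are placeholders - substitute the real IDs
qm config <pfsense-vmid> | grep ^net   # usually no tag= here; pfsense tags its own VLAN interfaces
qm config <test-vmid> | grep ^net      # should show tag=3
bridge vlan show                       # both taps and the uplink need to carry VLAN 3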
 
Here is the network config for the Proxmox host that also has the pfsense VM. I can't get tagged traffic within this host to reach the pfsense gateways anymore. All these configs were working perfectly fine for years. :(

https://pastebin.com/CBBz9gw5

[Screenshots: network configuration of the host running the pfsense VM]
 
pfSense in a VM on the same host as the other VMs, or a different host?
If it is a different host, is there a switch between those?
 
pfSense in a VM on the same host as the other VMs, or a different host?
If it is a different host, is there a switch between those?
I have both scenarios. There is also a switch between them, and it is still successfully passing other tagged traffic.

To simplify: the most recent post reflects the host that has BOTH pfsense and the VMs together. If I get that working, I'm sure I can get it across my external switches.
 
I have recently installed a brand-new Intel NIC and will keep trying at this. This is otherwise a fairly basic configuration that still cannot pass VLAN-tagged packets.
 
