Multiple VMs on the same bridge: no DNS

sabrtooth

New Member
Feb 6, 2024
Hey guys -- this is a weird one, I'm going to do my best to describe it.

Overview: I have a 10Gb SFP+ trunk to a server. When two VMs are attached to the same vmbr using virtio, I lose DNS. I can ping, route, and connect to services, but I can't reach UDP/TCP 53 on any device, internal or external.

Considerations:
1. If I switch the VMs to independent vmbrs using different NICs, everything works, but speed is slow (this is likely by design).
2. If I put both VMs on the same vmbr but change the NIC model to Realtek RTL8139, everything works, but speed is slow (this is likely by design).
3. Clients connected to the same switch on a VLAN-configured port work as intended.
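For anyone hitting the same symptom, a quick way to separate "DNS broken" from "routing broken" from inside the affected VM is to probe port 53 directly (10.0.20.1 is my router VM; 9.9.9.9 is just an example public resolver):

```shell
# ICMP and TCP to other services already work, so test port 53 specifically.
# Query the internal resolver over UDP:
dig +time=2 +tries=1 @10.0.20.1 example.com

# Same query forced over TCP, to see whether both transports fail:
dig +tcp +time=2 +tries=1 @10.0.20.1 example.com

# Probe an external resolver to confirm it is not just the local one:
dig +time=2 +tries=1 @9.9.9.9 example.com
```

If these time out while ping to the same hosts works, the problem is specific to port 53 traffic, which points away from routing and toward something mangling small UDP packets (e.g. checksum/segmentation offload).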

Layout:
Proxmox VE 8.1.3 Host
- X10SDV-TP8F SuperMicro Motherboard
- 128GB RAM
- 4 Cores, 8 threads
- pve-firewall off
- vmbr0 - Linux Bridge - eno8 - D1500 SFP+ SoC
- vmbr0.1 - Linux Lan 192.168.10.10/24 | gateway 192.168.10.1
- vmbr1 - Linux Bridge - eno1 - RTL8211E 1GbE

VM1 - 101 - FreeBSD 14 Router [4 Cores/8GB Ram]
- vtnet0 - EXT.TER.NAL.IP/24 - virtio,bridge=vmbr0,tag=1
- PF firewall and Nat
- vtnet1 - 10.0.20.1/24 INTERNAL - virtio,bridge=vmbr0,tag=1003

VM2 - 103 - Ubuntu 22.04 Test Box [4 Cores/16GB Ram]
- ens18 - 10.0.20.10/24 - virtio,bridge=vmbr0,tag=1003
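For completeness, the matching NIC lines in the VM configs (`/etc/pve/qemu-server/101.conf` and `103.conf`) look roughly like this -- the MAC addresses below are placeholders, not my real ones:

```shell
# 101.conf (FreeBSD router) -- two virtio NICs on the same VLAN-aware bridge
net0: virtio=AA:BB:CC:00:00:01,bridge=vmbr0,tag=1
net1: virtio=AA:BB:CC:00:00:02,bridge=vmbr0,tag=1003

# 103.conf (Ubuntu test box)
net0: virtio=AA:BB:CC:00:00:03,bridge=vmbr0,tag=1003
```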

The layout looks like this.
Code:
                        +-------------------------------+
                        |     Proxmox                   |
                        |                               |
+------+    +------+    +----+   +-----+                |
|Modem +----+Switch+----+eno8+---+vmbr0|                |
+------+    +-+----+    +----+   +-+-+-+                |
              |         |        | | |                  |
             vlan1003   |   +----+ | +---------+        |
              |         |   vlan1  vlan1003    vlan1003 |
            +-+----+    |   |    +-+-+         +-+-+    |
            |Client|    |   +----|VM1|         |VM2|    |
            +------+    |        +---+         +---+    |
                        |                               |
                        +-------------------------------+



Code:
/etc/network/interfaces

# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

iface eno5 inet manual

iface eno6 inet manual

iface eno7 inet manual

iface eno8 inet manual

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno8
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#10GbE D-1500 DAC Fiber Trunk

auto vmbr0.1
iface vmbr0.1 inet static
        address 192.168.10.20/24
        gateway 192.168.10.1

auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
#RTL8211E - IPMI
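With a VLAN-aware bridge, you can confirm from the host which VLANs each bridge port actually carries (the tap interface names depend on your VMIDs):

```shell
# Show per-port VLAN membership on the bridge
bridge vlan show

# List the ports attached to vmbr0 (expect eno8 plus the VM tap devices,
# e.g. tap101i0, tap101i1, tap103i0)
ls /sys/class/net/vmbr0/brif/
```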

Screenshots attached of examples.

Notes:
1. I think the above screenshots rule out routing concerns, as everything appears to work so long as both VMs are not on the same vmbr with the same NIC model.
2. It should be noted that, in working scenarios, iperf results between machines match the physical link capabilities, but to the internet they can be very slow, around 10 Mbit/s -- I include this in case it is relevant.
-- iperf between 10.0.20.10 and 10.0.20.1 is around 8 Gbit/s on vtnet1 via vmbr0 (SFP+)
-- iperf between 10.0.20.10 and 10.0.20.1 is around 750 Mbit/s on vtnet1 via vmbr1 (1GbE)
-- iperf between VM1 EXTERNAL and an external IP is around 650 Mbit/s on vtnet0 via vmbr0 (500 Mbit/s fiber service)
-- iperf between 10.0.20.10 and an external IP is around 9 Mbit/s on ens18 via vmbr0 (NAT'd through vtnet1 to vtnet0)
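The numbers above can be reproduced with plain iperf3, one side as server and one as client:

```shell
# On the router VM (10.0.20.1):
iperf3 -s

# On the Ubuntu test box:
iperf3 -c 10.0.20.1 -t 10       # TCP throughput
iperf3 -c 10.0.20.1 -u -b 1G    # UDP at a 1 Gbit/s offered rate, worth
                                # testing since the DNS symptom is UDP-specific
```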

Hope I can get some help, I'm having a lot of fun figuring this all out.
 
To anyone reading this, I've managed to work out a solution. The issue did seem to stem from offloading. I got the speed back by disabling offloading on all BSD NICs:

Bash:
# turn off offloading -- DO THIS FOR ALL NICs in FreeBSD 13.1 and 14
ifconfig vtnet0 -tso -lro -rxcsum -txcsum
# to turn offloading back on
ifconfig vtnet0 tso lro rxcsum txcsum
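Those ifconfig settings are lost on reboot; to make them persistent in FreeBSD, the flags can be appended to each interface's line in /etc/rc.conf (addresses here match my setup -- adjust for yours):

```shell
# /etc/rc.conf -- append the offload flags to each vtnet interface
ifconfig_vtnet0="inet EXT.TER.NAL.IP/24 -tso -lro -rxcsum -txcsum"
ifconfig_vtnet1="inet 10.0.20.1/24 -tso -lro -rxcsum -txcsum"
```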

I still had the problem with them sharing the same physical NIC.

Please note: the FreeBSD documentation states that offloading needs to be turned off under Proxmox. Speed wasn't my primary concern, but I wanted to state that.

Performing updates seemed to fix it, allowing me to re-enable the PVE firewall too. Wanted to share a solution.
 
You faced the same kind of behavior I had in the past with pfSense/OPNsense and FreeBSD in general under KVM.
I was never able to make it work properly, so I replaced it with VyOS.

Just in case, also have a look at the offloading settings of your NIC on the PVE host with ethtool.

Bash:
ethtool -k eno8

Maybe you should also disable TSO on the Proxmox NIC attached to pfSense.

Bash:
ethtool -K eno8 tso off

In addition, only TX offloading should be disabled on the pfSense NICs.

https://docs.netgate.com/pfsense/en/latest/virtualization/virtio.html
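An `ethtool -K` change also doesn't survive a reboot on the PVE side; one common way to persist it (assuming ifupdown2, the PVE default) is a post-up hook on the bridge port in /etc/network/interfaces:

```shell
# /etc/network/interfaces fragment -- disable TSO on eno8 at ifup time
iface eno8 inet manual
        post-up /usr/sbin/ethtool -K eno8 tso off
```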

Another point: disabling these kinds of hardware features will put more stress on your CPU.

Check out the original FreeBSD bug: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=165059

Igor Raschetov 2024-01-29 11:49:41 UTC
Hello
Adding parameters to /boot/loader.conf

hw.vtnet.X.tso_disable="1"
hw.vtnet.tso_disable="1"
hw.vtnet.lro_disable="1"
hw.vtnet.X.lro_disable="1"
hw.vtnet.csum_disable="1"
hw.vtnet.X.csum_disable="1"

Solved the problem

Last but not least, your main issue seems to be with DNS queries over UDP, I guess.
So also have a look at disabling UFO (UDP fragmentation offload) with ethtool.
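A sketch of checking for and disabling UFO on the host NIC; note that recent Linux kernels removed UFO, so ethtool may report it as `[fixed]` or not list it at all:

```shell
# See whether udp-fragmentation-offload is present and toggleable
ethtool -k eno8 | grep -i udp

# Disable it if supported
ethtool -K eno8 ufo off
```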
 
