VMs unable to communicate directly on same subnet / VLAN

DragonHoc

Renowned Member
Mar 9, 2016
My setup is:
pfSense --> UniFi switch --> Proxmox
The UniFi port is configured as a trunk, and all VLANs are configured in pfSense.

Each VM has multiple NICs, each NIC tagged with a different VLAN in the Proxmox NIC settings.

I am having trouble getting two VMs tagged with the same VLAN to talk to each other directly across vmbr0; traffic routes out via pfSense and back instead.

I am unable to get vmbr0 to show anything other than tag 1:
Code:
bridge vlan show | grep vmbr0
vmbr0             1 PVID Egress Untagged
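If I understand the VLAN-aware bridge correctly (this is an assumption on my part), vmbr0's own entry may only ever show PVID 1, with tagged guest traffic appearing on the per-VM tap ports instead. The per-port view can be checked like this (the tap name is an example, derived from <vmid>i<net-index>):

```shell
# Show VLAN membership for all bridge ports; a VM NIC configured with
# tag=50 should appear as its tap port carrying VLAN 50.
bridge vlan show

# Inspect a single VM NIC's port (interface name is hypothetical):
bridge vlan show dev tap222i0
```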

My network is set up as simply as it can be:

Code:
auto lo
iface lo inet loopback

iface enp87s0 inet manual

iface enp89s0 inet manual

iface enp2s0f0np0 inet manual

iface enp2s0f1np1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.100.0.126/24
        gateway 10.100.0.1
        bridge-ports enp87s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-100

iface wlp90s0 inet manual

source /etc/network/interfaces.d/*

I have also tried creating a dedicated linux bridge per VLAN but that also has the same issue.
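For reference, what I mean by a dedicated bridge per VLAN is roughly this /etc/network/interfaces fragment (interface names and the VLAN ID are examples, not my exact config):

```text
# VLAN 50 subinterface of the physical NIC, used as the bridge port
auto enp87s0.50
iface enp87s0.50 inet manual

# Dedicated bridge for VLAN 50; VM NICs attach here without a tag
auto vmbr50
iface vmbr50 inet manual
        bridge-ports enp87s0.50
        bridge-stp off
        bridge-fd 0
```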

Any suggestions?
 
What does the network configuration of the two VMs look like?
Code:
ip a
ip r

What does the configuration of the VMs look like?
Code:
qm config <vmid>
 
VM 1:

Code:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:fb:59:6e brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 172.18.50.15/24 brd 172.18.50.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::be24:11ff:fefb:596e/64 scope link
       valid_lft forever preferred_lft forever
3: ens19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:9d:45:cf brd ff:ff:ff:ff:ff:ff
    altname enp0s19
    inet 10.100.10.15/24 brd 10.100.10.255 scope global ens19
       valid_lft forever preferred_lft forever
    inet6 fe80::be24:11ff:fe9d:45cf/64 scope link
       valid_lft forever preferred_lft forever
4: ens20: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether bc:24:11:3e:77:b8 brd ff:ff:ff:ff:ff:ff
    altname enp0s20

$ ip r
default via 172.18.50.1 dev ens18 proto static metric 100
10.100.10.0/24 dev ens19 proto kernel scope link src 10.100.10.15
172.18.50.0/24 dev ens18 proto kernel scope link src 172.18.50.15

Code:
qm config 222
agent: 1
balloon: 0
boot: order=virtio0
cores: 2
cpu: host
memory: 1024
meta: creation-qemu=9.0.2,ctime=1740443568
name: VM-1
net0: virtio=BC:24:11:FB:59:6E,bridge=vmbr0,firewall=1,tag=50
net1: virtio=BC:24:11:9D:45:CF,bridge=vmbr0,firewall=1,tag=10
net2: virtio=BC:24:11:3E:77:B8,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=b5c3fcbf-a3e8-4193-9c51-6b5d3f437fca
sockets: 1
virtio0: local-lvm:vm-222-disk-0,cache=unsafe,discard=on,iothread=1,size=32G
vmgenid: 67be2008-1262-4113-9ef7-a39887f2870e

VM 2:

Code:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 0a:03:d3:73:23:c2 brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 172.18.50.60/24 brd 172.18.50.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::803:d3ff:fe73:23c2/64 scope link
       valid_lft forever preferred_lft forever
3: ens19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:1c:38:ed brd ff:ff:ff:ff:ff:ff
    altname enp0s19
    inet 10.100.10.60/24 brd 10.100.10.255 scope global ens19
       valid_lft forever preferred_lft forever
    inet6 fe80::be24:11ff:fe1c:38ed/64 scope link
       valid_lft forever preferred_lft forever
      
$ ip r
default via 172.18.50.1 dev ens18 proto static metric 100
10.100.10.0/24 dev ens19 proto kernel scope link src 10.100.10.60
172.18.50.0/24 dev ens18 proto kernel scope link src 172.18.50.60

Code:
qm config 560
agent: 1
balloon: 0
bootdisk: virtio0
cores: 6
memory: 6144
name: VM-2
net0: virtio=0A:03:D3:73:23:C2,bridge=vmbr0,firewall=1,tag=50
net1: virtio=BC:24:11:1C:38:ED,bridge=vmbr0,firewall=1,tag=10
numa: 0
onboot: 1
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=715c1d73-3665-416f-9cd7-2109dc5495d2
sockets: 1
tablet: 0
virtio0: pxmx-01-fast:vm-560-disk-0,size=39G
virtio1: pxmx-01-fast:vm-560-disk-1,size=250G
 
I am having trouble getting two VMs tagged with the same VLAN to talk to each other directly across vmbr0; traffic routes out via pfSense and back instead.
How are you checking that this is the case? Are you tracerouting?
Do you possibly have a tcpdump on vmbr0 that shows this behavior?
 
How are you checking that this is the case? Are you tracerouting?
Do you possibly have a tcpdump on vmbr0 that shows this behavior?

A few ways: a packet capture on pfSense on the VLAN interface, and disabling the firewall rule for that interface on pfSense. As soon as it's disabled, I can't access the other VM anymore.

I have also done local tcpdumps on each VM. I will get a capture to show.
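This is the kind of capture I run on the guest (interface name and addresses are taken from the outputs above; the filter itself is just an example):

```shell
# Capture ICMP between the two VLAN 50 addresses on the first VM,
# without name resolution, to see which NIC the ping actually uses.
tcpdump -ni ens18 icmp and host 172.18.50.60
```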

Example from pfSense of one VM pinging the other:

Code:
12:36:24.137610 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 69, length 64
12:36:24.137626 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 69, length 64
12:36:25.138839 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 70, length 64
12:36:25.138851 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 70, length 64
12:36:26.140043 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 71, length 64
12:36:26.140051 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 71, length 64
12:36:27.179657 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 72, length 64
12:36:27.179666 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 72, length 64
12:36:28.181200 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 73, length 64
12:36:28.181214 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 73, length 64
12:36:29.182772 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 74, length 64
12:36:29.182781 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 74, length 64
12:36:30.184262 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 75, length 64
12:36:30.184269 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 75, length 64
12:36:31.186158 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 76, length 64
12:36:31.186169 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 76, length 64
12:36:32.187994 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 77, length 64
12:36:32.188008 IP 172.18.50.60 > 172.18.50.15: ICMP echo reply, id 1721, seq 77, length 64
 
The only thing I can see that could be wrong is that you have the firewall enabled on your VM NICs. If there are rules active, they might isolate the VMs from each other. I don't know if UniFi can do client isolation for Ethernet networks - it certainly can for WLANs.
 
Currently there are no rules set up at all; however, I would like to be able to use the firewall to further isolate VMs down the line.
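For reference, per-VM firewall rules in Proxmox live in /etc/pve/firewall/<vmid>.fw; a hypothetical fragment for that kind of isolation (the VMID, source network, and rule are examples only) could look like:

```text
[OPTIONS]
enable: 1

[RULES]
# Example: allow ICMP from the VLAN 10 subnet; the default policy
# still applies to everything else.
IN ACCEPT -source 10.100.10.0/24 -p icmp
```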
 
Would you mind sending me a tcpdump from the host on vmbr0 while a ping is running?

Code:
tcpdump -i vmbr0 -w output.pcap