[SOLVED] Can't pass VLAN to Windows Virtual Machine

Oct 19, 2023
Hello,
I have a Windows VM that needs a VLAN passed through to it so it can get DHCP on that VLAN. vmbr0 is set as VLAN aware and I have set the VLAN ID on the VM's virtual adapter. The switch is also passing the VLANs through to Proxmox. It is not working. Thanks in advance.
 
Hi,
please share your VM config (qm config <VMID> --current) as well as the network configuration on the Proxmox VE host (cat /etc/network/interfaces). Try setting a static IP in the same subnet inside the Windows VM for now and test your network connectivity after that. Can you ping the Proxmox VE host from the Windows VM? Can you ping the DHCP server?
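
For example, something along these lines (the VM ID and target addresses are placeholders, adjust them to your setup):

# on the Proxmox VE host
qm config <VMID> --current
cat /etc/network/interfaces

# inside the Windows VM, after setting a static IP in the VLAN's subnet
ping <Proxmox-VE-host-IP>
ping <DHCP-server-IP>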
 
Hello Chris,
Thanks for the reply. I will share my VM config when I get back to work tomorrow.

I have already tried a static IP within the Windows Server VM with no luck. I can't ping anything with the static IP set either.

We will investigate tomorrow. Thanks for your help!

Setup info:
1: 4 node cluster
2: Ceph over an isolated 25Gb network
3: All Enterprise NVME SSD
4: Cluster network is LACP, 2x 10Gb ports per node
5: 10Gb = Intel
6: 25Gb = Broadcom Extreme
 
PVE HOST

auto lo
iface lo inet loopback

iface enp201s0 inet manual

auto enp193s0f3
iface enp193s0f3 inet manual

iface enp65s0f0 inet manual

iface enp65s0f1 inet manual

auto enp1s0f0np0
iface enp1s0f0np0 inet static
address 10.101.0.22/16
mtu 9000
#CEPH Storage Network

auto enp1s0f1np1
iface enp1s0f1np1 inet manual

auto enp1s0f2np2
iface enp1s0f2np2 inet manual

auto enp1s0f3np3
iface enp1s0f3np3 inet manual

auto enp193s0f0
iface enp193s0f0 inet manual

auto enp193s0f1
iface enp193s0f1 inet manual

iface enp193s0f2 inet manual

iface enx9efaee8e922d inet manual

iface enxda664113bd79 inet manual

auto bond0
iface bond0 inet manual
bond-slaves enp193s0f0 enp193s0f1
bond-miimon 100
bond-mode 802.3ad
mtu 9000
#Cluster Network Bond

auto vmbr0
iface vmbr0 inet static
address 10.40.0.61/24
gateway 10.40.0.254
bridge-ports bond0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
mtu 9000
#Cluster Network Bridge

VM CONFIG
agent: 1
bios: ovmf
boot: order=scsi0;ide2;net0;ide0
cores: 4
cpu: x86-64-v2-AES
efidisk0: Ceph-Storage:vm-106-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
ide0: none,media=cdrom
ide2: none,media=cdrom
machine: pc-q35-8.0
memory: 4096
meta: creation-qemu=8.0.2,ctime=1691076360
name: VLAN-TEST
net0: virtio=2E:51:11:6D:91:73,bridge=vmbr0,firewall=1,tag=85
numa: 0
ostype: win11
protection: 1
scsi0: Ceph-Storage:vm-106-disk-1,discard=on,iothread=1,size=32G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=485b44bc-f1ec-480b-a5f1-737389b00d3b
sockets: 1
tags: ws2022
tpmstate0: Ceph-Storage:vm-106-disk-2,size=4M,version=v2.0
vga: virtio
vmgenid: b547105f-b1a6-40fa-8406-7c0ec80859c0
 
Hi, thanks for sharing your configuration, this looks fine to me. So my guess is that you may have run into the issue where the bond NIC and the bridge do not get assigned the same MAC address [0]. Please share the output of ip a and attach the systemd journal since boot (journalctl -b > journal.txt).

Further, please share your pveversion -v. As mentioned in the linked thread [0], a downgrade of the ifupdown2 package solved the issue for affected users; you could try that as a workaround.

[0] https://forum.proxmox.com/threads/133152/
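
For reference, a quick way to check for that symptom is to compare the MAC addresses of the bond and the bridge on the host; normally they should be identical, with the issue from [0] they end up different:

ip link show bond0
ip link show vmbr0
# the link/ether line of both interfaces should show the same MAC address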
 
ip a Output:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp1s0f0np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether 00:62:0b:61:17:10 brd ff:ff:ff:ff:ff:ff
inet 10.101.0.22/16 scope global enp1s0f0np0
valid_lft forever preferred_lft forever
inet6 fe80::262:bff:fe61:1710/64 scope link
valid_lft forever preferred_lft forever
3: enp193s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
link/ether 8a:13:42:d2:83:f3 brd ff:ff:ff:ff:ff:ff permaddr f0:b2:b9:08:c9:08
4: enp1s0f1np1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 00:62:0b:61:17:11 brd ff:ff:ff:ff:ff:ff
5: enp1s0f2np2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 00:62:0b:61:17:12 brd ff:ff:ff:ff:ff:ff
6: enp193s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
link/ether 8a:13:42:d2:83:f3 brd ff:ff:ff:ff:ff:ff permaddr f0:b2:b9:08:c9:09
7: enp1s0f3np3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 00:62:0b:61:17:13 brd ff:ff:ff:ff:ff:ff
8: enp193s0f2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether f0:b2:b9:08:c9:0a brd ff:ff:ff:ff:ff:ff
9: enp193s0f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether f0:b2:b9:08:c9:0b brd ff:ff:ff:ff:ff:ff
10: enp65s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 40:a6:b7:c0:11:20 brd ff:ff:ff:ff:ff:ff
11: enp65s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 40:a6:b7:c0:11:21 brd ff:ff:ff:ff:ff:ff
12: enp201s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 74:56:3c:5f:0f:fd brd ff:ff:ff:ff:ff:ff
13: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 8a:13:42:d2:83:f3 brd ff:ff:ff:ff:ff:ff
14: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether 8a:13:42:d2:83:f3 brd ff:ff:ff:ff:ff:ff
inet 10.40.0.61/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::8813:42ff:fed2:83f3/64 scope link
valid_lft forever preferred_lft forever
15: tap307i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr307i0 state UNKNOWN group default qlen 1000
link/ether 5e:9f:02:ab:28:e8 brd ff:ff:ff:ff:ff:ff
16: fwbr307i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether ee:d2:98:f8:27:3c brd ff:ff:ff:ff:ff:ff
17: fwpr307p0@fwln307i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 26:3c:c4:6c:48:fe brd ff:ff:ff:ff:ff:ff
18: fwln307i0@fwpr307p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr307i0 state UP group default qlen 1000
link/ether 0e:cf:cd:e0:43:89 brd ff:ff:ff:ff:ff:ff
24: tap303i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr303i0 state UNKNOWN group default qlen 1000
link/ether da:ce:6f:6e:e7:d2 brd ff:ff:ff:ff:ff:ff
25: fwbr303i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether 9a:4b:bb:dd:6a:36 brd ff:ff:ff:ff:ff:ff
26: fwpr303p0@fwln303i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 3a:15:6a:e3:4a:d9 brd ff:ff:ff:ff:ff:ff
27: fwln303i0@fwpr303p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr303i0 state UP group default qlen 1000
link/ether 02:92:d0:ee:ce:5b brd ff:ff:ff:ff:ff:ff
28: tap304i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr304i0 state UNKNOWN group default qlen 1000
link/ether aa:44:2f:36:bb:25 brd ff:ff:ff:ff:ff:ff
29: fwbr304i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether 42:95:58:e6:2a:21 brd ff:ff:ff:ff:ff:ff
30: fwpr304p0@fwln304i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether c6:f3:8f:57:c3:ba brd ff:ff:ff:ff:ff:ff
31: fwln304i0@fwpr304p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr304i0 state UP group default qlen 1000
link/ether ce:91:11:82:3a:e6 brd ff:ff:ff:ff:ff:ff
32: tap305i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr305i0 state UNKNOWN group default qlen 1000
link/ether be:0e:3b:70:a0:b2 brd ff:ff:ff:ff:ff:ff
33: fwbr305i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether 0a:0c:18:dc:e6:39 brd ff:ff:ff:ff:ff:ff
34: fwpr305p0@fwln305i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether a6:70:af:9a:5a:77 brd ff:ff:ff:ff:ff:ff
35: fwln305i0@fwpr305p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr305i0 state UP group default qlen 1000
link/ether 1e:9d:4a:8c:7a:21 brd ff:ff:ff:ff:ff:ff
36: tap306i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr306i0 state UNKNOWN group default qlen 1000
link/ether a2:ea:c4:fe:0d:d6 brd ff:ff:ff:ff:ff:ff
37: fwbr306i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether c2:75:53:a5:63:92 brd ff:ff:ff:ff:ff:ff
38: fwpr306p0@fwln306i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 4a:76:fe:1c:1a:af brd ff:ff:ff:ff:ff:ff
39: fwln306i0@fwpr306p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr306i0 state UP group default qlen 1000
link/ether 66:ef:d1:23:51:66 brd ff:ff:ff:ff:ff:ff
40: tap308i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr308i0 state UNKNOWN group default qlen 1000
link/ether 36:5a:1b:9f:51:ad brd ff:ff:ff:ff:ff:ff
41: fwbr308i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether be:31:18:ab:d5:f3 brd ff:ff:ff:ff:ff:ff
42: fwpr308p0@fwln308i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether fa:4d:57:66:85:55 brd ff:ff:ff:ff:ff:ff
43: fwln308i0@fwpr308p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr308i0 state UP group default qlen 1000
link/ether 3a:e5:a2:62:bd:1d brd ff:ff:ff:ff:ff:ff
44: tap309i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr309i0 state UNKNOWN group default qlen 1000
link/ether fa:b0:be:a3:38:56 brd ff:ff:ff:ff:ff:ff
45: fwbr309i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether 0e:56:b7:43:a5:b7 brd ff:ff:ff:ff:ff:ff
46: fwpr309p0@fwln309i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether d6:bc:2c:e6:8c:df brd ff:ff:ff:ff:ff:ff
47: fwln309i0@fwpr309p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr309i0 state UP group default qlen 1000
link/ether ce:0f:f8:1a:d4:6f brd ff:ff:ff:ff:ff:ff
48: tap302i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr302i0 state UNKNOWN group default qlen 1000
link/ether d2:3d:2f:60:c3:59 brd ff:ff:ff:ff:ff:ff
49: fwbr302i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether c2:b2:c5:ac:50:18 brd ff:ff:ff:ff:ff:ff
50: fwpr302p0@fwln302i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 4a:7a:47:08:7c:ba brd ff:ff:ff:ff:ff:ff
51: fwln302i0@fwpr302p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr302i0 state UP group default qlen 1000
link/ether 8e:3d:46:f3:52:ea brd ff:ff:ff:ff:ff:ff
56: tap106i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr106i0 state UNKNOWN group default qlen 1000
link/ether de:b1:ed:b0:4f:ca brd ff:ff:ff:ff:ff:ff
57: fwbr106i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether 5e:6f:af:d5:25:c1 brd ff:ff:ff:ff:ff:ff
58: fwpr106p0@fwln106i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether b2:8c:fa:77:8b:1f brd ff:ff:ff:ff:ff:ff
59: fwln106i0@fwpr106p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr106i0 state UP group default qlen 1000
link/ether 7a:01:ce:ef:be:f9 brd ff:ff:ff:ff:ff:ff

PVE VERSION

proxmox-ve: 8.0.2 (running kernel: 6.2.16-6-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
proxmox-kernel-6.2.16-6-pve: 6.2.16-7
proxmox-kernel-6.2: 6.2.16-7
pve-kernel-6.2.16-4-pve: 6.2.16-5
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph: 17.2.6-pve1+3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.3
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.6
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.4
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
openvswitch-switch: residual config
proxmox-backup-client: 3.0.1-1
proxmox-backup-file-restore: 3.0.1-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.2
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-3
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1
 

Attachments

  • journal.zip (69.5 KB)
Is downgrading ifupdown2 going to make me lose connectivity to the node? Can you please be more specific? Debian does not support downgrading packages, and this is a production machine, so I need to be aware of any issues.
 
ifupdown2: 3.2.0-1+pmx4
You are already on the version without this patch https://git.proxmox.com/?p=ifupdown2.git;a=commit;h=a1a0ee382869f52b66ab67b237138b0375183a9e so downgrading will not have any effect. It does not seem to be the same issue; your journal does not contain any error messages like "received packet on bond0 with own address as source address".

Are you able to connect to the network with untagged traffic for the Windows VM? Is the non-default MTU supported by all devices in the network? Otherwise that might be an issue.
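
As a quick sanity check for the MTU question, a non-fragmenting ping from the host can verify that 9000-byte frames actually make it across the path (8972 bytes of payload plus 28 bytes of IP/ICMP headers equals the 9000 MTU configured on vmbr0; the target address is a placeholder):

ping -M do -s 8972 <gateway-or-other-host-IP>

If this fails while a normal ping works, some device in the path is not passing jumbo frames.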
 
I can connect to the network on the untagged traffic, yes. Only when I tag a VLAN does it not have connectivity.
 
You can check the tagged traffic on vmbr0, bond0 and/or the network interfaces via tcpdump, e.g. tcpdump -i bond0 -nn -e vlan.
Maybe that gives you a clue where the traffic is being dropped.
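
For example, assuming the tag 85 from your VM config, you can narrow the capture to that VLAN and, optionally, to DHCP traffic:

tcpdump -i vmbr0 -nn -e vlan 85
tcpdump -i bond0 -nn -e vlan 85 and udp port 67

If the tagged DHCP requests leave via bond0 but no replies ever come back, the traffic is most likely being dropped on the switch side.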
 
I figured it out. An upstream switch (five switches up) was not configured correctly: the VLAN was set to Untagged on the trunk instead of Tagged, and that was causing the issue. Well, thanks for the help and sorry to have wasted your time.
 
No worries, glad you could figure out the issue and that it works now. Please mark the thread as solved so that others can find the solution more easily, thanks!
 
