Changing VLAN tag activates network interface despite link_down=1 flag

meok

New Member
Dec 18, 2024
Summary:
When changing the VLAN tag of a VM network interface that has link_down=1, the interface becomes active despite the link_down flag remaining set.

Steps to Reproduce:
1. Bring the network interface down via CLI: pvesh set /nodes/pve1/qemu/101/config --net0 e1000=BC:24:11:1C:17:17,bridge=vmbr7,link_down=1,tag=10
2. Verify the interface is DOWN in the guest OS
3. Change the VLAN tag in the Proxmox GUI (web interface) for VM 101 (in my case I switched from VLAN 10 to VLAN 20)
4. Check the interface state in the guest OS

Note: as you can see, I bring the interface down via the CLI because I need to do it from a script, while I change the VLAN tag in the GUI, since that action is expected to be performed by the hypervisor operator in the graphical environment.
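For reference, a minimal sketch of the scripted part (node name, VM ID, MAC, bridge and VLAN are just the values from my example above):

# disable the link on net0 of VM 101, keeping the existing model/MAC/bridge/VLAN
pvesh set /nodes/pve1/qemu/101/config --net0 e1000=BC:24:11:1C:17:17,bridge=vmbr7,link_down=1,tag=10
# confirm the flag is now stored in the VM config
qm config 101 | grep ^net0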

Expected Behavior:
- Interface should remain DOWN
- link_down=1 flag should keep interface disabled

Actual Behavior:
- Interface becomes UP/active
- link_down=1 flag is still present in config but ignored
- Inconsistent state: config says DOWN but the interface is UP (see the quick check below)
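A quick way to see the mismatch (rough sketch, assuming a Linux guest; the interface name inside the guest will differ):

# on the Proxmox host: the config still shows link_down=1
qm config 101 | grep ^net0
# inside the guest: the link is nevertheless reported as UP
ip -br link show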

Environment:
- Proxmox VE version: 8.4.2 (I cannot upgrade until the end of the year because we are in a time window where shutdowns are not allowed)
- Kernel version: 6.8.8-3-pve
- ifupdown2 version: 3.2.0-1+pmx9

Impact:
- Security: disabled interfaces can become active unexpectedly
- Network isolation: VMs might gain unexpected network access
- Automation: scripts relying on link_down fail (the flag is set but can be invalidated by a manual VLAN tag change)

Workaround:
Manually re-apply the link_down flag after the VLAN change:
qm set VMID -net0 ...,link_down=0
qm set VMID -net0 ...,link_down=1
(Or the equivalent in the GUI/web environment: remove the "Disconnect" flag, apply, then restore it.)
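For the example VM above this would be roughly (assuming the tag has already been changed to 20 in the GUI; the MAC and bridge are just my values):

qm set 101 -net0 e1000=BC:24:11:1C:17:17,bridge=vmbr7,link_down=0,tag=20
qm set 101 -net0 e1000=BC:24:11:1C:17:17,bridge=vmbr7,link_down=1,tag=20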

Is this a known bug or am I missing something?
I didn't test this with other interface types (e.g. virtio) or other parameters, but the issue may be present in those cases as well.
My version is also a bit old, but I need to keep it until the end of the year; maybe this is already fixed, but I found nothing about it when searching.
 
Couldn't reproduce it, even with PVE 8.4... Is there anything interesting in the logs (journalctl -b)? Which guest OS are you using? Could you paste your VM config?

Edit: hmm this should be fixed in qemu-server 8.2: https://git.proxmox.com/?p=qemu-server.git;a=commit;h=feedc2f48efdf91e53777baa9b001a0101047048 Could you post pveversion -v?
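(If it helps with the logs: a rough way to narrow journalctl -b down would be to filter it to the PVE services, e.g. journalctl -b -u pvedaemon -u pveproxy, assuming the relevant messages come from those daemons.)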


proxmox-ve: 8.2.0 (running kernel: 6.8.8-3-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.8-3
proxmox-kernel-6.8.8-3-pve-signed: 6.8.8-3
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
pve-kernel-5.4: 6.4-15
pve-kernel-5.4.174-2-pve: 5.4.174-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx9
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.4.2
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.13-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.0-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.2
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1

It seems my qemu-server is OK, going by the version...
But in any case, the commit describes exactly what I'm experiencing.
Sadly I cannot update the environment immediately, but I can live with this issue for a month, hoping to find it effectively solved after the major update.
Let me know, br,
 
OK, good to know. I cannot confirm until I update (early next year), but since this is known and solved, I think this thread can be marked as such. Thank you very much!
 