[SOLVED] Problem migrating VMs in a Proxmox 9 cluster with VLAN on vmbr0.

Tacioandrade

Renowned Member
Sep 14, 2012
Vitória da Conquista, Brazil
Hello everyone, I have a 3-node cluster with Directory-type storage that came from Proxmox VE 7 and is now on version 9.

I just added a new node, which we call pve03, and we are migrating the VMs back to this host after a reinstall. However, during the installation our analyst left the option to pin the network card names (nic0, nic1, etc.) checked.

The problem is that when we try to migrate a VM that uses vmbr0 with a VLAN tag set on its NIC, the migration fails with an error saying there is no physical interface on bridge vmbr0.

Code:
2025-12-04 21:35:18 starting migration of VM 111 to node 'pve03' (192.168.25.205)
2025-12-04 21:35:18 found local disk 'local-ssd02:111/vm-111-disk-0.qcow2' (attached)
2025-12-04 21:35:18 starting VM 111 on remote node 'pve03'
2025-12-04 21:35:21 [pve03] no physical interface on bridge 'vmbr0'
2025-12-04 21:35:21 [pve03] kvm: -netdev type=tap,id=net0,ifname=tap111i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on: network script /usr/libexec/qemu-server/pve-bridge failed with status 6400
2025-12-04 21:35:21 [pve03] start failed: QEMU exited with code 1
2025-12-04 21:35:21 ERROR: online migrate failure - remote command failed with exit code 255
2025-12-04 21:35:21 aborting phase 2 - cleanup resources
2025-12-04 21:35:21 migrate_cancel
2025-12-04 21:35:22 ERROR: migration finished with problems (duration 00:00:06)
TASK ERROR: migration problems

When I migrate the same VM to another host that still uses the old interface name (eno1), it works perfectly.

I would like to know if anyone else is having this problem.

Here is the pve version:
pveversion -v
proxmox-ve: 9.1.0 (running kernel: 6.17.2-1-pve)
pve-manager: 9.1.2 (running version: 9.1.2/9d436f37a0ac4172)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.2-2-pve-signed: 6.17.2-2
proxmox-kernel-6.17: 6.17.2-2
proxmox-kernel-6.17.2-1-pve-signed: 6.17.2-1
ceph-fuse: 19.2.3-pve2
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.4.1-1+pve1
ifupdown2: 3.3.0-1+pmx11
intel-microcode: 3.20250812.1~deb13u1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.4
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.1.0
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.3
libpve-rs-perl: 0.11.3
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-3
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.1.0-1
proxmox-backup-file-restore: 4.1.0-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.2
pve-cluster: 9.0.7
pve-container: 6.0.18
pve-docs: 9.1.1
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.0.8
pve-i18n: 3.6.5
pve-qemu-kvm: 10.1.2-4
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.1
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1
 
Have you checked the IP configuration of the hypervisor? Does vmbr0 indeed not have an interface attached?
Compare the output of "ip a" and "cat /etc/network/interfaces" across the nodes.

The error says that vmbr0 (which is a bridge) does not have an interface attached.
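If it helps, here is a quick way to compare that across the nodes (just a sketch; run it on each node or via ssh):

Code:
# interfaces currently enslaved to vmbr0 (should list the physical port, e.g. eno1 or nic0)
ip -br link show master vmbr0

# bridge definition on disk
grep -A6 'iface vmbr0' /etc/network/interfaces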


good luck

Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Yes, the Proxmox addressing is correct; it's exactly the same as on the other Proxmox nodes. The only difference is that the network cards are named nic0, nic1, and so on.

I only discovered this problem when I tried to migrate a VM with three network cards: one on the management VLAN and the other two on VLANs 40 and 160.

When I remove the VLAN from the network card, I can migrate the VM to pve03.

Another thing I discovered is that I can't assign a VLAN tag to any network card on pve03; starting the VM always returns the following error:
"no physical interface on bridge 'vmbr0'"

However, when the VM doesn't have any VLANs on vmbr0, it works perfectly.
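For reference, the NIC line of the failing VM looks roughly like this (MAC address and tag are illustrative values, not the real ones):

Code:
qm config 111 | grep ^net
net0: virtio=BC:24:11:AA:BB:CC,bridge=vmbr0,tag=160

Dropping the tag is what lets the VM start on pve03, as described above.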

Code:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: nic0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether b8:ca:3a:f7:15:cf brd ff:ff:ff:ff:ff:ff
    altname enp1s0f0
    altname enxb8ca3af715cf
3: nic1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:ca:3a:f7:15:d0 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f1
    altname enxb8ca3af715d0
4: nic2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:ca:3a:f7:15:d1 brd ff:ff:ff:ff:ff:ff
    altname enp2s0f0
    altname enxb8ca3af715d1
5: nic3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:ca:3a:f7:15:d2 brd ff:ff:ff:ff:ff:ff
    altname enp2s0f1
    altname enxb8ca3af715d2
6: nic4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether f4:e9:d4:af:d1:80 brd ff:ff:ff:ff:ff:ff
    altname enxf4e9d4afd180
7: nic5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether f4:e9:d4:af:d1:82 brd ff:ff:ff:ff:ff:ff
    altname enxf4e9d4afd182
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:ca:3a:f7:15:cf brd ff:ff:ff:ff:ff:ff
    inet 192.168.25.205/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::baca:3aff:fef7:15cf/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever

Code:
cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface nic0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.25.205/24
        gateway 192.168.25.1
        bridge-ports nic0
        bridge-stp off
        bridge-fd 0
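For comparison, the VLAN-aware form of the same bridge (as documented in the Proxmox VE admin guide) would look like the sketch below; I have not verified whether it behaves any differently with the nicX names:

Code:
auto vmbr0
iface vmbr0 inet static
        address 192.168.25.205/24
        gateway 192.168.25.1
        bridge-ports nic0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094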
 
I just reinstalled the host and unchecked the Network PIN option, and after that everything worked perfectly.

I can't say for sure that the Network PIN option is the cause, but I've been working with PVE since 2014 and had ruled out all the other likely causes; the only one left, from my perspective, was the Network PIN.
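For anyone who runs into the same thing, a quick sanity check after the reinstall (purely illustrative) is to confirm which interface name the node ended up with and that it matches the bridge port:

Code:
ip -br link show master vmbr0           # lists the port attached to vmbr0 (e.g. eno1)
grep bridge-ports /etc/network/interfaces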