Migration error: pve-bridge failed with status 6400

SMArt

Hello.


I have a 3-host Proxmox HA cluster. On one host I have 2 VMs that interact with each other via a Linux bridge. This is done for security reasons: traffic between these VMs should not be visible to other hosts on our network. The bridge is not attached to any physical interface, and everything works fine. When I try to migrate both VMs to another node (I created the same bridge with the same name and parameters on it), I get this error:

Code:
network script /var/lib/qemu-server/pve-bridge failed with status 6400

It doesn't matter whether I start the task from the GUI or the CLI.

Please help me solve the problem. Thank you in advance.

Code:
task started by HA resource agent
2021-04-25 15:10:34 starting migration of VM 102 to node 'pve4' (*.*.*.*)
2021-04-25 15:10:34 starting VM 102 on remote node 'pve4'
2021-04-25 15:10:35 [pve4] no physical interface on bridge 'vmbr1'
2021-04-25 15:10:36 [pve4] kvm: network script /var/lib/qemu-server/pve-bridge failed with status 6400
2021-04-25 15:10:36 [pve4] start failed: QEMU exited with code 1
2021-04-25 15:10:36 ERROR: online migrate failure - remote command failed with exit code 255
2021-04-25 15:10:36 aborting phase 2 - cleanup resources
2021-04-25 15:10:36 migrate_cancel
2021-04-25 15:10:36 ERROR: migration finished with problems (duration 00:00:03)
TASK ERROR: migration problems
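For completeness, the isolated NIC on each of these VMs is attached to that bridge roughly like this (a sketch only; the exact qm config of VM 102 is posted further down):
Code:
# second NIC on the internal-only bridge, VLAN tag 666, rate limit 1000 MB/s
qm set 102 -net1 virtio,bridge=vmbr1,tag=666,rate=1000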
 
Hi,
I'm guessing the VM has a NIC attached to vmbr1 and there is no bridge with that name available on the target node?

EDIT: Sorry, missed a bit when reading. Is the bridge up on the target node? Please share the output of pveversion -v, qm config 102, and the network configuration for vmbr1 on both source and target.
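A quick way to check the bridge state on the target node (a sketch; vmbr1 is the bridge name from the error above):
Code:
# bridge details, including state and whether vlan_filtering is active
ip -d link show vmbr1
# VLANs currently configured on the bridge
bridge vlan show dev vmbr1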
 
Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.73-1-pve)
pve-manager: 6.3-2 (running version: 6.3-2/22f57405)
pve-kernel-5.4: 6.3-1
pve-kernel-helper: 6.3-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-6
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.3-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-1
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-1
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-4.15: 5.4-19
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-4.15.18-30-pve: 4.15.18-58
pve-kernel-4.15.18-10-pve: 4.15.18-32
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-9
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.1-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-1
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-5
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
Code:
auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
Code:
agent: 1
balloon: 2048
boot: cdn
bootdisk: scsi1
cores: 8
ide2: none,media=cdrom
memory: 24576
name: NewTerm
net0: e1000=9A:39:2E:ED:2C:8B,bridge=vmbr0
net1: virtio=32:76:A6:19:9C:C1,bridge=vmbr1,rate=1000,tag=666
numa: 1
onboot: 1
ostype: win10
scsi1: RS-VMStore:102/vm-102-disk-0.qcow2,size=200G
scsihw: virtio-scsi-pci
smbios1: uuid=cc635ba6-3315-45e9-8851-a8607c6af13c
sockets: 2
virtio0: RS-VMStore:102/vm-102-disk-1.qcow2,size=200G
virtio1: RS-VMStore:102/vm-102-disk-2.qcow2,size=1524G
vmgenid: e6a234b7-0904-43ad-a0a3-77f8d2b91aca
vmbr1 is enabled and running on the target node.
 
Note that migration from a newer to an older version is generally not supported. Was the configuration for vmbr1 on the target recently changed? If yes, was the interface reloaded? You can check on the target whether the bridge is actually vlan-aware:
Code:
cat /sys/class/net/vmbr1/bridge/vlan_filtering
should output 1.
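If it shows 0, the config change probably has not been applied yet. A sketch of how to reload the bridge (ifreload -a requires ifupdown2; with classic ifupdown you can take the bridge down and up again):
Code:
# with ifupdown2
ifreload -a
# with classic ifupdown (brief interruption for guests on the bridge)
ifdown vmbr1 && ifup vmbr1
# verify
cat /sys/class/net/vmbr1/bridge/vlan_filtering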
 
Thank you.

I manually restarted vmbr1 with ifdown vmbr1 && ifup vmbr1, and now the migration works without errors.
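For anyone landing here with the same error, the check suggested above plus the restart that resolved it look roughly like this (run on the node that refuses to start the VM):
Code:
# check whether the bridge is actually vlan-aware (should print 1)
cat /sys/class/net/vmbr1/bridge/vlan_filtering
# if not, restart the bridge so the config from /etc/network/interfaces is applied
ifdown vmbr1 && ifup vmbr1
# re-check
cat /sys/class/net/vmbr1/bridge/vlan_filtering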
 
