vm migration error on a cluster

Ting

Member
Oct 19, 2021
Hey,

I have a 4-node cluster. Migrating a Win7 VM from node 5 to node 4 works without problems, but I cannot migrate it from node 5 to node 3. Here is the error output from the task status.

It would be a great help if someone could offer me some suggestions on how to resolve this. Thanks.


error status:

2021-10-30 14:44:44 starting migration of VM 405 to node 'proxmox3' (192.168.0.203)
2021-10-30 14:44:44 starting VM 405 on remote node 'proxmox3'
2021-10-30 14:44:46 [proxmox3] Error: Unknown device type.
2021-10-30 14:44:46 [proxmox3] can't create interface fwln405i0 - command '/sbin/ip link add name fwln405i0 mtu 1500 type veth peer name fwpr405p0 mtu 1500' failed: exit code 2
2021-10-30 14:44:46 [proxmox3]
2021-10-30 14:44:46 [proxmox3] kvm: -netdev type=tap,id=net0,ifname=tap405i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on: network script /var/lib/qemu-server/pve-bridge failed with status 512
2021-10-30 14:44:46 [proxmox3] start failed: QEMU exited with code 1
2021-10-30 14:44:47 ERROR: online migrate failure - remote command failed with exit code 255
2021-10-30 14:44:47 aborting phase 2 - cleanup resources
2021-10-30 14:44:47 migrate_cancel
2021-10-30 14:44:48 ERROR: migration finished with problems (duration 00:00:04)
TASK ERROR: migration problems
 
Hi Moayad,

Here you go, thanks for your help.

:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-5-pve)
pve-manager: 7.0-13 (running version: 7.0-13/7aa7e488)
pve-kernel-helper: 7.1-2
pve-kernel-5.11: 7.0-8
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
ceph: 16.2.6-pve2
ceph-fuse: 16.2.6-pve2
corosync: 3.1.5-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve1
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-10
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-12
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.11-1
proxmox-backup-file-restore: 2.0.11-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.1-1
pve-docs: 7.0-5
pve-edk2-firmware: 3.20210831-1
pve-firewall: 4.2-4
pve-firmware: 3.3-2
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.0.0-4
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-16
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1



qm config 405:
root@proxmox2:~# qm config 405
bootdisk: virtio0
cores: 4
cpu: qemu64
memory: 15258
name: Ting-VM
net0: virtio=A2:1A:66:E5:75:D9,bridge=vmbr3,firewall=1
numa: 0
onboot: 1
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=bb12d587-38bd-4fe6-805b-9c6ab80e0b59
sockets: 2
vga: qxl
virtio0: ssd_vm:vm-405-disk-0,size=100G
vmgenid: af9cb5ac-6dfc-4735-ad8d-bbbd96188529
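One thing worth cross-checking in the config above: `net0` is attached to `bridge=vmbr3` with `firewall=1`, and the migration can only start the VM if that bridge exists on the target node too. A small sketch of that check, assuming shell access on proxmox3 (`vmbr3` is taken from the config above):

```shell
# On the target node (proxmox3): verify the bridge the VM's net0 expects.
if ip link show vmbr3 >/dev/null 2>&1; then
    echo "vmbr3 exists on this node"
else
    echo "vmbr3 is missing on this node - VM 405 cannot start here"
fi

# Inspect how (or whether) vmbr3 is defined on this node.
grep -A 4 'iface vmbr3' /etc/network/interfaces 2>/dev/null \
    || echo "no vmbr3 stanza in /etc/network/interfaces"
```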
 
Did you ever get a resolution to this? I have a two-node cluster and I am having the same error.
 
Hi,

I believe I resolved it myself, but I do not remember exactly what I did. You can try the following:

1. Find out which network link you are using for migration; it could be the IP of the Proxmox admin interface or the cluster heartbeat link. Try resetting that link; it may resolve the issue.

or

2. Run this test: from the shell of node #2, type `ssh <ip of node #1>` and see whether you get any error message. If you do, search for how to resolve that error based on the message; there are many good posts in this forum.

or

3. Run this test: with your two-node cluster, log on to the GUI of node #2 and try to view the console of a VM on node #1. If the VNC console does not work, search that topic and fix it first.

I believe after the above three steps your migration should work. Sorry, I do not remember exactly what I did, but if I ran into this kind of issue today, these are the three steps I would try.

Hope that helps.
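Steps 1 and 2 above can be sketched as quick shell checks. This is only an illustration: the address 192.168.0.201 stands in for node #1, and the `migration` setting in `/etc/pve/datacenter.cfg` may simply be absent on a default install:

```shell
# Step 1 sketch: see which network Proxmox is configured to use for migration.
# (If no "migration" line exists, the cluster/corosync network is used.)
grep -i migration /etc/pve/datacenter.cfg 2>/dev/null \
    || echo "no explicit migration network set"

# Step 2 sketch: from node #2, test non-interactive SSH to node #1
# (replace 192.168.0.201 with your node #1 address).
ssh -o BatchMode=yes -o ConnectTimeout=5 root@192.168.0.201 true \
    && echo "ssh to node #1 OK" \
    || echo "ssh to node #1 failed - fix host keys / node SSH setup first"
```

If the SSH test fails, fixing that (stale host keys after a reinstall are a common cause) often fixes both the console view in step 3 and the migration itself.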
 
