PVE 6 - GUI: one-way migrations, CLI: two-way

Jesster

Hi Everyone,

I have a simple 2-node PVE 6 cluster (no HA) and am running into an issue I've never seen before: in the GUI, I am unable to migrate (or live migrate) from the peer node "blade07-ibc02" to "blade06-ibc02". It only happens in this direction, and only in the GUI.

GUI:
  • Can migrate VMs (shared or local storage) from blade06-ibc02 to blade07-ibc02
  • Migrate button is not clickable if the VM is on blade07-ibc02 and I try to migrate it to blade06-ibc02

CLI:
  • Can migrate VMs from either node to its peer without any issues (example commands below).
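
For reference, the commands I'm using look like this (VMID 814999 is the test VM shown further down; the flags on the second command are only needed for live migration of a VM with node-local disks):

Code:
# Offline migration, works in both directions for me:
qm migrate 814999 blade06-ibc02

# Live migration; --with-local-disks is required when the running VM
# has disks on node-local storage such as local-lvm:
qm migrate 814999 blade06-ibc02 --online --with-local-disks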




blade06-ibc02 version info:


Code:
proxmox-ve: 6.0-2 (running kernel: 5.0.21-3-pve)
pve-manager: 6.0-9 (running version: 6.0-9/508dcee0)
pve-kernel-5.0: 6.0-9
pve-kernel-helper: 6.0-9
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.21-2-pve: 5.0.21-7
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.12-pve1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-5
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-9
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-65
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-8
pve-cluster: 6.0-7
pve-container: 3.0-7
pve-docs: 6.0-7
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.1-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-9
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve1



blade07-ibc02 version info:


Code:
proxmox-ve: 6.0-2 (running kernel: 5.0.21-3-pve)
pve-manager: 6.0-9 (running version: 6.0-9/508dcee0)
pve-kernel-5.0: 6.0-9
pve-kernel-helper: 6.0-9
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.12-pve1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-5
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-9
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-65
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-8
pve-cluster: 6.0-7
pve-container: 3.0-7
pve-docs: 6.0-7
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.1-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-9
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve1


CLI output using qm migrate:

Code:
root@blade07-ibc02:~# qm migrate 814999 blade06-ibc02
2019-11-01 06:16:18 use dedicated network address for sending migration traffic (192.168.0.116)
2019-11-01 06:16:18 starting migration of VM 814999 to node 'blade06-ibc02' (192.168.0.116)
2019-11-01 06:16:19 found local disk 'local-lvm:vm-814999-disk-0' (in current VM config)
2019-11-01 06:16:19 copying disk images
  WARNING: Device /dev/dm-7 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/dm-8 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/dm-7 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/dm-8 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/dm-7 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/dm-8 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/dm-7 not initialized in udev database even after waiting 10000000 microseconds.
  Logical volume "vm-814999-disk-0" created.
  WARNING: Device /dev/dm-8 not initialized in udev database even after waiting 10000000 microseconds.
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 92.0047 s, 11.7 MB/s
58+32688 records in
58+32688 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 10.696 s, 100 MB/s
  Logical volume "vm-814999-disk-0" successfully removed
2019-11-01 06:17:53 migration finished successfully (duration 00:01:35)
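
If I understand the GUI correctly, the Migrate button's enabled state comes from a preconditions check rather than from the migration itself. Assuming this build exposes that endpoint (an assumption on my part), its answer can be queried directly with pvesh, alongside the cluster and daemon state the GUI depends on:

Code:
# Preconditions the migrate dialog evaluates (assumption: this
# endpoint is available in this qemu-server build):
pvesh get /nodes/blade07-ibc02/qemu/814999/migrate --target blade06-ibc02

# Cluster membership/quorum and the daemons that serve and feed the GUI:
pvecm status
systemctl status pve-cluster pvedaemon pveproxy pvestatd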


Sample VM Config:
Code:
root@blade06-ibc02:~# qm config 814999
bootdisk: scsi0
cores: 1
ide2: none,media=cdrom
memory: 512
name: test-migrations
numa: 0
ostype: l26
scsi0: local-lvm:vm-814999-disk-0,cache=writeback,size=1G
scsihw: virtio-scsi-pci
smbios1: uuid=4c2c2b34-867c-48f7-b061-2acdf0d91e61
sockets: 1
vmgenid: 5e73276d-2e03-4ef5-a6e4-fdf50be6fd38


storage.cfg:
Code:
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl
        maxfiles 9
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
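
Note that local-lvm is an lvmthin pool on each node's own "pve" volume group, i.e. node-local storage, which is why the migration log above shows the disk image being copied and a fresh logical volume created on the target. To double-check how each node sees its storage:

Code:
# Run on both blades; shows status and usage of every configured storage:
pvesm status

# List the guest images held by the local thin pool:
pvesm list local-lvm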


(Screenshot attached: pve-gui-unable-to-migrate.png)


So just to be clear: I can migrate VMs over the CLI in each direction, but with the GUI only "blade06-ibc02 --> blade07-ibc02" works, while "blade07-ibc02 --> blade06-ibc02" is blocked.

Any suggestions appreciated, thanks!
 
Do you see any errors in the JavaScript console?
Can you try to select another node, and then blade06 again?
 
Thanks for reaching out, Dominik. I got lucky with this one: a simple reboot of blade06-ibc02 fixed it for me. My hunch is that an earlier apt dist-upgrade had been applied without a reboot afterwards.
 
Spoke too soon: it was working last week, and now it's the same issue again.

Hey @dcsapak, I will check for errors in the JavaScript console. I am unable to select another node (it's a 2-node cluster, no HA).
 
I'm not a whiz with Chrome DevTools, but I didn't see any JavaScript errors while reproducing this.
I have since added more nodes to the cluster, so the Migrate drop-down now lets me pick other targets. If I toggle it back and forth, after about 30 seconds the Migrate button finally "unlocks" and I can click it.

It doesn't seem to matter if I use local or shared storage.
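
One way to tell whether the dialog is waiting on a slow API call (a sketch, assuming the delay comes from a request rather than pure UI state) is to follow the API daemons on the node the browser is logged in to while reproducing this:

Code:
# Follow both API daemons live while toggling the Migrate dialog:
journalctl -f -u pveproxy -u pvedaemon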
 
