[SOLVED] Cross-cluster Migration Leaves VM on Old Host Powered Off but Still "Migrate Locked"

linux

Hi there,

Just checking if this is expected behaviour. I know there is --delete, which removes the source VM on successful completion; however, the documentation suggests that if it is not used, the source VM should simply be stopped.
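
For context, the move was done with qm remote-migrate. A rough sketch of such an invocation follows - the VMIDs, endpoint, API token, storage and bridge names below are placeholders, not my actual values:

Bash:
# sketch only - token, host, fingerprint, storage and bridge are placeholders
qm remote-migrate 225 225 \
  'apitoken=PVEAPIToken=root@pam!migrate=<secret>,host=<target-host>,fingerprint=<cert-fingerprint>' \
  --target-bridge <target-bridge> --target-storage <target-storage> \
  --online
# adding --delete would remove the source VM after successful completion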

So with the job finishing successfully as below, it seems odd that the VMs are still locked, shown with the paper-plane symbol, and cannot be deleted due to the migrate-lock status.

Bash:
2023-06-08 21:01:13 migration active, transferred 5.1 GiB of 8.0 GiB VM-state, 111.7 MiB/s
tunnel: done handling forwarded connection from '/run/qemu-server/225.migrate'
2023-06-08 21:01:14 average migration speed: 161.0 MiB/s - downtime 49 ms
2023-06-08 21:01:14 migration status: completed
all 'mirror' jobs are ready
drive-scsi0: Completing block job_id...
drive-scsi0: Completed successfully.
tunnel: done handling forwarded connection from '/run/qemu-server/225_nbd.migrate'
drive-scsi0: mirror-job finished
2023-06-08 21:01:15 stopping NBD storage migration server on target.
tunnel: -> sending command "nbdstop" to remote
tunnel: <- got reply
tunnel: -> sending command "resume" to remote
tunnel: <- got reply
tunnel: -> sending command "unlock" to remote
tunnel: <- got reply
tunnel: -> sending command "quit" to remote
tunnel: <- got reply
2023-06-08 21:01:17 migration finished successfully (duration 00:40:14)

The system is close to up-to-date, with the same versions on both hosts:

Bash:
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-3
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-13
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-4-pve: 5.13.19-9
pve-kernel-5.4.166-1-pve: 5.4.166-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-1
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.6
libpve-storage-perl: 7.4-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.7.0
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-2
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1

Image showing the VMs in migrate-lock status:

[Attachment: Screenshot 2023-06-08 at 9.13.48 pm.png]

The VMs are powered on and functional on the gaining host, which is great for an experimental feature!

It would just be nice not to have to dig around removing files by hand. Should it just be a matter of removing the config / lock file for the VM?

Thanks,
Linux
 
Hi,
you can unlock the VM with qm unlock <ID> and then remove the VMs. If you don't want to keep the original VM around, use the --delete option for the remote migration command.
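
For example, with VMID 225 from the log above (adjust to your own VMIDs, and double-check the VM is healthy on the new cluster before removing the source copy):

Bash:
qm unlock 225    # clear the leftover migrate lock on the source node
qm destroy 225   # then remove the stopped source VM and its config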
 
Thanks for that, so it's intentional behaviour for it to leave the VM on the old node in a locked state?
Yes - as a sort of "marker" that these guests have been migrated away and likely shouldn't be started anymore on the source cluster (the lock also serves as a safeguard against starting ;)). Unfortunately, we don't have a lock type that allows deletion without first unlocking.
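
That marker is visible in the guest config on the source node; for example, with VMID 225:

Bash:
# show the leftover guest config on the source node
qm config 225
# expect a line like "lock: migrate" until the VM is unlocked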
 
Sounds like it's a good safety net! It makes sense to have it that way.

I guess I interpreted that icon as a VM in-flight, but it was a first-time experience, so now I know the behaviour is normal. :)
 
Yeah, I can see how re-using the lock value (and thus the icon) there can be confusing. Maybe we will use a different value in the future; then we could also special-case that lock to be ignored on deletion, for example.
 