Colleagues, I have a problem migrating a virtual machine from one cluster node to another when using lvmthin. The migration appears to run, but the transfer stays at 0%.
Code:
2023-05-16 22:12:49 starting migration of VM 137 to node 'node02' (10.8.6.2)
2023-05-16 22:12:50 found local disk 'local-data-raid:vm-137-disk-0' (in current VM config)
2023-05-16 22:12:50 starting VM 137 on remote node 'node02'
2023-05-16 22:12:52 volume 'local-data-raid:vm-137-disk-0' is 'local-data-raid:vm-137-disk-0' on the target
2023-05-16 22:12:52 start remote tunnel
2023-05-16 22:12:54 ssh tunnel ver 1
2023-05-16 22:12:54 starting storage migration
2023-05-16 22:12:54 scsi0: start migration to nbd:unix:/run/qemu-server/137_nbd.migrate:exportname=drive-scsi0
drive mirror is starting for drive-scsi0
drive-scsi0: transferred 0.0 B of 20.0 GiB (0.00%) in 0s
drive-scsi0: transferred 0.0 B of 20.0 GiB (0.00%) in 1s
drive-scsi0: transferred 0.0 B of 20.0 GiB (0.00%) in 2s
drive-scsi0: transferred 0.0 B of 20.0 GiB (0.00%) in 3s
drive-scsi0: transferred 0.0 B of 20.0 GiB (0.00%) in 4s
drive-scsi0: transferred 0.0 B of 20.0 GiB (0.00%) in 5s
drive-scsi0: transferred 0.0 B of 20.0 GiB (0.00%) in 6s
drive-scsi0: transferred 0.0 B of 20.0 GiB (0.00%) in 7s
drive-scsi0: Cancelling block job
After canceling the task, a virtual disk is left behind on the node that was supposed to receive the virtual machine, and it cannot be deleted because the system reports that it is in use:
Code:
root@node02:~# lvremove /dev/raid/vm-137-disk-0
Logical volume raid/vm-137-disk-0 in use.
Code:
root@node02:~# lsof /dev/mapper/raid-vm--137--disk--0
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kvm 19014 root 31u BLK 253,11 0t0 495 /dev/mapper/../dm-11
Killing the leftover kvm process (kill -9 19014) helps; after that the disk can be removed. The virtual machine itself is also left locked; qm unlock helps there.
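Until the root cause is fixed, the manual cleanup above can be consolidated into one sketch. This is only a workaround, not a fix: the VMID, LV name, and device path are the ones from my logs, so adjust them for your setup, and `lsof -t` is used here just to look up the holder PID instead of hard-coding it:

```shell
#!/bin/sh
# Cleanup sketch for a stale disk left behind by an aborted migration.
# VMID, LV name, and device path are taken from the log above -- adjust as needed.
VMID=137
LV=raid/vm-137-disk-0
DEV=/dev/mapper/raid-vm--137--disk--0

# Find any process still holding the device; `lsof -t` prints bare PIDs only.
PIDS=$(lsof -t "$DEV" 2>/dev/null)
if [ -n "$PIDS" ]; then
    kill -9 $PIDS   # force-stop the leftover kvm process
    sleep 1         # give the kernel a moment to release the device mapping
fi

lvremove -y "$LV"   # remove the orphaned thin volume
qm unlock "$VMID"   # clear the stale lock on the VM config
```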
How can I fix this and get the migration working again?