Hello,
I'm running a 3-node PVE 8.0.4 cluster, and I get the following error when trying to migrate a VM from one host to another:
Code:
task started by HA resource agent
2023-11-21 07:18:52 use dedicated network address for sending migration traffic (10.41.199.33)
2023-11-21 07:18:52 starting migration of VM 100000 to node 'ZRH-GLT-CS03' (10.41.199.33)
2023-11-21 07:18:52 starting VM 100000 on remote node 'ZRH-GLT-CS03'
2023-11-21 07:18:53 [ZRH-GLT-CS03] can't activate LV '/dev/NAS_VG1/vm-100000-disk-0': device-mapper: create ioctl on NAS_VG1-vm--100000--disk--0 LVM-5YfbLpbU7mwS4wUHeM1vAfSMXTyemeIOPEaddO7heGZpJyHRV5D536tNUUlduOzO failed: Device or resource busy
2023-11-21 07:18:53 ERROR: online migrate failure - remote command failed with exit code 255
2023-11-21 07:18:53 aborting phase 2 - cleanup resources
2023-11-21 07:18:53 migrate_cancel
2023-11-21 07:18:54 ERROR: migration finished with problems (duration 00:00:02)
TASK ERROR: migration problems
The VM resides on shared storage (LVM over iSCSI). I do know about the "solution" of running
Code:
dmsetup remove /dev/NAS_VG1/vm-100000-disk-0
but this is quite a pain. What I'm trying to do is to migrate about 30-40 VMs using the bulk migration feature, and most of the VMs fail with an error just like this one. Why is it happening, what can I do to stop it, and wouldn't it be a good idea to run
Code:
dmsetup remove
automatically when this error is triggered, and then retry the power-on? I see no downside to this, but maybe I'm missing something.
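For reference, this is roughly what I do by hand today. It's only a rough sketch: the target node name and VMIDs are just the examples from the log above, and it assumes passwordless root SSH between the cluster nodes:
Code:
#!/bin/bash
# Rough sketch: clear a stale device-mapper entry on the target node,
# then retry the online migration for each VM.
TARGET_NODE="ZRH-GLT-CS03"                     # example target node
for VMID in 100000 100001; do                  # example VMIDs
    DM_NAME="NAS_VG1-vm--${VMID}--disk--0"     # dm name doubles the dashes inside VG/LV names
    # remove the leftover mapping on the target node if it is still present
    ssh "root@${TARGET_NODE}" "dmsetup info ${DM_NAME} >/dev/null 2>&1 && dmsetup remove ${DM_NAME}"
    # retry the online migration
    qm migrate "${VMID}" "${TARGET_NODE}" --online
done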
Thank you.
Later edit: Some machines are managed by HA with a specific group. That group contains a single host, since I want certain machines to stay on certain hosts. This causes an issue with bulk migration: as soon as the VMs are moved to the target host, they are migrated back to the host the HA group dictates. I would say this is unwanted behavior, so as a suggestion, I would love to have a checkbox in the Bulk Migrate dialog saying "Ignore HA groups for this migration", which would set the HA state to ignored for the duration of the migration.
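In the meantime, this is roughly what I do by hand before and after a bulk migration. Again only a rough sketch, assuming the machines are HA resources named vm:<VMID> (the VMIDs below are just examples):
Code:
#!/bin/bash
# Rough sketch: take the HA-managed VMs out of HA placement control,
# run the bulk migration, then hand control back to HA.
VMIDS="100000 100001"                               # example VMIDs
for VMID in ${VMIDS}; do
    ha-manager set "vm:${VMID}" --state ignored     # HA stops acting on this resource
done
# ... run the bulk migration from the GUI (or qm migrate per VM) ...
for VMID in ${VMIDS}; do
    ha-manager set "vm:${VMID}" --state started     # re-enable HA management
done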