I can interact with my VM fine in the console, but I cannot migrate it to another node in the cluster. My other VMs do not have this problem.
The task log shows:
Code:
task started by HA resource agent
2023-12-15 10:49:35 starting migration of VM 106 to node 'cobra-s-pm01' (10.1.0.10)
2023-12-15 10:49:36 found local disk 'SAN-VMStorage:vm-106-disk-3' (attached)
2023-12-15 10:49:36 found generated disk 'SAN-VMStorage:vm-106-disk-4' (in current VM config)
2023-12-15 10:49:36 found local disk 'SAN-VMStorage:vm-106-disk-5' (attached)
2023-12-15 10:49:36 can't migrate local disk 'SAN-VMStorage:vm-106-disk-4': can't get size of '/dev/VMStorage/vm-106-disk-4': Failed to find logical volume "VMStorage/vm-106-disk-4"
2023-12-15 10:49:36 ERROR: Problem found while scanning volumes - can't migrate VM - check log
2023-12-15 10:49:36 aborting phase 1 - cleanup resources
2023-12-15 10:49:36 ERROR: migration aborted (duration 00:00:02): Problem found while scanning volumes - can't migrate VM - check log
TASK ERROR: migration aborted
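My first thought is to compare what the VM config references against what LVM actually has on that node. Something like this should show whether vm-106-disk-4 is a dangling config entry or a genuinely missing LV (assuming the storage is plain LVM on a volume group named VMStorage, which is what the /dev/VMStorage/... path in the log suggests):

Code:
# which config key references the missing disk?
qm config 106 | grep disk-4

# list the logical volumes that actually exist in the volume group
lvs VMStorage

If the LV really is gone and disk-4 is just a leftover reference, I'm guessing qm set 106 --delete on whatever key the grep turns up would clean it out, but I'd like confirmation before deleting anything.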
Likewise, when I shut off the VM, HA fails to start it up with the following error:
Code:
TASK ERROR: can't activate LV '/dev/VMStorage/vm-106-disk-4': Failed to find logical volume "VMStorage/vm-106-disk-4"
Trying to start it manually with qm start 106 gives:
Code:
service 'vm:106' in error state, must be disabled and fixed first
command 'ha-manager set vm:106 --state started' failed: exit code 255
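If I understand the HA manager correctly, a service in error state has to be disabled before it can be started again, so once the disk issue is sorted out the recovery would presumably be:

Code:
# clear the error state, then hand the service back to HA
ha-manager set vm:106 --state disabled
ha-manager set vm:106 --state started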
I'm not sure where to look for this. It should be noted that this VM is just a fresh install of the OS, so it's not critical; I already have a new VM spinning up as a replacement in case it's the VM that's screwed. I would like to know if it is the storage, in case I should expect this problem again with a future VM.
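To rule out the storage layer before I put the replacement VM on it, my plan is to sanity-check the volume group and its physical volumes, roughly like this (again assuming plain LVM on a VG named VMStorage):

Code:
# overall health of the volume group and the PVs backing it
vgs VMStorage
pvs

# include hidden/internal volumes in case disk-4 left debris behind
lvs -a VMStorage

If those all look clean, I'll assume this was a one-off with this VM rather than a problem with the SAN.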