VM is locked (migrate)

bread-baker

A migration failed, and the KVM will not start. More details on that later.

The KVM was controlled by HA.

I tried to do an online migration; the target system rebooted. Not sure yet if it was a panic or what caused that.

Now, when trying to start the KVM, this was in the log:
Code:
Task started by HA resource agent
TASK ERROR: VM is locked (migrate)


I looked and searched and could not find the lock file.

So I removed the KVM from HA and tried to start it from the CLI, resulting in:

Code:
qm start 2091
VM is locked (migrate)


Any clue as to where the lock file is?
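
(For what it's worth, the lock in Proxmox VE is not a separate file at all: it is a `lock: migrate` line stored in the VM's configuration file under /etc/pve/qemu-server/. A minimal sketch for checking and clearing it, assuming VMID 2091 from this post:)

Code:
# the lock is a line in the VM config file, not a separate lock file
grep lock /etc/pve/qemu-server/2091.conf
# should print something like: lock: migrate
# qm removes that line for you:
qm unlock 2091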
 
I'm getting the same problem, except mine is locked by (backup); I can't boot, back up, or delete the server.
It happened after the host server hard-locked during a backup of said VM (the console displayed kernel timeout issues; image posted in another thread).
Any help appreciated.
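
(A note for the backup-lock case: the same approach should apply, but only once you are sure the vzdump job is really gone. A sketch, with the VMID left as a placeholder:)

Code:
# make sure no backup process is still holding the VM
# (the [v] trick keeps grep from matching itself)
ps aux | grep [v]zdump
# if nothing is running, clear the backup lock
qm unlock <vmid>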
 
Same issue, but unlock returns an error.
5-node cluster.

Node version:
Code:
pveversion -v
pve-manager: 2.0-54 (pve-manager/2.0/4b59ea39)
running kernel: 2.6.32-10-pve
proxmox-ve-2.6.32: 2.0-63
pve-kernel-2.6.32-10-pve: 2.6.32-63
lvm2: 2.02.88-2pve2
clvm: 2.02.88-2pve2
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-2
pve-cluster: 1.0-26
qemu-server: 2.0-33
pve-firmware: 1.0-15
libpve-common-perl: 1.0-23
libpve-access-control: 1.0-17
libpve-storage-perl: 2.0-16
vncterm: 1.0-2
vzctl: 3.0.30-2pve2
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-8
ksm-control-daemon: 1.1-1



Failed backup log:
Code:
INFO: starting new backup job: vzdump 1215 --quiet 1 --mailto [EMAIL="it@domain.com"]it@domain.com[/EMAIL] --mode snapshot --compress lzo --storage NASbkup2disk4 --node proxmox7
INFO: Starting Backup of VM 1215 (qemu)
INFO: status = running
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO:   Logical volume "vzsnap-proxmox7-0" created
INFO:   Logical volume "vzsnap-proxmox7-1" created
INFO: resume vm
INFO: vm is online again after 1 seconds
INFO: creating archive '/NASBKUP/NASbkup2disk4/dump/vzdump-qemu-1215-2012_04_23-22_30_01.tar.lzo'
INFO: adding '/NASBKUP/NASbkup2disk4/dump/vzdump-qemu-1215-2012_04_23-22_30_01.tmp/qemu-server.conf' to archive ('qemu-server.conf')
INFO: adding '/dev/lvm1/vzsnap-proxmox7-0' to archive ('vm-disk-ide0.raw')
INFO: adding '/dev/lvm1/vzsnap-proxmox7-1' to archive ('vm-disk-ide3.raw')
INFO: lzop: No space left on device: <stdout>
INFO: received signal - terminate process
INFO: unable to open file '/etc/pve/nodes/proxmox7/qemu-server/1215.conf.tmp.296787' - Input/output error
command 'qm unlock 1215' failed: exit code 16
INFO: lvremove failed - trying again in 8 seconds
INFO: lvremove failed - trying again in 16 seconds
INFO: lvremove failed - trying again in 32 seconds
ERROR: command 'lvremove -f /dev/lvm1/vzsnap-proxmox7-0' failed: exit code 5
INFO: lvremove failed - trying again in 8 seconds
INFO: lvremove failed - trying again in 16 seconds
INFO: lvremove failed - trying again in 32 seconds
ERROR: command 'lvremove -f /dev/lvm1/vzsnap-proxmox7-1' failed: exit code 5
ERROR: Backup of VM 1215 failed - command '/usr/lib/qemu-server/vmtar '/NASBKUP/NASbkup2disk4/dump/vzdump-qemu-1215-2012_04_23-22_30_01.tmp/qemu-server.conf' 'qemu-server.conf' '/dev/lvm1/vzsnap-proxmox7-0' 'vm-disk-ide0.raw' '/dev/lvm1/vzsnap-proxmox7-1' 'vm-disk-ide3.raw'|lzop >/NASBKUP/NASbkup2disk4/dump/vzdump-qemu-1215-2012_04_23-22_30_01.tar.dat' failed: exit code 1
INFO: Backup job finished with errors
TASK ERROR: job errors


qm unlock
Code:
root@proxmox7:~# qm unlock 1215
unable to open file '/etc/pve/nodes/proxmox7/qemu-server/1215.conf.tmp.478314' - Device or resource busy
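
(Note that the failed job also left its two LVM snapshots behind; the lvremove retries in the log all failed. Once the underlying problem is fixed, a sketch for cleaning them up by hand, using the volume names from the log above:)

Code:
# list any leftover vzdump snapshot volumes
lvs | grep vzsnap
# remove them with the same command the backup job tried
lvremove -f /dev/lvm1/vzsnap-proxmox7-0
lvremove -f /dev/lvm1/vzsnap-proxmox7-1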
 
INFO: lzop: No space left on device: <stdout>

Seems your disk ran out of space?
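
(A quick way to confirm, using the storage path from the log, assuming the backup share is mounted there:)

Code:
# check free space on the backup target
df -h /NASBKUP/NASbkup2disk4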

qm unlock
Code:
root@proxmox7:~# qm unlock 1215
unable to open file '/etc/pve/nodes/proxmox7/qemu-server/1215.conf.tmp.478314' - Device or resource busy

What is the output of

# pvecm status
 
We have fixed the out-of-space problem; we had forgotten to add the CIFS mount to fstab for this node.
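
(For anyone hitting the same thing, a hypothetical /etc/fstab entry for a CIFS backup share; the server name, share, and credentials file below are made up, so adjust them to your environment:)

Code:
# example CIFS mount for the backup target (hypothetical names)
//nas.example.com/bkup2disk4 /NASBKUP/NASbkup2disk4 cifs credentials=/root/.cifscred,_netdev 0 0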



pvecm status
Code:
root@proxmox7:~# pvecm status
Version: 6.2.0
Config Version: 9
Cluster Name: lnp
Cluster Id: 764
Cluster Member: Yes
Cluster Generation: 468
Membership state: Cluster-Member
Nodes: 5
Expected votes: 5
Total votes: 5
Node votes: 1
Quorum: 3
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: proxmox7
Node ID: 4
Multicast addresses: 239.192.2.254
Node addresses: 10.10.140.107
 
Please try to restart the pve-cluster service (this re-mounts /etc/pve):

# /etc/init.d/pve-cluster stop
# /etc/init.d/pve-cluster start

Then try to unlock again:

# qm unlock 1215
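
(If the unlock still fails after that, it is worth checking that /etc/pve came back read-write; a quick sketch:)

Code:
# verify the cluster filesystem is mounted again
mount | grep /etc/pve
# and that the node has quorum (writes to /etc/pve need quorum)
pvecm status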
 
I had the same issue, and the 'qm unlock' command solved the problem.

My question is: why did that happen?
I did quite a number of VM migrations between nodes, and they were successful without this lock issue,
but I had the issue with one VM only.

Can you explain the reason for this to happen, please?
 
Hi,
I had the same issue, and the 'qm unlock' command solved the problem.

My question is: why did that happen?
I did quite a number of VM migrations between nodes, and they were successful without this lock issue,
but I had the issue with one VM only.

Can you explain the reason for this to happen, please?
If there's an unexpected problem during migration, the VM will stay locked so the admin can check and fix the problem before the VM is available for other operations. You can check the migration task log, which should contain the error.
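
(To find that error afterwards, you can open the migration task in the GUI task list, or grep the task logs on disk; assuming a default install, they live under /var/log/pve/tasks/:)

Code:
# task history lives under /var/log/pve/tasks/ (path assumed from a default install)
grep qmigrate /var/log/pve/tasks/index
# each matching UPID names a log file in the subdirectories with the full error output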
 
