I already reported this below the announcement of the new Proxmox version, but I think a new thread is better.
old thread (http://forum.proxmox.com/threads/12237-Updates-for-Proxmox-VE-2-2-including-QEMU-1-3)
I am looking at a problem with the new version. VMs greater than 8GB will not finish live migration. It hangs around:
Code:
Dec 18 16:43:12 migration status: active (transferred 1887637973, remaining 1518489600), total 8598913024)
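The same counters can also be polled from the QEMU monitor while the migration runs (assuming the monitor still answers once it hangs), e.g.:
Code:
root@timo:~# qm monitor 201
qm> info migrate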
The VM becomes unresponsive and is not running on the "new" node. The only way to get the VM running again is to cancel the migration AND unlock the VM through qm.
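For reference, after stopping the migration task this is roughly what it takes to get the VM usable again (201 is just the VMID used here, qm unlock only clears the migrate lock):
Code:
root@timo:~# qm unlock 201
root@timo:~# qm status 201
status: running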
old node:
Code:
root@timo:~# pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-7-pve: 2.6.32-60
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-71
pve-firmware: 1.0-21
libpve-common-perl: 1.0-40
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-7
ksm-control-daemon: 1.1-1
new node:
Code:
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-7-pve: 2.6.32-60
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-71
pve-firmware: 1.0-21
libpve-common-perl: 1.0-40
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-7
ksm-control-daemon: 1.1-1
Ok, narrowed the problem: if the disk is on LVM through virtio, the migration hangs at the end. LVM through IDE works without problems.

THIS IS INCORRECT: on our production VMs the change from virtio to IDE doesn't help, it still hangs at the same point.
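To be clear about what was changed between the two tests: only the disk line in /etc/pve/qemu-server/201.conf, roughly like this (the storage name "lvmstore" is just a placeholder for our LVM storage):
Code:
# hangs at the end of the migration:
virtio0: lvmstore:vm-201-disk-1
# what was tested instead (does not help on the production VMs):
ide0: lvmstore:vm-201-disk-1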
I have already disabled HA on the VM, but it still fails with approximately the last 1.5GB of RAM left to be transferred.
I also noticed that at the beginning of the process the "new" node had the kvm process for 201 running, but when the transfer stalls the process is gone on the "new" node.
Code:
/usr/bin/kvm -id 201 -chardev socket,id=qmp,path=/var/run/qemu-server/201.qmp,server,nowait -mon chardev=qmp,mode=control -vnc..................(it is a large line ;-))
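That is easy to see on the target node by looking for the kvm process of the VM, e.g. (the hostname is just a placeholder):
Code:
root@newnode:~# ps aux | grep '[k]vm -id 201'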