Live migration problem

rayk_sland

PVE 2.1 clustered on two cloned PCs.
LVM + iSCSI storage on another machine, available to both.
If I create a VM on one node and attempt to live migrate it to the other, the migration reports as successful, but the VM stops and the LVM volume becomes inaccessible. The only way I can get it back is to add a new volume to the VM and then edit /etc/pve/nodes/NODE/qemu-server/VMID.conf to point back to the original LVM volume. Then the VM can boot again. Any help with this? I'm looking forward to implementing PVE 2.1, but this is a bit of an oopsie...
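For reference, the recovery edit looks roughly like this; NODE and VMID are placeholders for the actual node name and VM id, and the disk line just points back at the volume the VM had before the migration:
Code:
# edit the VM's config on the node that currently owns it (NODE/VMID are placeholders)
nano /etc/pve/nodes/NODE/qemu-server/VMID.conf

# restore the disk line so it references the original LVM volume again, e.g.
virtio0: TEST:vm-100-disk-1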
 
Hi,
if you stop the VM, can you successfully migrate it and start it on the other node? In other words, is the storage accessible from both servers?
Do you use anything else inside the VM (a CD-ROM image, ...)?
For further help, post the storage config (/etc/pve/storage.cfg), the VM config, and the output of lvs + vgs from both nodes.
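Something like this on both nodes should collect everything (VMID is just a placeholder for the id of your VM):
Code:
cat /etc/pve/storage.cfg
cat /etc/pve/qemu-server/VMID.conf
lvs
vgs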

Udo
 
1) Yes: stop VM, migrate, restart VM works fine.
2) No CD-ROM image at all, nor any other virtual hardware besides a virtio disk and a virtio network adapter. (The original VM was installed via a PXE-booted Debian ISO, but that shouldn't affect anything; once the install is done, the installation knows nothing of that origin.)
3) The two hosts are mox-5 and mox-6.
mox-5:/etc/pve/storage.cfg
Code:
dir: local
    path /var/lib/vz
    content images,iso,vztmpl,rootdir
    maxfiles 0

iscsi: ISCSI
    target iqn.2012-09.com.pirate.sea.sea-chest:storage.lun0
    portal 172.19.1.22
    content none

lvm: TEST
    vgname LVM-ISCSI
    base ISCSI:0.0.0.scsi-14945540000000000294e577c516788d9b030de58ade3491d
    shared
    content images

mox-6:/etc/pve/storage.cfg
Code:
dir: local
    path /var/lib/vz
    content images,iso,vztmpl,rootdir
    maxfiles 0

iscsi: ISCSI
    target iqn.2012-09.com.pirate.sea.sea-chest:storage.lun0
    portal 172.19.1.22
    content none

lvm: TEST
    vgname LVM-ISCSI
    base ISCSI:0.0.0.scsi-14945540000000000294e577c516788d9b030de58ade3491d
    shared
    content images
mox-5 lvs + vgs
Code:
root@mox-5:~# lvs
  LV            VG        Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  vm-100-disk-1 LVM-ISCSI -wi----- 32.00g
  data          pve       -wi-ao-- 814.02g
  root          pve       -wi-ao-- 96.00g
  swap          pve       -wi-ao-- 5.00g
root@mox-5:~# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  LVM-ISCSI   1   1   0 wz--n-   2.00t  1.97t
  pve         1   3   0 wz--n- 931.01g 16.00g
mox-6 lvs + vgs
Code:
root@mox-6:~# lvs
  LV            VG        Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  vm-100-disk-1 LVM-ISCSI -wi-a--- 32.00g
  data          pve       -wi-ao-- 42.28g
  root          pve       -wi-ao-- 18.50g
  swap          pve       -wi-ao-- 4.00g
root@mox-6:~# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  LVM-ISCSI   1   1   0 wz--n-  2.00t 1.97t
  pve         1   3   0 wz--n- 74.03g 9.25g
VM-config before live migration
Code:
bootdisk: virtio0
cores: 1
ide2: none,media=cdrom
memory: 512
name: herman
net0: virtio=8E:67:BF:BD:04:95,bridge=vmbr0
ostype: l26
sockets: 1
virtio0: TEST:vm-100-disk-1
VM-config after live migration (note the missing virtio0 line)
Code:
bootdisk: virtio0
cores: 1
ide2: none,media=cdrom
memory: 512
name: herman
net0: virtio=8E:67:BF:BD:04:95,bridge=vmbr0
ostype: l26
sockets: 1


NEWSFLASH: Now it's not losing the hard disk configuration (and I know I've changed nothing), but the VM still stops abruptly after live migration.
 
The destination node appears to fail to start the VM at the point where the web-based migration log says it is doing so.

/var/log/pve/tasks/E/UPID:mox-6:0000BA8B:0088479A:5059F53E:qmstart:100:root@pam:
contains only
Code:
migration listens on port 60000
TASK OK
I would think it would include a note about starting the VM, if all were correct...
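In case it helps, this is roughly how I'm poking at it on the destination node; the task-log path is just the one from my setup, and 100 is my VMID:
Code:
# read the qmstart task log on the destination node (mox-6)
cat /var/log/pve/tasks/E/UPID:mox-6:0000BA8B:0088479A:5059F53E:qmstart:100:root@pam:

# try starting the VM by hand and check its state
qm start 100
qm status 100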
 
Hi,
can you test whether the same issue appears if you change /etc/pve/storage.cfg (it is cluster-wide) to
Code:
dir: local
    path /var/lib/vz
    content images,iso,vztmpl,rootdir
    maxfiles 0

iscsi: ISCSI
    target iqn.2012-09.com.pirate.sea.sea-chest:storage.lun0
    portal 172.19.1.22
    content none

lvm: TEST
    vgname LVM-ISCSI
    shared
    content images
Udo
 
No change.
The VM still dies on migration, although now it keeps the disk configuration intact.
 
What kernel do you use? In my experience, if you have problems with live migration but succeed with offline migration, 9 times out of 10 the reason is a difference in the nodes' hardware. So are your nodes 100% identical in every way?
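A quick sanity check is to compare something like this on both nodes (just a sketch; the point is that PVE packages, kernel, and CPU model should match):
Code:
pveversion -v
uname -r
grep "model name" /proc/cpuinfo | sort -u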
 
Problem solved. I thought I had applied all the latest updates to both nodes, but I hadn't. I thought I'd double-check that and found the embarrassing truth. They are now at the same patch level and live migration now works.
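For anyone else who hits this: bringing both nodes to the same patch level was just the usual apt routine, assuming the standard PVE repositories are configured:
Code:
apt-get update
apt-get dist-upgrade
pveversion -v   # compare the output on both nodes afterwards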
 
I use the best of both. Everything I can do with OpenVZ, I do with OpenVZ; where OpenVZ won't work, I use KVM. I have 10 OpenVZ containers and 5 KVM VMs. Runs like a horse.
 
