KVM Block Device Migration

bert64
Renowned Member
Aug 14, 2008
I have a setup with two Proxmox installs connected to an iSCSI back end for storage over gigabit Ethernet...

The setup works well, and live migration works nicely... However, if any of the images gets bogged down and starts to swap heavily, it very quickly saturates the iSCSI link.

I can solve this by disabling swap and increasing the RAM on the images, but that is not an ideal solution.

Would it be possible to use local disks on the Proxmox systems for the swap drives, and then use KVM block device migration when migrating these images to other servers? Such a setup should work well: the swap volumes will never be more than a couple of GB in size, and the data volumes could stay on iSCSI. It also means I wouldn't need to waste space on my iSCSI device (and thus its backups) with swap.
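Roughly, I'm picturing something like this (just a sketch; the volume group name "pve", VM ID 101, and the guest device /dev/vdb are assumptions for illustration):

    # On the Proxmox host: carve out a small local LV for the guest's swap
    lvcreate -L 2G -n vm-101-swap pve

    # Attach /dev/pve/vm-101-swap to the VM as an extra disk, then
    # inside the guest (assuming it shows up as /dev/vdb):
    mkswap /dev/vdb
    swapon /dev/vdb
    echo '/dev/vdb none swap sw 0 0' >> /etc/fstab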

http://www.linux-kvm.com/content/qemu-kvm-012-adds-block-migration-feature
 
That's the point: KVM has the capability to live-migrate block devices, so really the question is how I can enable that functionality.
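From the article, it looks like it's exposed through the regular migrate monitor command, something like this (a sketch; the destination address is a placeholder):

    # In the QEMU monitor on the source host
    # (destination QEMU started with -incoming tcp:0:4444):
    # -d = detach (run in background), -b = also copy block devices in full
    migrate -d -b tcp:192.168.1.20:4444

    # -i copies incrementally, for the case where source and
    # destination already share a common base image:
    migrate -d -i tcp:192.168.1.20:4444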
 
Hi,
this was used by early PVE versions for live migration (before PVE had shared storage support). Since shared storage arrived, the KVM local live migration has not been brought back (not enough manpower)...

Udo
 
I believe previous versions of PVE used an rsync-based approach to live migration, as block migration support was not present in KVM at the time...

I'm wondering how much work it would be to implement support for KVM block migration, and whether it would work when there's a mix of shared storage and local block devices...

Hopefully it shouldn't be that hard; it just seems to be a couple of extra flags to KVM... The only stumbling block I see is making sure it only migrates certain devices and doesn't try to overwrite the shared iSCSI storage (see the sketch below).
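As far as I can tell, the old migrate -b simply copies every attached disk, so there is no per-device selection there; newer QEMU versions expose a QMP drive-mirror command that targets a single drive, roughly like this (a sketch; the device name and target path are assumptions):

    { "execute": "drive-mirror",
      "arguments": { "device": "drive-virtio1",
                     "target": "/dev/pve/vm-101-swap",
                     "format": "raw",
                     "mode": "existing",
                     "sync": "full" } }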
 
Are you sure you really want KVM block-level migration?

When we last tested KVM block-device migration, it blew up all thin-provisioned disks to their full size. Let's say we have a VM with a disk of 200 GB virtual space, of which only 20 GB are in use. Before block migration the VM disk file was 20 GB in size; after migration it was 200 GB. That was a show-stopper for us.
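For reference, the effect is easy to check with qemu-img (the file name is just an example):

    # Compare allocated vs. virtual size before and after migration
    qemu-img info vm-101-disk-1.qcow2
    # "virtual size" stays at 200 GB either way; "disk size" is what
    # is actually allocated (20 GB before the migration, 200 GB after
    # it in our test)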
 
Yes, since it would only be used for swap...
Also, I use LVM for disk allocation, which doesn't do thin provisioning anyway.
 
KVM block migration is already on my task list (though it has low priority). The plan is to look at that after 2.0.