Live KVM migration without shared storage

spirit

Famous Member
Apr 2, 2010
www.odiso.com
Spirit, I like this idea.

One additional suggestion:
NBD should be over SSH unless "migration_unsecure: 1" is set in datacenter.cfg

Yes, sure, no problem.

I think I'll try to work on it this month, as it seems that Proxmox users need it :)
I think it's not too big a task; we already have almost everything we need to implement this.

(I just need to push some other patches first, so I think it should be ready by the end of October.)
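For reference, the opt-out suggested in the quote above would presumably live in the cluster-wide config file. A sketch of what that might look like (the option name comes from the quote; treat it as a proposal, not a shipped setting):

```
# /etc/pve/datacenter.cfg
# send NBD migration traffic in the clear (trusted networks only);
# without this line, tunnelling over SSH would be the default
migration_unsecure: 1
```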
 

felipe

Active Member
Oct 28, 2013
I can also say that live migration without shared storage is sometimes a very nice feature, and a lot of platforms like Hyper-V have implemented it recently.
At the moment we don't have shared storage (we are trying to implement it in the coming months), but all eight old nodes will still have their old local storage.
Live migration gives me the flexibility to move machines to other nodes if I need more CPU power, disk or whatever...
And even migration from local storage to shared storage with live migration would reduce downtime...

Regards,
Philipp
 

mir

Famous Member
Apr 14, 2012
Copenhagen, Denmark
On the TODO list you find:

- Be able to migrate both VM and storage as a single process

When the above feature is implemented, you will be able to live migrate without shared storage.
 

tincboy

Active Member
Apr 13, 2010
On the TODO list you find:

- Be able to migrate both VM and storage as a single process

When the above feature is implemented, you will be able to live migrate without shared storage.
Any update on this feature?
 

Qoke

New Member
Feb 23, 2016
It is possible. Go to Hardware, virtual disk, move to another storage, done.
...but not HA and not multiple disks at the same time

So are you saying that by going to "Hardware, virtual disk and move to another storage" you are actually live migrating the entire virtual machine (i.e. while it is running) to another host?

The reason I ask is because the title of this thread is about live migration of a KVM virtual machine (i.e. like vMotion with live storage migration), not just moving a virtual disk from one local storage to another local storage.

Could you please confirm this?
 

macday

Member
Mar 10, 2010
Stuttgart / Germany
So are you saying that by going to "Hardware, virtual disk and move to another storage" you are actually live migrating the entire virtual machine (i.e. while it is running) to another host?

The reason I ask is because the title of this thread is about live migration of a KVM virtual machine (i.e. like vMotion with live storage migration), not just moving a virtual disk from one local storage to another local storage.

Could you please confirm this?

I have done this many times, e.g. migrating VMs to an NFS storage and after that to another host. And all of this while the VM is running.
 

Qoke

New Member
Feb 23, 2016
I appreciate this can be done; however, you need NFS (i.e. a shared storage system) in order to perform what you are describing above.

The title of this thread is:
Live KVM migration without shared storage

The OP is asking for the equivalent of VMware vMotion with Storage vMotion, which does not require any shared storage system. This would be very useful when Proxmox is deployed using locally attached storage (DAS).
 

rkl

New Member
Sep 21, 2014
I'm wondering if a "double rsync" would work here to reduce the pause time when migrating local filestore:

1. Rsync node1's local filestore to node2, with no VM pausing/migration
2. Live migrate node1's VM to node2, but pause node2's VM rather than running it at the end of the migration (node1's VM would be paused or stopped as well)
3. Rsync node1's local filestore to node2 again - this should be much faster than the first rsync because it is differential
4. Resume node2's VM, stop node1's VM if it's paused
5. If everything worked, delete node1's local filestore

I suspect this might work fine with uncompressed (e.g. raw) images, but might not be so hot with compressed qcow2 images. I still think that for VMs of any size (e.g. more than 10 GB), even the above method would result in noticeable downtime, which is why shared filestore (I use iSCSI) is still a better route.
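A minimal sketch of that sequencing as a shell function. Everything here is an example (node name, VM id, storage path), and note that qm has no real "migrate but leave paused" switch, so a plain suspend/resume stands in for steps 2 and 4 and the VM's RAM state is not actually carried over:

```shell
#!/bin/bash
# Sketch of the "double rsync" idea above. All names/paths are examples.
set -eu

double_rsync_migrate() {
    local vmid="$1" target="$2" store="$3"   # e.g. 100 node2 /var/lib/vz/images

    # 1. First rsync pass while the VM keeps running (bulk of the data).
    rsync -a --sparse "$store/$vmid/" "root@$target:$store/$vmid/"

    # 2. Pause the VM so the image stops changing.
    qm suspend "$vmid"

    # 3. Second, differential pass: --inplace lets rsync's delta
    #    algorithm update only the changed parts of the image.
    rsync -a --inplace "$store/$vmid/" "root@$target:$store/$vmid/"

    # 4. Resume on the target, stop the source copy.
    ssh "root@$target" qm resume "$vmid"
    qm stop "$vmid"

    # 5. Only after everything worked: delete the source filestore.
    rm -rf "${store:?}/${vmid:?}"
}
```

The second pass is the one that determines the pause time, which is why it matters that it only ships deltas.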
 

Qoke

New Member
Feb 23, 2016
Another possible approach could be as follows:

1. Snapshot the storage (let's call this the 1st snapshot)
2. Sync the 1st snapshot to the new host (this will take some time)
3. Snapshot the storage again (2nd snapshot)
4. Sync the 2nd snapshot across to the new host (much like your rsync approach, this only sends incremental changes, so it should not take too long)
5. Pause the VM on the original host
6. Snapshot the storage on the original host (3rd snapshot)
7. Sync this 3rd snapshot across to the new host (a very small incremental change, so this should be very quick)
8. Resume the VM on the new host
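If the local storage happens to be ZFS, that snapshot sequence maps almost directly onto incremental zfs send/receive. A sketch under that assumption (dataset, snapshot and host names are made up, and transferring the VM's config and RAM state is left out):

```shell
#!/bin/bash
# Sketch of the three-snapshot migration above, assuming ZFS-backed
# local storage. All names are examples; state transfer is omitted.
set -eu

snapshot_migrate() {
    local ds="$1" target="$2" vmid="$3"

    # Steps 1+2: first snapshot, full send while the VM runs (slow part).
    zfs snapshot "$ds@mig1"
    zfs send "$ds@mig1" | ssh "root@$target" zfs receive "$ds"

    # Steps 3+4: second snapshot, incremental send of changes since mig1.
    zfs snapshot "$ds@mig2"
    zfs send -i "$ds@mig1" "$ds@mig2" | ssh "root@$target" zfs receive "$ds"

    # Steps 5-7: pause the VM, final snapshot, send the tiny last delta.
    qm suspend "$vmid"
    zfs snapshot "$ds@mig3"
    zfs send -i "$ds@mig2" "$ds@mig3" | ssh "root@$target" zfs receive "$ds"

    # Step 8: resume the VM on the new host (config move not shown).
    ssh "root@$target" qm resume "$vmid"
}
```

Each incremental send only carries blocks changed since the previous snapshot, so the downtime window is roughly the time of the third, tiny send.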
 

Ingo S

Well-Known Member
Oct 16, 2016
Sorry for digging up this old thread, but I came across this issue a few days ago when I found out that Proxmox refuses to migrate VMs that are stored on a local LVM.
I don't know if this behaviour is intended, but I've found a workaround that even makes it possible to do a kind of live migration on LVM-based local storage with only a few seconds of downtime.

I created a bash script that takes care of every step necessary to migrate a VM from one cluster node to another while keeping it online as long as possible.

Basically this boils down to the following steps:

1.) Look up the VM's hard disks
2.) Background-sync them to the target node
3.) Pause the VM
4.) Move the VM to the target node
5.) Unpause it
6.) Done!
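A heavily simplified sketch of those steps follows. The attached pveulm script does the real work (including snapshot-based syncing, multiple disks and error handling); the storage name, volume group and paths below are just illustrative examples:

```shell
#!/bin/bash
# Very rough sketch of the pveulm steps above; not the actual script.
set -eu

ulm_migrate() {
    local vmid="$1" target="$2"

    # 1.) Look up the VM's hard disks from its config (local-lvm volumes).
    local disks
    disks=$(qm config "$vmid" | awk -F'[:,]' '/local-lvm:/ {print $3}')

    # 2.) Sync each logical volume to the target node. (The real script
    #     copies from an LVM snapshot while the VM keeps running.)
    for disk in $disks; do
        dd if="/dev/pve/$disk" bs=4M status=none \
            | ssh "root@$target" dd of="/dev/pve/$disk" bs=4M status=none
    done

    # 3.) + 4.) Pause the VM, then "move" it by relocating its config
    #     file inside the clustered /etc/pve filesystem.
    qm suspend "$vmid"
    mv "/etc/pve/nodes/$(hostname)/qemu-server/$vmid.conf" \
       "/etc/pve/nodes/$target/qemu-server/"

    # 5.) Unpause the VM on the target node.
    ssh "root@$target" qm resume "$vmid"
}
```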

A more detailed description of my script is available on my website (sorry, still building it up...) here: https://ischmidt.info/pages/home/it/pveulm.sh.php
It is called pveulm (Proxmox Virtual Environment unsynced live migration).

I attached the script to this post as a .txt file. Of course it is also available on the above site, where you will always find the newest version. (Maybe I will move it to a GitHub project later on.)
Feel free to use it, share it, modify it. Please let me know if you have suggestions, improvements etc.

I hope this helps, saving some people a lot of time...

EDIT: Found a dangerous BUG!
The variable $_snapname was not correctly set when handling more than one virtual disk, which led to data corruption when a VM with multiple disks was migrated. - FIXED!
 

Attachments

  • pveulm.txt
    10.1 KB

spirit

Famous Member
Apr 2, 2010
www.odiso.com
Hi all,

I have sent some preliminary patches to the Proxmox dev mailing list. It seems to be in good shape already.
Good news: multiple disks are supported (drive mirroring in parallel :) ).

If somebody wants to help test and debug, I can send a link to a prebuilt deb package for testing.
 

Robstarusa

Active Member
Feb 19, 2009
I know this is late, but I've done live migration without 'shared storage' by doing shared LVM on top of DRBD on top of local disk. The storage at the bottom of the stack was all local disk.

This works very well actually.
 

spirit

Famous Member
Apr 2, 2010
www.odiso.com
I know this is late, but I've done live migration without 'shared storage' by doing shared LVM on top of DRBD on top of local disk. The storage at the bottom of the stack was all local disk.

This works very well actually.

Well, in your case you do have a "shared" storage, as DRBD is replicating data between the hosts.
We are talking here about doing it without DRBD, with raw, qcow2, ZFS or any other local storage.
 

gkovacs

Well-Known Member
Dec 22, 2008
Budapest, Hungary
Hi all,

I have sent some preliminary patches to the Proxmox dev mailing list. It seems to be in good shape already.
Good news: multiple disks are supported (drive mirroring in parallel :) ).

If somebody wants to help test and debug, I can send a link to a prebuilt deb package for testing.

So how is this coming along? Has anyone tested it?
I would gladly test...
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
So how is this coming along? Has anyone tested it?
I would gladly test...

If you want to follow ongoing development and discussions, check out the pve-devel mailing list. Once something is ready for wider testing without the need to compile it yourself, a package is uploaded to the pvetest repository anyway (which you can track on your test machines). This patch set is currently under review in its 7th incarnation, because larger changes like this often require some back and forth and detailed review.
 
