Live KVM migration without shared storage

Live migration is not possible without shared storage;
you need to backup/restore, or shut down the VM and migrate it, and the downtime depends on your hardware and network equipment.

For minimal downtime you need shared storage.
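
As a minimal sketch of the backup/restore route (VMID 100 and the paths are placeholders; the exact archive name depends on your vzdump settings):

Code:
# on the source node: stop the guest and take a full backup
vzdump 100 --mode stop --dumpdir /mnt/transfer

# copy the archive to the target node, then restore it there
qmrestore /mnt/transfer/vzdump-qemu-100-*.vma 100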
 
Please make sure the links in your post are correct and working.

In a virtualization cluster, the software brings up the VM on another node if the current node fails (or you do it manually); this is only possible with shared storage.
If you prefer manual migrations between nodes, you also want shared storage for fast migrations.

Consider the following scenario: you have two nodes without shared storage and have to bring down one node, but need to fail over the VMs to the second node. How much time will that take?
The correct answer is: you don't know, because if you have to copy several VMs totalling 2TB of data, that will take hours. How would you plan your maintenance window?
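
To put a rough number on that, assuming a saturated gigabit link at roughly 110 MB/s:

Code:
# back-of-the-envelope transfer time for 2TB over gigabit ethernet
echo $(( 2 * 1024 * 1024 / 110 / 60 ))   # ~317 minutes, i.e. well over 5 hours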

sounds like a nice feature, but not that useful
 
OpenVZ has this feature too; it's useful sometimes.
Can anyone from the Proxmox team comment on this thread?
 
This feature only works if the source is up and running from the start of the migration to the end, so you cannot use it for fail-over. Fail-over is only possible with shared storage!
 
I don't want it for fail-over; I just want to use this feature to move a VM, while it's running, to another host in the same LAN.
 
This is currently not implemented. We had such a feature some years ago, but removed it due to the large overhead (it copied all the data).
 
When I saw the live backup feature without LVM, I thought that in rare cases moving a live VM without shared storage would be a nice feature.
Do you have any plans to bring it back, or do you disagree with this feature?
 
Hi, indeed Proxmox doesn't yet support live migration + storage migration at the same time.
But technically it's possible; we just need a little time to implement this.
Can you file a request in the Proxmox Bugzilla?
 
Although it is a nice feature to have, in my opinion it is not very practical. The reason is that any production cluster will want as little downtime as possible and will hardly run any VM on local storage, since it really does not provide any redundancy. Even on a super fast LAN, the copy will still take more than a few minutes during live migration from local storage. In a situation where one has 20 VMs on a Proxmox node, a simple task like a reboot will take a while, since all the VMs have to be copied to another storage.

At the same time, I can also see the need for local storage live migration. For example, I have 7 VMs running on the local storage of 4 Proxmox nodes, simply because these VMs are the MONs, MDSs and admin machines for the Ceph cluster. Since the Proxmox nodes are required to be up at all times, I put these VMs on local storage. When I have to reboot one of the nodes, I do have to shut down these VMs. Each VM only has a 10GB virtual drive, so it's not a big issue. But I will never put mission-critical VMs on any of the Proxmox nodes.
 
I seem to remember the original Proxmox feature used rsync to copy the disk images, which had considerable overhead because it had to:

1. copy the disk images
2. pause the VM
3. checksum the whole disk image on both sides in chunks to determine which bits had changed, and then copy those
4. restart the VM

Between stages 2 and 4 there is a LOT of I/O, and in some cases quite considerable time; roughly, it worked like the sketch below.
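
A minimal sketch of that two-pass flow, assuming the standard /var/lib/vz layout and VMID 100 (paths, host name, and commands are illustrative, not the exact original implementation):

Code:
# pass 1: bulk copy while the VM keeps running
rsync -a /var/lib/vz/images/100/ root@target:/var/lib/vz/images/100/

# pause the VM, then rerun rsync; it checksums the images in chunks
# and only transfers the pieces that changed during pass 1
qm suspend 100
rsync -a --inplace /var/lib/vz/images/100/ root@target:/var/lib/vz/images/100/
# then hand the config over to the target node and restart the guest there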

Since then, KVM has implemented storage migration natively, so instead of having to checksum the whole disk image, the KVM process keeps track of which blocks have changed, resulting in much less downtime, although obviously transferring a large disk image can still be time-consuming.
 
Yes, the plan is to implement the KVM storage migration feature through an NBD network server, so it'll work with any storage, not only files.


Code:
phase1
------
target host
-----------
create new volumes if storage != shared


phase2
------
1) target host

start nbd_server
----------------
nbd_server_start ip:port

add drives to mirror to nbd
---------------------------
nbd_server_add drive-virtio0
nbd_server_add drive-virtio1


2) source host

start mirroring of the drives
-----------------------------
drive-mirror target = nbd:host:port:exportname=drive-virtioX

when drive-mirror is finished (block-job-complete),
the source vm will continue to access the volume on the remote host through nbd

start vm migration
------------------

end of vm migration
-------------------


phase3
------
1) target host

resume vm

nbd_server_stop


2) source host

delete source mirrored volumes
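
For reference, a hand-run version of phase 2 on the QEMU monitor could look like this (IP, port, and drive ID are just examples; check your QEMU version for the exact HMP command names):

Code:
# on the target host's monitor: export the pre-created volume writable
nbd_server_start 192.168.0.2:3333
nbd_server_add -w drive-virtio0

# on the source host's monitor: mirror the running disk to that export
# (-n reuses the existing target instead of creating a new image)
drive_mirror -n drive-virtio0 nbd:192.168.0.2:3333:exportname=drive-virtio0

# once the mirror is in sync, switch the source VM over to the remote
# copy, then start the normal RAM migration
block_job_complete drive-virtio0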
 
