How to migrate VM from one PVE cluster to another

G'day there,

As we had to move all VMs to one box (with several nodes) to facilitate a rebuild of the other (ESXi to PVE), we now need to migrate them back between clusters.

Easiest way: Backup/Restore

Is Tom's recommendation of using vzdump and then qmrestore still the easiest/best option to move VMs between clusters?

The main concern is migration speed and the resulting downtime as the moves wouldn't be done online.

Are there any other methods that would avoid at least a large portion of the downtime?
proxmove is old and needs the VMs offline, so it's no help here. Is there an unsupported CLI route?

Using vzdump with snapshot mode may be ideal?
How large is the inconsistency risk in that case?

Cheers,
LinuxOz
 
Is Tom's recommendation of using vzdump and then qmrestore still the easiest/best option to move VMs between clusters?
Yes.

Using vzdump with snapshot mode may be ideal?
If you can run the old VM while you create the new one in the new cluster, then yes.
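The backup/restore route above can be sketched roughly as follows. This is just an illustrative sequence, assuming VM ID 100, a directory-based dump location, and a reachable target node; adjust storage names and paths to your setup:

```shell
# On the source node: create a snapshot-mode backup of VM 100
# (--mode snapshot keeps the VM running during the backup)
vzdump 100 --mode snapshot --compress zstd --dumpdir /mnt/backups

# Copy the archive to a node in the target cluster
# (hostname "target-node" is a placeholder)
scp /mnt/backups/vzdump-qemu-100-*.vma.zst root@target-node:/var/lib/vz/dump/

# On the target node: restore, optionally onto a different storage
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst 100 --storage local-lvm
```

The downtime window is then only the final shutdown plus the time to take and transfer the last backup, not the full copy time if you pre-seed an earlier backup first.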
 
yes, it's a feature that is currently being developed.

Hello!
I just stumbled on this thread because I currently have the same situation of transferring VMs between clusters.

Is there any kind of prediction, when this feature may be included in a stable version?
Or some kind of roadmap?
 
To reduce backup and restore time, I usually shut down the VM and then copy both the config and image files to the new cluster.
  • config: /etc/pve/qemu-server (or lxc for containers)
  • image location: click on your VM/CT in the Resources section
So you do this manually via ssh?

And then, once copied, the new cluster recognizes the config and thus automatically adds the VM?
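For reference, the manual copy described above could look like this. All node names, VM IDs, and paths here are examples (and assume a simple directory storage; other storage types keep images elsewhere):

```shell
# 1. Shut down the VM on the old cluster
qm shutdown 100

# 2. Copy the disk image to the matching location on the target node
#    (directory storage assumed; path differs for LVM, ZFS, Ceph, etc.)
scp /var/lib/vz/images/100/vm-100-disk-0.qcow2 \
    root@new-node:/var/lib/vz/images/100/

# 3. Copy the config; placing it under /etc/pve/qemu-server on the
#    target node makes the cluster pick the VM up automatically
scp /etc/pve/qemu-server/100.conf \
    root@new-node:/etc/pve/qemu-server/100.conf
```

Make sure the VM ID is free on the target cluster and the referenced storage name in the config exists there, or edit the config accordingly.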
 
use a shared filesystem that is available to both clusters, live migrate the disk(s) to the shared storage, and once that's done: shut down the VM, copy the config file over, and start it in the new cluster. Once confirmed working, delete the config file in the old cluster.

conf file location is something like /etc/pve/nodes/xxy/qemu...
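A rough sketch of that shared-storage approach, assuming a storage named "shared-nfs" configured in both clusters and example node names "oldnode"/"newnode" (exact commands and paths depend on your setup):

```shell
# 1. Live-migrate the disk onto the shared storage while the VM runs
qm move-disk 100 scsi0 shared-nfs

# 2. Shut down the VM and copy its config to the new cluster
qm shutdown 100
scp /etc/pve/nodes/oldnode/qemu-server/100.conf \
    root@new-node:/etc/pve/nodes/newnode/qemu-server/

# 3. Start the VM on the new cluster; once confirmed working, delete
#    the old config so the VM cannot be started in both clusters:
#    rm /etc/pve/nodes/oldnode/qemu-server/100.conf
```

The downtime is then only the shutdown/config-copy/start window, since the bulk disk copy happened online.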
 
Ah. Alright!

Thank you for the instructions.
I'll try that. :)
 
Another option is proxmove: a Python program that does offline migration via rsync. I just gave it a try again.

What one should know before using it:
  • It's faster than backup/restore since it copies the data directly from the source server to the target server, so the downtime is shorter.
  • It's a one-step process instead of the two steps of backup/restore, and it uses less I/O (network and/or disk) because of that.
  • It's not as easy and well supported as backup/restore (I just hit a bug with the latest PVE 7.x which caused the program to not function at all, but since I got it fixed with the developer, all is working fine at the moment).
  • It does not support all storage backends (at least, as far as I know, LVM-Thin and Ceph are not supported).
 
Hi,
Proxmox VE 7.3 includes:
  • Framework for remote migration to cluster-external Proxmox VE hosts

Is it possible to work with that already, or should we wait for the next releases?
I didn't find any related docs yet...

thanks!
 
it's included as a preview/experimental feature, see the commands pct remote_migrate and qm remote_migrate. You'll need an API token with the relevant privileges (the command will error out if you are missing them), and I would strongly suggest playing around with it in a test lab setting before letting it near a production environment. As I said, it's a preview/experimental feature and might still have bugs and rough edges.

among the things not yet supported are:
- snapshots (this requires some refactoring of our privilege checks, nothing else blocking it since we re-use the same storage migration code)
- pending changes (this requires some refactoring of our privilege checks, nothing else blocking it)
- replication (this one is a bit of a bigger feature, but definitely planned)
- non-dir based shared storages for offline/container migration (this one just lacks some implemented functions in the storage plugins and should be easiest of them all to implement)
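An invocation of the experimental command might look roughly like this; the hostname, token name, secret, and fingerprint are all placeholders, and the exact syntax may differ between versions (check `qm help remote_migrate` on your installation):

```shell
# Migrate VM 100 to a cluster-external PVE host, keeping ID 100.
# The endpoint string carries the target host, an API token with the
# required privileges, and the target's certificate fingerprint.
qm remote_migrate 100 100 \
  'host=target.example.com,apitoken=PVEAPIToken=root@pam!migrate=<secret-uuid>,fingerprint=<target-cert-fingerprint>' \
  --target-bridge vmbr0 \
  --target-storage local-lvm \
  --online
```

`--target-bridge` and `--target-storage` map the VM's network and disks onto resources that exist on the target host.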
 
