Framework for remote migration to cluster-external Proxmox VE hosts

1) Is there a plan to add support for cloned disks? Most of my VMs are clones.

Trying to migrate a clone leads to this error:

2023-02-06 13:11:48 ERROR: Problem found while scanning volumes - can't migrate 'ceph-data:base-100-disk-0/vm-103-disk-0' as it's a clone of 'base-100-disk-0' at /usr/share/perl5/PVE/ line 503, <GEN24> line 2.
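A possible workaround until clones are supported (untested sketch): moving a linked-clone disk to a storage turns it into a full copy, which removes the dependency on the base image so the scan no longer sees a clone. The disk slot (scsi0) is an assumption; check `qm config 103` for the real one.

```shell
# Flatten the linked clone: "qm disk move" writes a full copy of the disk,
# dropping the reference to base-100-disk-0; --delete removes the old volume.
qm disk move 103 scsi0 ceph-data --delete 1
```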

2) Cloud-Init enabled VMs appear to be failing too:

2023-02-06 13:18:42 ERROR: migration aborted (duration 00:00:01): no export formats for 'ceph-data:vm-103-cloudinit' - check storage plugin support!
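Since the cloud-init volume is generated on the fly anyway, one hedged workaround is to detach it before migrating and recreate it on the target. The drive slot (ide2) is an assumption; check `qm config 103` for the actual slot.

```shell
# Remove the cloud-init drive from the config before migrating...
qm set 103 --delete ide2
# ...and after a successful migration, recreate it on the target cluster:
qm set 103 --ide2 local-lvm5:cloudinit
```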

3) Custom CPU types not defined on a target lead to a failure:

failed - failed to handle 'config' command - vm 103 - unable to parse value of 'cpu' - Custom cputype 'Cascadelake-Server-noTSX-noCLWB' not found
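Custom CPU models are cluster-local configuration, stored in `/etc/pve/virtual-guest/cpu-models.conf`, so the same entry has to exist on the target cluster before the config can be parsed there. A sketch of such an entry (the reported model and flags below are assumptions, copy your actual entry from the source cluster):

```
cpu-model: Cascadelake-Server-noTSX-noCLWB
    reported-model Cascadelake-Server
    flags -hle;-rtm;-clwb
```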

(Side note, it's pretty exciting functionality)
One issue I'm seeing: after the migration, the origin cluster leaves an artifact behind. I've not looked at the files yet. Is there any data that would help determine the cause of this? I haven't done any cleanup on these in case data is requested.
It's actually documented here: , see the "delete" parameter. For "artifacts", you can qm unlock <VMID> and then delete them or do whatever you need to do.
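Putting that together (untested sketch, VMID and endpoint placeholders are assumptions): `--delete 1` tells `qm remote-migrate` to remove the source VM and its disks after a successful migration, and `qm unlock` frees a leftover VM so it can be destroyed manually.

```shell
# Migrate and clean up the source side in one go:
qm remote-migrate 103 103 '<target-endpoint>' --target-bridge vmbr0 --target-storage local-lvm5 --delete 1
# For leftovers from an earlier run: unlock first, then remove.
qm unlock 103
qm destroy 103
```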
This qm remote-migrate is very cool for moving VMs between clusters, but what if I just want to sync the VMs as a cold standby? Something like a qm remote-sync that syncs via ZFS snapshots? That way I could keep a working copy as a cold standby on the remote cluster in case the primary cluster fails.

I am looking at pve-zsync, but I'm still using ZFS replication between the nodes of the same cluster and didn't want to break that if I have to migrate the VMs to another node, which is a limitation of pve-zsync. I love that when I migrate a VM to another node, replication automatically flips to the other direction.
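For reference, a cold-standby sync with pve-zsync could be sketched like this (untested; the target IP, pool, job name, and snapshot count are assumptions):

```shell
# Create a recurring sync job for VM 103 to a remote ZFS pool;
# pve-zsync installs a cron entry and keeps the last 7 snapshots.
pve-zsync create --source 103 --dest 192.0.2.10:tank/standby --name standby103 --maxsnap 7
# A one-off manual run of the same job:
pve-zsync sync --source 103 --dest 192.0.2.10:tank/standby --name standby103 --maxsnap 7
```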

Help me find the error:

qm remote-migrate 172 172 --target-endpoint',apitoken=PVEAPIToken=root@pam=monitoring!296f2fee-0e39-4e14-bbb4-6f5b417aff18' --target-bridge=vmbr0 --target-storage local-lvm5 --online

What should the command look like in the end?

What's the output, i.e., the error message you get when executing the command?
Thanks, I figured it out.

The correct command in my case:
qm remote-migrate 172 172  'apitoken=PVEAPIToken=root@pam!monitoring=296f2fee-0e39-4e14-bbb4-6f5b417aff18,host=,fingerprint=E1:D8:2D:88:00:2C:CE:62:46:92:DA:56:F3:CC:C5:14:C8:02:2B:8F:05:5F:69:CF:D2:73:77:01:06:8C:BE:A6' --target-bridge vmbr0 --target-storage local-lvm5 --online

It was not clear regarding the fingerprint: it was necessary to first run the command without this parameter in order to see the correct fingerprint, and then insert it into the original command.
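For anyone assembling this from scratch, the two pieces of the endpoint string can be obtained on the target side like this (untested sketch; the token id "monitoring" is an assumption):

```shell
# Create the API token; --privsep 0 disables privilege separation so the
# token inherits root@pam's permissions (the secret is printed once).
pveum user token add root@pam monitoring --privsep 0
# Show the target node's certificate fingerprints directly, instead of
# fishing the value out of a failed migration attempt:
pvenode cert info
```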
Really good feature, and it works quite well. It would be nice to be able to define the migration network. In my case, I have two separate clusters; however, the migration only works through the management interface. In my old cluster, the management interface is only 1 Gbit/s, but both clusters share the same 10 Gbit/s storage network. Unfortunately, the bottleneck for my qm remote-migrate is that the migration traffic by default goes only through the management interface and cannot be redirected. The --migration_network directive didn't work in my case.
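One hedged workaround (untested, and only if the target's API is reachable on the storage network): since the traffic flows to whatever host the endpoint names, pointing `host=` at the target node's 10 Gbit/s address may route the transfer over the fast link. The address below is an assumption.

```shell
# Endpoint "host=" set to the target's storage-network IP instead of its
# management IP; token secret and fingerprint are placeholders.
qm remote-migrate 172 172 'host=10.0.0.20,apitoken=PVEAPIToken=root@pam!monitoring=<secret>,fingerprint=<fp>' --target-bridge vmbr0 --target-storage local-lvm5 --online
```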

