How to migrate VM from one PVE cluster to another

Hi,

does your token have permissions to access the storage?

EDIT: also, there should be upgrades available, 7.4-18 is not the latest PVE 7 version.
The source cluster is our production cluster (50+ hosts), so it's not that easy to upgrade to the latest 7.4, I'm afraid.

Errr... the token does not have the correct permissions :-/
Fixed that (added full access for the token) and tried again, but it fails with a different error:
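For reference, "full access" can be granted with pveum; the token ID root@pam!migrate is taken from the command below, and the Administrator role on / is my assumption (scope the ACL more narrowly where possible):

# grant the Administrator role on the root path to the migration API token (assumed role/path)
pveum acl modify / --roles Administrator --tokens 'root@pam!migrate'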

# qm remote-migrate 101 1111 'apitoken=PVEAPIToken=root@pam!migrate=<token>,host=10.x.y.z,fingerprint=<fingerprint>' --target-bridge vmbr1 --target-storage TN05_NFS001
Establishing API connection with remote at '10.x.y.z'
2025-05-28 10:32:10 remote: started tunnel worker 'UPID:krk-svt-prox035:002D513B:048ED5D3:6836CA0A:qmtunnel:1111:root@pam!migrate:'
tunnel: -> sending command "version" to remote
tunnel: <- got reply
2025-05-28 10:32:10 local WS tunnel version: 2
2025-05-28 10:32:10 remote WS tunnel version: 2
2025-05-28 10:32:10 minimum required WS tunnel version: 2
websocket tunnel started
2025-05-28 10:32:10 starting migration of VM 101 to node 'krk-svt-prox035' (10.x.y.z)
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
2025-05-28 10:32:10 found local disk 'FN03_NFS002:101/vm-101-disk-1.qcow2' (via storage)
2025-05-28 10:32:10 found local disk 'TN05_NFS001:101/vm-101-disk-0.qcow2' (in current VM config)
2025-05-28 10:32:10 copying local disk images
tunnel: -> sending command "disk-import" to remote
tunnel: <- got reply
2025-05-28 10:32:10 ERROR: error - tunnel command '{"migration_snapshot":"","cmd":"disk-import","volname":"vm-1111-disk-1.qcow2","format":"qcow2","export_formats":"qcow2+size","snapshot":null,"allow_rename":"1","storage":"TN05_NFS001","with_snapshots":1}' failed - failed to handle 'disk-import' command - 400 Parameter verification failed.
2025-05-28 10:32:10 ERROR: migration_snapshot: type check ('boolean') failed - got ''
2025-05-28 10:32:10 aborting phase 1 - cleanup resources
tunnel: -> sending command "quit" to remote
tunnel: <- got reply
2025-05-28 10:32:11 ERROR: migration aborted (duration 00:00:02): error - tunnel command '{"migration_snapshot":"","cmd":"disk-import","volname":"vm-1111-disk-1.qcow2","format":"qcow2","export_formats":"qcow2+size","snapshot":null,"allow_rename":"1","storage":"TN05_NFS001","with_snapshots":1}' failed - failed to handle 'disk-import' command - 400 Parameter verification failed.
2025-05-28 10:32:11 ERROR: migration_snapshot: type check ('boolean') failed - got ''
migration aborted
 
You need to update, else you are lacking the fixes for exactly the issue you are running into.
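For a minor update within PVE 7, the standard apt flow on each node should be enough (a sketch, assuming the usual Proxmox package repositories are configured):

# refresh the package lists and apply all pending updates on this node
apt update
apt full-upgrade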
 
Is there a timeline for when this (qm remote-migrate) will move from experimental to fully supported, or at least beta? I have been using it and didn't realize it was still experimental.
 
Update: I upgraded my PVE 7 cluster to the latest version (7.4-20) and was able to migrate.
:cool:

Note: the source VM stays locked in the 'migrating' state even after the migration is done.
I have to manually unlock it (qm unlock <vm id>).
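For anyone hitting the same thing, checking for and clearing the stale lock looks like this (sketch; the VMID is just the example from earlier in the thread):

# show whether the source VM still carries a lock after the migration
qm config 101 | grep '^lock:'
# clear the stale lock so the source VM can be managed (or removed) again
qm unlock 101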
 
It looks like migrating VMs that use shared storage from cluster X (PVE 7) to cluster Y (PVE 8) works differently than a migration done within the same cluster:
Instead of simply detaching the disk from the source VM and attaching it to the destination VM (so without ever moving or changing the disk image), the migration creates a clone of the disk image.
This does work, but is obviously a lot slower.
Note that both the source and destination node have full access to the same NFS share.

Is this expected?

I used this command:
qm remote-migrate 383 383 'apitoken=PVEAPIToken=root@pam!migrate=<my token>,host=10.X.Y.Z,fingerprint=<fingerprint>' --target-bridge vmbr1 --target-storage FN02_NFS003
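For contrast, this is what a within-cluster migration looks like, where shared storage is reused instead of copied (a sketch; the target node name is a placeholder):

# within the same cluster, shared storage means only the VM state moves;
# the disk image on the NFS share is simply re-referenced by the target node
qm migrate 383 <target-node> --online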
 

Yes - you should never give two clusters access to the same shared storage; that is very dangerous! Each cluster will assume it has full ownership of all the files there, which might lead to the deletion of files actually used by the other cluster. Since we have to treat shared storages like that, we also cannot assume that we can reuse anything for a remote migration.
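Following that advice, once the migration is done and nothing on the source cluster uses the share anymore, the definition can be detached there (my own sketch, not advice from this thread; pvesm remove only drops the storage configuration entry, the data on the NFS server is untouched):

# remove the storage *definition* from the source cluster's config (placeholder ID);
# the files on the NFS export itself are not deleted
pvesm remove <storage-id>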
 