Migration between clusters

nick-a
Jun 28, 2023
Has anyone successfully done this yet using the new built-in mechanism? If so, would you be willing to share your commands and experience, please?
 
The command is described in https://pve.proxmox.com/pve-docs/qm.1.html:
qm remote-migrate <vmid> [<target-vmid>] <target-endpoint> --target-bridge <string> --target-storage <string> [OPTIONS]
Migrate virtual machine to a remote cluster. Creates a new migration task. EXPERIMENTAL feature!
<vmid>: <integer> (100 - 999999999)
The (unique) ID of the VM.
<target-vmid>: <integer> (100 - 999999999)
The (unique) ID of the VM.
<target-endpoint>: apitoken=<A full Proxmox API token including the secret value.>,host=<Remote Proxmox hostname or IP> [,fingerprint=<Remote host's certificate fingerprint, if not trusted by system store.>] [,port=<integer>]
Remote target endpoint
--bwlimit <integer> (0 - N) (default = migrate limit from datacenter or storage config)
Override I/O bandwidth limit (in KiB/s).
--delete <boolean> (default = 0)
Delete the original VM and related data after successful migration. By default the original VM is kept on the source cluster in a stopped state.
--online <boolean>
Use online/live migration if VM is running. Ignored if VM is stopped.
--target-bridge <string>
Mapping from source to target bridges. Providing only a single bridge ID maps all source bridges to that bridge. Providing the special value 1 will map each source bridge to itself.
--target-storage <string>
Mapping from source to target storages. Providing only a single storage ID maps all source storages to that storage. Providing the special value 1 will map each source storage to itself.
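To make the syntax above concrete, here is a sketch of what an invocation could look like. All values (VM IDs, host address, token, fingerprint, bridge and storage names) are placeholders, not tested output — substitute your own. Per the docs, giving a single bridge/storage ID maps all source bridges/storages to it. This cannot be run outside a Proxmox node:

```shell
# Hypothetical example: migrate VM 100 to VM ID 100 on a remote cluster.
# Token, host, and fingerprint below are placeholders.
qm remote-migrate 100 100 \
  'apitoken=PVEAPIToken=root@pam!migrate=<secret-uuid>,host=192.0.2.10,fingerprint=<remote-cert-fingerprint>' \
  --target-bridge vmbr0 \
  --target-storage mystorage \
  --online 1 \
  --delete 0   # keep the source VM (stopped) after migration
```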

In the case of ZFS it makes use of replication, like normal migration, so it won't work when using ZFS native encryption :(
 
Thank you. I had seen that, but I was wondering whether anyone has actually tried it. I'm also not sure of the exact string syntax for Ceph storage, bridges, etc.