Framework for remote migration to cluster-external Proxmox VE hosts

Maksimus

Member
In the description of the changes in 7.3 there is an item "Framework for remote migration to cluster-external Proxmox VE hosts". Could you tell me how to use it? We currently use proxmove from ossobv, but it is a very inconvenient tool.
 
There is some documentation for it under "man qm".

The API token part is a bit unclear, but here's an example:
Code:
qm remote-migrate 112 116 apitoken='Authorization: PVEAPIToken=root@pam!token1=your-api-key-goes-here',host=target.host.tld,port=443 --target-bridge vmbr0 --target-storage local-zfs --online

112 is the source VMID.
116 is the target VMID on the target host.
For the apitoken part you'll have to create a new API token and use your new token ID instead of "root@pam!token1" and its secret instead of "your-api-key-goes-here" (see the sketch below for creating one).
If you want, you can connect the VM to another bridge; the same goes for storage.
--online is only needed if you want to do a live migration.
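If you don't have a token yet, you can create one on the target with pveum. A minimal sketch, assuming root@pam and an arbitrary token ID "token1", with privilege separation disabled so the token inherits root's permissions:
Code:
# on the target node: create an API token; the secret is printed only once
pveum user token add root@pam token1 --privsep 0
The printed secret is what goes after "token1=" in the apitoken value above.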

Hope this helps
 

As for me, I get this error:

remote: storage 'local-zfs' does not exist!

However, local-zfs does exist on both clusters.
 
Cluster 1:

Code:
sd1012:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,backup,iso
        prune-backups keep-last=1
        shared 0

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

zfspool: velociraptor
        pool velociraptor
        content images,rootdir
        mountpoint /velociraptor
        sparse 0

pbs: pbs
        datastore datastore01
        server 10.x.x.x
        content backup
        fingerprint xxxxxxxxxx
        prune-backups keep-all=1
        username sd1012@pbs

Cluster 2:

Code:
sd1013:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

pbs: pbs
        datastore datastore01
        server 10.x.x.x
        content backup
        fingerprint xxxxxxxxxxxx
        prune-backups keep-all=1
        username sd1013@pbs
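One thing worth checking in this situation is whether the API token used for the migration can actually see the target storage, since the storage list returned by the API is filtered by the token's permissions. A quick check, assuming the default API port 8006 and substituting your own host, token and secret:
Code:
# ask the target for its storage list, authenticated with the migration token
curl -k -H 'Authorization: PVEAPIToken=root@pam!token1=<secret>' \
  https://target.host.tld:8006/api2/json/storage
If local-zfs is missing from the output, the token most likely lacks permissions on that storage.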
 
Node 1: 192.168.1.231, storages "local" / "localzfs"
1. add API token
2. pvenode cert info (see the sketch below for reading the fingerprint)
3. run VM 101, Windows 10 (on "local")
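For step 2, the fingerprint used in the fingerprint=... part of the endpoint string can be read on the target node:
Code:
# on the target node: print certificate details, including the SHA-256 fingerprint
pvenode cert info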

test-1: fail (localzfs not supported)
qm remote-migrate 101 102 'host=192.168.1.231,apitoken=PVEAPIToken=root@pam!root=xxxx,fingerprint=yyyyyy' --target-bridge vmbr0 --target-storage localzfs --online

test-2 ok
qm remote-migrate 101 102 'host=192.168.1.231,apitoken=PVEAPIToken=root@pam!root=xxxx,fingerprint=yyyyyy' --target-bridge vmbr0 --target-storage local --online



Node 2: 192.168.1.232, storage "local-btrfs"

test-3: fail (local-btrfs not supported)

Run on node 1:
qm remote-migrate 101 102 'host=192.168.1.232,apitoken=PVEAPIToken=root@pam!root=xxxx,fingerprint=yyyyyy' --target-bridge vmbr0 --target-storage local-btrfs --online


test-4: ok (added a directory storage "disk2" on the target and used that)

Run on node 1:
qm remote-migrate 101 102 'host=192.168.1.232,apitoken=PVEAPIToken=root@pam!root=xxxx,fingerprint=yyyyyy' --target-bridge vmbr0 --target-storage disk2 --online
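If a target storage is rejected, it can help to list the storages configured on the target to confirm the exact names and types. Run on the target node:
Code:
# list configured storages with their type, status and free space
pvesm status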
 
Hi,
I am confused.
Why are you migrating to host=192.168.1.232 from the same host ("192.168.1.232, local-btrfs")?

Isn't that supposed to be 192.168.1.231, since you are migrating to a remote host from the source 192.168.1.232?
 
test-1 and test-2: 192.168.1.231 migrating to 192.168.1.231.

test-3 and test-4: 192.168.1.231 migrating to 192.168.1.232.
The command is executed on 192.168.1.231 (node 1):

qm remote-migrate 101 102 'host=192.168.1.232,apitoken=PVEAPIToken=root@pam!root=xxxx,fingerprint=yyyyyy' --target-bridge vmbr0 --target-storage disk2 --online
 
Hi,
just a side note:

After you add the API token, assign the required API permissions to it, otherwise you will get a "target storage not found" error.

(screenshot: assigning permissions to the API token)
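On the command line this corresponds to granting the token a role on the target storage. A rough sketch, assuming a token root@pam!root and the built-in PVEDatastoreAdmin role (adjust the storage path, token ID and role to your setup):
Code:
# allow the API token to use the target storage (path, token ID and role are examples)
pveum acl modify /storage/local-btrfs --tokens 'root@pam!root' --roles PVEDatastoreAdmin
The token will typically also need VM-related privileges on the target for the migration itself.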
 
test-5: 192.168.1.231 migrating to 192.168.1.232, after adding API token permissions for local-btrfs.
The command is executed on 192.168.1.231 (node 1):

qm remote-migrate 100 100 'host=192.168.1.232,apitoken=PVEAPIToken=root@pam!root=xxxx,fingerprint=yyyyyy' --target-bridge vmbr0 --target-storage local-btrfs --online

test-5: fail
2023-01-21 21:01:30 ERROR: migration aborted (duration 00:00:01): error - tunnel command '{"format":"qcow2","export_formats":"qcow2+size","cmd":"disk-import","with_snapshots":1,"volname":"vm-100-disk-0.qcow2","storage":"local-btrfs","snapshot":null,"migration_snapshot":"","allow_rename":"1"}' failed - failed to handle 'disk-import' command - unsupported format 'qcow2' for storage type btrfs
migration aborted
====================================================================
test-6: 192.168.1.231 migrating to 192.168.1.232, with API token permissions for local-btrfs
and the VM disk converted from qcow2 to raw.
The command is executed on 192.168.1.231:

qm remote-migrate 100 100 'host=192.168.1.232,apitoken=PVEAPIToken=root@pam!root=xxxx,fingerprint=yyyyyy' --target-bridge vmbr0 --target-storage local-btrfs --online

test-6: ok
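For reference, converting a VM disk from qcow2 to raw before retrying can be done with qm move_disk. A rough sketch, assuming the disk is scsi0 and "local" is a source storage that supports the raw format:
Code:
# move (and convert) the disk to raw; --delete 1 removes the old qcow2 volume afterwards
qm move_disk 100 scsi0 local --format raw --delete 1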
 
Hello, I'm having trouble with the qm remote-migrate command syntax. Currently I'm getting a "400 not enough arguments" error.

qm remote-migrate 129 111 --target-endpoint '192.168.x.x,apitoken=pvetransfer=root@pam!pvetransfer<apikey>' --target-bridge vmbr0.10 --target-storage PROD-POC --online

I suspect my syntax for --target-endpoint is incorrect; if someone could help me with the formatting a bit, that would be great.
 
I figured that part out, but now I'm getting a 401 "No Ticket" error.
 
apitoken=pvetransfer=root@pam!pvetransfer<apikey>
Well, the part above is wrong; it needs to be:

apitoken=PVEAPIToken=root@pam!pvetransfer=<apikey>

In other words, the "PVEAPIToken" prefix must not be changed, and the secret follows the token ID after an equals sign.
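Putting that together with the command from above, the endpoint string would look roughly like this (host, storage, token ID, secret and fingerprint are placeholders for your own values; the fingerprint part is only needed if the target certificate is not already trusted):
Code:
qm remote-migrate 129 111 'host=192.168.x.x,apitoken=PVEAPIToken=root@pam!pvetransfer=<apikey>,fingerprint=<target-cert-sha256>' --target-bridge vmbr0.10 --target-storage PROD-POC --online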
 
That did the trick, thank you very much. If I run into bugs or have feedback from my testing, what is the best way to report it?
 
This forum works fine in general, as it makes it easy to ask follow-up questions and gather setup info to better understand where an issue lies.
But for very specific and reproducible issues you can also open a bug report over at our bug tracker: https://bugzilla.proxmox.com/
 
Perfect. I'm doing my first test right now, so I'll report issues if I find them. I'm one of "those" engineers who runs headlong into bugs on the regular, always looking for ways to improve products I like, and Proxmox is one of those for me.
 
One issue that I'm seeing: after the migration, the origin cluster is left with an artifact behind (see the attached screenshot). I've not looked at the files yet. Is there any data that would help determine the cause of this? I've not done any cleanup on these in case data is requested.
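To see what exactly was left behind on the source, the leftover VM config and any remaining volumes can be inspected there. A sketch, with <vmid> and <storage> as placeholders for the migrated VM and its old source storage:
Code:
# on the source node: show the leftover VM config, if it still exists
cat /etc/pve/qemu-server/<vmid>.conf
# list any volumes still present on the source storage for that VMID
pvesm list <storage> --vmid <vmid>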
 

Attachment: proxmoxartifact.png
