It has some documentation under "man qm".
The API token part is a bit unclear, but here's an example:
Code:
qm remote-migrate 112 116 apitoken='Authorization: PVEAPIToken=root@pam!token1=your-api-key-goes-here',host=target.host.tld,port=443 --target-bridge vmbr0 --target-storage local-zfs --online
112 is the source VMID.
116 is the target VMID on the target host.
In the apitoken part you'll have to create a new API token and use your new token ID instead of "root@pam!token1" and its secret instead of "your-api-key-goes-here".
If you want, you can connect the VM to another bridge; the same goes for storage.
--online is only used if you want to do a live migration.
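In case it helps: the token is created on the target side. A minimal sketch, assuming the user root@pam and the token name token1 from the example above:
# run on the target cluster; the secret is only shown once, so note it down
pveum user token add root@pam token1 --privsep 0
# with privilege separation enabled (the default), the token gets its own ACLs
# and needs explicit permissions via "pveum acl modify"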
Hope this helps
sd1012:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,backup,iso
        prune-backups keep-last=1
        shared 0

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

zfspool: velociraptor
        pool velociraptor
        content images,rootdir
        mountpoint /velociraptor
        sparse 0

pbs: pbs
        datastore datastore01
        server 10.x.x.x
        content backup
        fingerprint xxxxxxxxxx
        prune-backups keep-all=1
        username sd1012@pbs
sd1013:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

pbs: pbs
        datastore datastore01
        server 10.x.x.x
        content backup
        fingerprint xxxxxxxxxxxx
        prune-backups keep-all=1
        username sd1013@pbs
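Side note, since sd1013 has no "velociraptor" storage: --target-storage also accepts a comma-separated list of source:target mappings instead of a single storage ID, so a migration from sd1012 could look roughly like this (a sketch with placeholder VMIDs and endpoint values, assuming the usual storage-pair syntax):
qm remote-migrate <vmid> <target-vmid> 'host=<sd1013-address>,apitoken=PVEAPIToken=<user>@<realm>!<tokenid>=<secret>,fingerprint=<sha256>' --target-bridge vmbr0 --target-storage velociraptor:local-zfs,local-zfs:local-zfs --online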
Hi,
Node 1: 192.168.1.231 (local / localzfs)
1. add API token
2. pvenode cert info
3. run VM 101, Windows 10 (local)
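Regarding step 2: the fingerprint=... value is the SHA-256 certificate fingerprint that "pvenode cert info" prints on the target node. A sketch of reading it directly with openssl, assuming the default self-signed certificate path (with a custom certificate, use /etc/pve/local/pveproxy-ssl.pem instead):
openssl x509 -noout -fingerprint -sha256 -in /etc/pve/local/pve-ssl.pem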
test-1: fail ----- localzfs not supported
qm remote-migrate 101 102 'host=192.168.1.231,apitoken=PVEAPIToken=root@pam!root=xxxx,fingerprint=yyyyyy' --target-bridge vmbr0 --target-storage localzfs --online
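When the target storage is rejected like that, it can be worth double-checking which storage IDs actually exist on the target and allow VM images; this is only a diagnostic sketch (the ID there may be "local-zfs" rather than "localzfs"):
pvesm status --content images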
test-2: ok
qm remote-migrate 101 102 'host=192.168.1.231,apitoken=PVEAPIToken=root@pam!root=xxxx,fingerprint=yyyyyy' --target-bridge vmbr0 --target-storage local --online
Node 2: 192.168.1.232 (local-btrfs)
test-3: fail ----- localzfs not supported
node1 cmd:
qm remote-migrate 101 102 'host=192.168.1.232,apitoken=PVEAPIToken=root@pam!root=xxxx,fingerprint=yyyyyy' --target-bridge vmbr0 --target-storage local-btrfs --online
test-4: ok ----- added a Directory storage (disk2)
node1 cmd:
qm remote-migrate 101 102 'host=192.168.1.232,apitoken=PVEAPIToken=root@pam!root=xxxx,fingerprint=yyyyyy' --target-bridge vmbr0 --target-storage disk2 --online
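For reference, the extra Directory storage used in test-4 can be created on the target node with something like this (a sketch; "disk2" as the storage ID comes from the command above, the mount path is an assumption):
pvesm add dir disk2 --path /mnt/disk2 --content images,rootdir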
test-1, test-2: 192.168.1.231 migrating -----> 192.168.1.231

Hi,
I am confused..
Why are you migrating to host=192.168.1.232 from the same host "192.168.1.232 (local-btrfs)"?
Isn't that supposed to be 192.168.1.231, since you are migrating to the remote from source 192.168.1.232?
test-1, test-2: 192.168.1.231 migrating -----> 192.168.1.231
test-3, test-4: 192.168.1.231 migrating -----> 192.168.1.232
Execute the following command on 192.168.1.231:
node1 cmd:
qm remote-migrate 101 102 'host=192.168.1.232,apitoken=PVEAPIToken=root@pam!root=xxxx,fingerprint=yyyyyy' --target-bridge vmbr0 --target-storage disk2 --online
test-5: 192.168.1.231 migrating -----> 192.168.1.232, added API token permissions, local-btrfs

Hi,
Just a side note..
After you add the API token, assign the API permissions as required, or else you will get a "Target Storage not found" error.
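For example, for a privilege-separated token an ACL on the storage path avoids that error; this is just a sketch (token and storage IDs are placeholders, and the migration itself also needs VM-related privileges, or simply a broader role on /):
pveum acl modify /storage/<target-storage-id> --tokens '<user>@<realm>!<tokenid>' --roles PVEDatastoreAdmin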
I figured out that part, but now I'm getting a 401 error for "No Ticket".

Hello, I'm having trouble with the qm remote-migrate command syntax. Currently I'm getting a "400 not enough arguments" error.
qm remote-migrate 129 111 --target-endpoint '192.168.x.x,apitoken=pvetransfer=root@pam!pvetransfer<apikey>' --target-bridge vmbr0.10 --target-storage PROD-POC --online
I suspect that I've got my syntax for --target-endpoint incorrect, but if someone can help me with the formatting a bit, that would be great.
Well,
apitoken=pvetransfer=root@pam!pvetransfer<apikey>
the above part is wrong; it needs to be:
apitoken=PVEAPIToken=root@pam!pvetransfer=<apikey>
"PVEAPIToken" must not be changed.

One issue that I'm seeing: after the migration, the origin cluster is leaving an artifact behind. I've not looked at the files as of yet. Is there any data that would help determine the cause of this? I've not done any cleanup or anything like that on these in case data is requested.

This forum works fine in general, as it is quite comfortable to ask follow-up questions and for setup info to better understand where an issue lies.
But for very specific and reproducible issues, you might even open a bug report over at our bug tracker: https://bugzilla.proxmox.com/