Ironically, it works for live migration, just not for offline migration. In any case, I just managed to move a full cluster over a frustratingly slow WAN link; awesome feature! It took a week, but there were zero hiccups, apart from having to comment out the checks that prevent migration if the VM was based on a snapshot of an image (which presumably just meant more disk usage on the target cluster, but otherwise appears to have left the disks totally fine on the other side).
Thanks for the update. I am using Proxmox in my homelab to get away from some non-genuine VMware keys and loving PVE so far. I am also at an org that is very close to going with Proxmox or Xen and has two large data centers. The only concerns they had were native support for iSCSI and single-pane-of-glass multi-cluster management, though the latter isn't really the need; the need is cross-cluster migration. Once that makes it into the enterprise release and can be tested, our org will likely make the jump over.
the cross cluster ("remote") migration feature exists, but it is still marked as experimental.
Hello, fabian! Is it still experimental today?
It is still marked as experimental in the man page of qm: https://pve.proxmox.com/pve-docs/qm.1.html (see remote-migrate).
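The same text is also available locally on any node (a quick check, assuming a standard installation):
# man qm
# qm help remote-migrate --verbose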
Cool! I'll give it a try as soon as I have some spare time!
old thread, but fyi: it's included as a preview/experimental feature, see the commands 'pct remote_migrate' and 'qm remote_migrate' (a full example invocation follows the list below). you'll need an API token with the relevant privileges (the command will error out if you are missing them) and I would strongly suggest playing around with it in a test lab setting before letting it near a production environment - as I said, it's a preview/experimental feature and might still have bugs and rough edges.
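as a rough sketch, creating such a token on the target side could look like this (the token name 'migrate' is a placeholder; with --privsep 0 the token simply inherits the user's permissions, while a privilege-separated token would need explicit grants):
# pveum user token add root@pam migrate --privsep 0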
among the things not yet supported are:
- snapshots (this requires some refactoring of our privilege checks, nothing else blocking it since we re-use the same storage migration code)
- pending changes (this requires some refactoring of our privilege checks, nothing else blocking it)
- replication (this one is a bit of a bigger feature, but definitely planned)
- non-dir based shared storages for offline/container migration (this one just lacks some implemented functions in the storage plugins and should be the easiest of them all to implement)
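putting that together, a sketch of a full invocation (VMIDs, host, token secret, fingerprint, and target names are all placeholders; --online is only needed for a running guest):
# qm remote-migrate 100 100 'host=203.0.113.10,apitoken=PVEAPIToken=root@pam!migrate=<secret>,fingerprint=<target cert fingerprint>' --target-bridge vmbr0 --target-storage local-lvm --online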
could you post the exact versions on both ends and share the migration task log? we've recently backported a few compat fixes, maybe you were still missing those?
I have successfully used the 'qm remote_migrate' tool between Proxmox 7.4 clusters, but it seems to fail when migrating a VM from a 7.4 to an 8.4 cluster.
I wouldn't be surprised if this is expected behavior (since these are different major releases), but this might be interesting to someone.
Using backup-restore (via a shared storage) between 7.4 and 8.4 works as expected.
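For reference, that backup-restore path boils down to roughly this (assuming the shared NFS storage is enabled for backup content; the archive name is just illustrative):
on the source cluster:
# vzdump 101 --storage FN02_NFS003 --mode snapshot
on the target cluster, restoring under a new VMID:
# qmrestore /mnt/pve/FN02_NFS003/dump/vzdump-qemu-101-<timestamp>.vma.zst 1111 --storage local-lvm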
does your token have permissions to access the storage?
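for context: a privilege-separated token has no permissions of its own, so storage access would need an explicit grant, e.g. something like (the role choice is just an example):
# pveum acl modify /storage/local-lvm --tokens 'root@pam!migrate' --roles PVEDatastoreAdmin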
sure: 7.4-18 -> 8.4.1
This is the error:
# qm remote-migrate 101 1111 'apitoken=PVEAPIToken=root@pam!migrate=<token of target node>,host=10.X.Y.Z,fingerprint=<fingerprint>' --target-bridge vmbr1 --target-storage local-lvm
remote: storage 'local-lvm' does not exist!
I tried with other options and other storage locations, but I keep getting the same 'storage does not exist' message.
on the target host:
# pvesm status
Name            Type     Status         Total         Used    Available        %
FN02_NFS003     nfs      active    3770391552    306011392   3464380160    8.12%
FN02_NFS004     nfs      active    3770412672   2513310592   1257102080   66.66%
FN02_NFS006-RO  nfs      active    3770667520    388459904   3382207616   10.30%
TN05_NFS001     nfs      active    1127268352    245147648    882120704   21.75%
local           dir      active      98497780      4148848     89299384    4.21%
local-lvm       lvmthin  active     448917504            0    448917504    0.00%
7.4-18 is not the latest PVE 7 version. could you post the output of 'pveversion -v' from both sides? thanks!
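updating the source node to the latest 7.4 point release before retrying might also be worth a shot, roughly (assuming the usual package repositories are configured):
# apt update
# apt full-upgrade
# pveversion -v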