[SOLVED] Migration between two clusters - no matching format found

Nov 3, 2021
I have been trying to work with your new qm remote-migrate command.
The command I have been using is as follows:
qm remote-migrate 102 102 'apitoken=PVEAPIToken=<mytoken information>,host=target.server.com' --target-storage local-zfs --target-bridge vmbr3
It connects fine, but when it comes time to start the migration, we get:
ERROR: migration aborted (duration 00:00:03): error - tunnel command '{"export_formats":"raw+size","snapshot":null,"storage":"local-zfs","cmd":"disk-import","migration_snapshot":"","with_snapshots":0,"volname":"vm-102-disk-0","format":"raw","allow_rename":"1"}' failed - failed to handle 'disk-import' command - no matching import/export format found for storage 'local-zfs'.

I am not sure how to proceed. There is nothing wrong with the local-zfs; we have also tried other storages on the new cluster. There are already raw images in there, and the source is in raw format as well. I do not see any documentation on where I should be configuring our system, or what to add to the above command to force the raw format.
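A rough way to double-check the target side (the storage names here are just the ones from this thread, and --target-storage can also take a source:target mapping if the IDs differ between clusters):

# list the storages known to the target node and whether they are active
pvesm status
# the cluster-wide storage definitions live here
cat /etc/pve/storage.cfg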

The old cluster it is coming from is running 7.3-1, pve-manager/7.3-3/c3928077 (running kernel: 5.15.74-1-pve). The new cluster was installed from the latest 7.3 ISO, plus any updates that came after we installed our subscriptions.
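For reference, a minimal sketch of how the API token and endpoint string are usually put together, assuming a token called 'migrate' on root@pam; the fingerprint part is only needed if the target's certificate is not already trusted:

# on the target cluster: create a token for the migration (privilege separation disabled here)
pveum user token add root@pam migrate --privsep 0
# on the source cluster: pass the full token, host and (optionally) fingerprint as one quoted endpoint
qm remote-migrate 102 102 \
  'apitoken=PVEAPIToken=root@pam!migrate=<secret-uuid>,host=target.server.com,fingerprint=<sha256-fingerprint>' \
  --target-storage local-zfs --target-bridge vmbr3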
 
Is it a one-time activity? When I migrated my VMs/LXCs, I did backup-scp-restore. Wouldn't that work for you?
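Roughly what that looks like, as a sketch (the VM ID, storage names and dump path are placeholders):

# on the old cluster: back up the VM
vzdump 102 --storage local --mode snapshot --compress zstd
# copy the archive over to the new cluster
scp /var/lib/vz/dump/vzdump-qemu-102-*.vma.zst root@newcluster:/var/lib/vz/dump/
# on the new cluster: restore it onto the desired storage
qmrestore /var/lib/vz/dump/vzdump-qemu-102-<timestamp>.vma.zst 102 --storage local-zfs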
 
This is a one-time event. We are decommissioning the old cluster. Our backup server can handle the storage, but we have a time constraint. Using remote-migrate made the most sense, even though it is still listed as experimental. Both the old cluster and the new one use 10 Gb connections for Ceph, while the backup server only has a 1 Gb connection. We are looking at all options, but getting this working would be ideal.
 
Hi,
Is the source VM's storage also ZFS? Unfortunately, offline disk migration to or from ZFS is currently only implemented when both source and target are ZFS. As a workaround, you can try to migrate the VM while it is running (assuming the disk is attached to the VM).
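A minimal sketch of that workaround, assuming the same endpoint and target options as in the first post; --online asks remote-migrate for a live migration while the VM keeps running:

# with the VM started, request an online/live migration
qm remote-migrate 102 102 \
  'apitoken=PVEAPIToken=<user>@<realm>!<tokenid>=<secret>,host=target.server.com' \
  --target-storage local-zfs --target-bridge vmbr3 --online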
 
Thank you, I went the backup-and-restore route. To answer your question: I was going from Ceph to Ceph at first, which gave the same error, before I tried to go from Ceph to local-zfs.
 
Yes, our Ceph plugin, or to be specific the RBD plugin, currently doesn't implement export/import, because within a single cluster the storage is shared. But again, it should work when doing an online migration.
 
Hmm, another qm remote-migrate question.

We are getting the response below when trying to live-migrate, even though the thin pool and the VG both exist with that name on the receiving node.

remote: storage 'raid10ssd' does not exist!

Any idea why? We added a new storage pool, then created a thin/logical volume, ticking 'Add Storage', which looks to have created the VG too.

They are both (VG and LV) named 'raid10ssd', and slashes are illegal characters, so I can't nest them as in raid10ssd/raid10ssd. Any ideas?
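For what it's worth, the storage ID that remote-migrate checks is the entry in /etc/pve/storage.cfg on the target cluster, not the VG or LV name directly; an lvmthin definition created via 'Add Storage' should look roughly like this (names taken from this thread):

# /etc/pve/storage.cfg on the target cluster
lvmthin: raid10ssd
        thinpool raid10ssd
        vgname raid10ssd
        content rootdir,images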

EDIT: Fixed my own problem. I had enabled Privilege Separation for the root-level API token but hadn't assigned it any permissions. Adding the ACL fixed it!
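In concrete terms, the missing piece was an ACL for the token itself; something along these lines, where the token name and role are just an illustration:

# grant the (privilege-separated) API token its own permissions
pveum acl modify / --tokens 'root@pam!migrate' --roles Administrator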
 
Hi,
Glad you were able to fix it! But surely the current error message is not ideal, so I sent a patch to improve it: https://lists.proxmox.com/pipermail/pve-devel/2023-June/057388.html
 