[SOLVED] CephFS: can we somehow force-allow VM images for qm remote-migrate purposes?

Scenario:

We want to move VMs from shared SAS storage on CLUSTER1 to RBD on CLUSTER2.

The VM images on CLUSTER1 are qcow2, and I get an error saying RBD is not supported.

My only solution is to use a temporary file system from Ceph, move the VMs there in step 1, and switch them over to RBD in step 2.

But Proxmox does not allow me to enable VM images on CephFS; even when I forced it in /etc/pve/storage.cfg, I get the same error saying the CephFS target does not support images.
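Roughly the kind of entry I mean (storage ID and path are just examples, not my real config) - with "images" added to the content line, which PVE still rejects for the cephfs storage type:

Code:
cephfs: cephfs-pool
        path /mnt/pve/cephfs-pool
        content backup,iso,vztmpl,images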

Based on my research, back in 2015 people were able to run VMs with qcow2 on CephFS.

Thanks for any advice.
 
Hi, sorry, I was referring to qm-remote-migrate.

We can't go from a qcow2 file to an RBD block device with qm remote-migrate; it always fails as unsupported, even though it works when moving a disk within the same cluster.

So I need to send the qcow2 to another filesystem, and VM disk images seem to be disabled on CephFS, which is the only storage type apart from the RBD pool in this new cluster...
 
The easiest way is Backup and Restore.
You can also add the RBD storage to the old cluster and live-migrate the disks.
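If you go the Backup and Restore route, a rough sketch (VMID, storage names and archive path are placeholders, not from this thread) would be:

Code:
# old cluster: back up the VM to a backup storage
vzdump 100 --storage backup-store --mode snapshot --compress zstd
# new cluster: restore the copied archive directly onto the RBD storage
qmrestore /path/to/vzdump-qemu-100.vma.zst 100 --storage rbd-pool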
 
You can define a directory storage that actually sits on top of CephFS - but we plan on extending the volume export/import code to also support RBD storages.
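A minimal sketch of such a directory storage (assuming the CephFS is already mounted under /mnt/pve/cephfs; storage ID and sub-path are just examples):

Code:
# /etc/pve/storage.cfg
dir: cephfs-dir
        path /mnt/pve/cephfs/vm-images
        content images
        shared 1

# or the same thing via the CLI
pvesm add dir cephfs-dir --path /mnt/pve/cephfs/vm-images --content images --shared 1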
 
Worked like a charm, thanks.
Question: is it normal that each successfully live-migrated VM leaves the old one locked with a "migrate" lock?

This happened on each of my tests, and the migration task reported OK (no errors).
 
Yes, the feature is still experimental, so this is kind of a safeguard in case there are still bugs lurking. You can bypass it by setting "--delete"; then the source VM will be cleaned up after a successful migration.
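For reference, a full call with that flag might look something like this (VMID, API token, host, fingerprint, bridge and storage names are all placeholders):

Code:
qm remote-migrate 100 100 \
  'apitoken=PVEAPIToken=root@pam!migrate=<secret>,host=<target-host>,fingerprint=<target-fingerprint>' \
  --target-bridge vmbr0 --target-storage rbd-pool --online --delete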
 
I use a workaround. Instead of doing a direct qm/pct remote-migrate to the Ceph storage, copy the VM to another storage (like local storage), then migrate from local to Ceph (just remember to select the option to destroy the virtual disk on the local storage in the web UI, or you will have to remove it from the terminal, as it has the same VMID as an existing VM/CT).

Also, since remote-migrate doesn't remove the VM/CT from the origin by default, it should be safe to go via a local storage. Just check that you have enough space on that storage for the virtual disk and you'll be good to go.
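In command form, that two-step route could be sketched like this (VMID, disk name, endpoint and storage names are placeholders; the second step is the CLI equivalent of "Move disk" with "Delete source" ticked in the web UI):

Code:
# step 1: remote-migrate onto a plain directory/local storage on the new cluster
qm remote-migrate 100 100 '<target-endpoint>' --target-bridge vmbr0 --target-storage local
# step 2: on the new cluster, move the disk onto the Ceph RBD storage and drop the local copy
qm move-disk 100 scsi0 rbd-pool --delete 1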

Sorry to reply to a year-old, solved post, but I would like to share my solution.
 
