I've provisioned one Proxmox host with Ceph that has 2 SSDs for OSDs. Ceph is configured and running, and I've created a Ceph pool called "ceph-dev". When I attempt to move a VM from local storage to Ceph, I get a lock error:
storage migration failed: error with cfs lock 'storage-ceph-dev': rbd...
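In case it helps anyone hitting the same error: "cfs lock" failures like this can come from a stale cluster lock left behind by an interrupted task, or from the node not reaching the pool at all. A minimal sketch of what to check, assuming the storage is really named ceph-dev; the lock directory is where pmxcfs keeps its file-based locks:

    # confirm the storage is online and the node can reach the pool
    pvesm status
    rbd -p ceph-dev ls

    # pmxcfs keeps its file-based locks here; a stale
    # 'storage-ceph-dev' entry left by an aborted task can be removed,
    # but only once you are sure no other task is using that storage
    ls /etc/pve/priv/lock/

If the rbd listing itself fails, the lock error is only a symptom and the Ceph connection (keyring, monitor addresses) is the real problem.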
We have a setup of ZFS over iSCSI using LIO on Ubuntu 18, and we have an issue with high IO load when we move disks bigger than 100 GB:
once the move starts, the load stays low until about half of the transfer is done, and then it gets extremely high.
Our setup is very high end ...
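Not a root-cause fix, but one thing worth trying while debugging: cap the move bandwidth so the target side can keep up. A sketch, assuming PVE 6.x where both a per-job and a datacenter-wide limit exist; the 51200 KiB/s (~50 MiB/s) value is only an example:

    # limit a single move job (value is in KiB/s)
    qm move_disk 100 scsi0 target-storage --bwlimit 51200

    # or set a datacenter-wide default for all move jobs
    # in /etc/pve/datacenter.cfg:
    #   bwlimit: move=51200

The half-way pattern described above is consistent with dirty write data buffering up on the target and then being flushed synchronously, which a bandwidth cap can smooth out.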
my VMs use an EFI disk (in addition to the standard disk).
Now I want to move all disks to another storage.
There's no issue with the standard disk.
However, the option to move the EFI disk is not available in the WebUI.
This means I need to drop the EFI disk and re-create it on the new storage.
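For reference, that drop-and-re-create workaround is two commands on the CLI. A sketch, assuming VM 100 and a target storage named new-storage (both placeholders); note that re-creating the EFI disk resets the stored UEFI variables, so you may have to re-select the boot entry afterwards:

    # note where the current EFI disk lives
    qm config 100 | grep efidisk

    # detach and delete the old EFI disk, then re-create it on the
    # new storage (PVE allocates the small vars volume itself)
    qm set 100 --delete efidisk0
    qm set 100 --efidisk0 new-storage:1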
I'm playing with PVE 6 and have two storages: one NFS and another Ceph (external, not managed by PVE). Both are working fine, and I can move a disk from Ceph to NFS without any issue. But not the other way around: from NFS to Ceph, the transfer starts and then hangs around 3% indefinitely...
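When a move toward external Ceph stalls like this, the RBD write path from the node is often the culprit rather than the NFS source; a stall at a fixed low percentage is a classic symptom of an MTU mismatch between the node and the Ceph public network. A quick sanity check, assuming the storage maps to a pool here called rbd-pool (hypothetical name, substitute your own):

    # can the node list the pool at all?
    rbd -p rbd-pool ls

    # create, write to, and remove a small test image to exercise
    # the same write path the migration uses
    rbd create rbd-pool/migtest --size 1024
    rbd bench --io-type write --io-total 100M rbd-pool/migtest
    rbd rm rbd-pool/migtest

If the bench hangs the same way the migration does, the problem is the node-to-Ceph network, not PVE's move-disk code.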
On one Proxmox node I can select local as the target when moving a disk; the second node offers local-lvm only.
How do I get the disk or VM moved to local?
Workaround: how can I import a .vmdk to local-lvm?
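Two sketches for the above. Whether a node offers local as a move target is controlled by the storage definition, so first check /etc/pve/storage.cfg: the local entry must include images in its content types and must not be restricted to other nodes. The .vmdk import is what qm importdisk is for (VM id 100 and the source path are placeholders):

    # 'local' only shows up as a move target when it can hold images:
    #   dir: local
    #       path /var/lib/vz
    #       content iso,vztmpl,images
    cat /etc/pve/storage.cfg

    # import an existing .vmdk as a new disk on local-lvm; it appears
    # as an unused disk on VM 100 and can then be attached in the GUI
    qm importdisk 100 /path/to/disk.vmdk local-lvm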
Maybe not an issue, but I couldn't easily move a cloud-init drive from local to shared storage (for live migration).
The workaround was to just remove and re-create the drive, but the "move disk" button would be just a bit easier.
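For anyone searching later: since the cloud-init drive is generated content and holds no persistent data, the remove-and-re-create workaround is one command per step. A sketch assuming VM 100 with the drive on ide2 and a shared storage named shared-storage (check qm config for your actual slot and adjust the names):

    # see where the cloud-init drive currently lives
    qm config 100 | grep cloudinit

    # drop it from local storage and re-create it on the shared one;
    # PVE regenerates the ISO from the VM's cloud-init settings
    qm set 100 --delete ide2
    qm set 100 --ide2 shared-storage:cloudinit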
I am trying to move some disks from a NAS to local storage, but sometimes I have to stop the move because it overloads the server, and retry later when activity is lower. When I do that, the ZFS disks are created and use space on the server, but they do not appear as "unused" disks on the VM and can't be deleted from the GUI...
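The usual cleanup for this is qm rescan, which walks the storages and re-attaches orphaned volumes to their VM's config as unusedN entries, making them deletable from the GUI again. A sketch, assuming VM 100 and a ZFS pool layout like rpool/data (adjust to yours):

    # re-attach orphaned volumes as unused disks on VM 100
    qm rescan --vmid 100

    # if a leftover dataset belongs to no VM config anymore, find and
    # destroy it by hand (double-check the name before destroying!)
    zfs list -r rpool/data
    # zfs destroy rpool/data/vm-100-disk-1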