I have a cluster of 3 PVE nodes with a Ceph pool shared across them for VM and container disk images. I recently upgraded from PVE 7 to 8 and from Ceph Pacific to Reef.
Specifically, I did the following:
- Update PVE to the latest v7
- Update Ceph from Pacific to Quincy as per the wiki instructions
- Update PVE to v8 as per the wiki instructions
- Update Ceph from Quincy to Reef as per the wiki instructions (roughly the sequence sketched below)
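For reference, the Quincy to Reef step boiled down to roughly this per-node sequence. I'm paraphrasing the wiki guide from memory, so the exact repo line may differ:
Code:
sed -i 's/quincy/reef/' /etc/apt/sources.list.d/ceph.list   # point apt at the Reef repo
ceph osd set noout                                          # avoid rebalancing mid-upgrade
apt update && apt full-upgrade                              # pull in the Reef packages
systemctl restart ceph-mon.target                           # restart monitors first, one node at a time
systemctl restart ceph-mgr.target
systemctl restart ceph-osd.target                           # then the OSDs, waiting for HEALTH_OK in between
ceph osd require-osd-release reef                           # once every OSD is running Reef
ceph osd unset noout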
Today I tried cloning a VM for the first time since the update and I got the following error:
Code:
qemu-img: Could not open 'zeroinit:rbd:ceph-images/vm-119-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/ceph-images.keyring': Could not open 'rbd:ceph-images/vm-119-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/ceph-images.keyring': No such file or directory
vm-119 here is the new target VM, but I have tried this with several source VMs and a few different target VM IDs. I also get the exact same error when I try to move a disk image from another storage to the ceph-images rbd storage.
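In case it narrows things down, the pieces named in the error can be sanity-checked directly. Something like the following (with the id and keyring path taken straight from the error message above) is what I'd run to confirm the keyring file exists and the admin client can reach the pool:
Code:
ls -l /etc/pve/priv/ceph/ceph-images.keyring                # does the keyring the error references exist?
rbd -p ceph-images --id admin --keyring /etc/pve/priv/ceph/ceph-images.keyring ls   # can the client list the pool?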
Creating new VMs and adding new disks to existing VMs works fine, as does migrating VMs between nodes. Cloning also worked before the update.
The Pacific to Quincy migration docs mention a possible issue with "device_health_metrics" pools, but that does not appear to be the case here.
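For completeness, this is roughly how I checked for that legacy pool (the device_health_metrics name comes from the upgrade notes; as far as I understand, Quincy renamed it to .mgr):
Code:
ceph osd pool ls detail | grep -E 'device_health_metrics|\.mgr'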
Any help with this would be really appreciated.