We had a hardware issue in a 3-node cluster with both local and Ceph storage. We had to move some VMs around, both between nodes and between storages. All VMs are running correctly, albeit on just two nodes until we get replacement hardware.
Somehow during this procedure (I probably forgot to tick "remove source" while moving disks between storages), some "orphan" disks were left on the Ceph storage whose IDs match the IDs of a couple of VMs.
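For context, the moves were done via the GUI; if I understand correctly, the CLI equivalent is something like this (VM ID, disk slot, and target storage are placeholders):

# Move a disk to another storage; without "--delete 1" the source
# image is kept, which is presumably where my orphans came from
qm move_disk 100 scsi0 ceph-pool --delete 1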
The modification timestamp (rbd info poolname/imagename) shows that they have not been modified for a couple of days. Still, I want to check the contents of the disks, so I tried to use
qm rescan --vmid VMID
in order to add the disk to the VM config. Unfortunately, it does nothing. Is that command supposed to scan Ceph storages for "missing/orphan" disks that may belong to a given VM?
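For reference, here is roughly what I ran to inspect the situation (pool name and VM ID are placeholders):

# List all RBD images in the pool and look for names matching the VM ID
rbd ls poolname

# Inspect an image's metadata, including its last-modified timestamp
rbd info poolname/vm-100-disk-1

# Confirm the image is not referenced in the VM config
qm config 100

# Rescan storages for volumes owned by VM 100; I expected the orphans
# to be added to the config as "unusedN" entries
qm rescan --vmid 100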
If it is, why isn't it adding the disk to the VM config?
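If rescan is not meant to do this, I assume the fallback is to attach the image manually, something like the following (storage ID and disk slot are hypothetical):

# Manually reference the orphaned image in the VM config so its
# contents can be checked from inside the guest
qm set 100 --scsi1 ceph-pool:vm-100-disk-1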
Thanks!