Hi,
I just found a nasty bug when using Proxmox 4.2 in a clustered setup with Ceph and a KRBD-configured storage.
With KRBD, a /dev/rbdX device entry is created on the server to gain access to the RBD image.
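For reference, you can see these kernel mappings on a node with the rbd CLI (the pool/image names below are just placeholders, not from my setup):

    # list kernel RBD mappings on this node
    rbd showmapped
    # output has roughly the columns: id  pool  image  snap  device
    # e.g. "0  rbd  vm-100-disk-1  -  /dev/rbd0" means the image is still mapped here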
When migrating a VM that uses such a volume from server A to server B, the /dev/rbdX device remains mapped on server A and is then mapped again on server B.
If you then try to delete this VM, Ceph complains that there are still watchers on the image.
That "watcher" is in fact the /dev/rbdX device that is still mapped on server A.
To be able to remove the image, you first have to unmap the device on server A.
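In case it helps others, the manual cleanup I mean looks roughly like this (pool and image names are placeholders, adjust to your storage; "rbd status" needs a reasonably recent Ceph release):

    # check who is still watching the image (run from any node with the Ceph keyring)
    rbd status rbd/vm-100-disk-1
    # on server A: unmap the stale device, either by device path or by image name
    rbd unmap /dev/rbd0
    # now the image can be removed without the "watchers" error
    rbd rm rbd/vm-100-disk-1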
To conclude: when migrating a VM between servers with KRBD images involved, Proxmox should not forget to properly unmap the device on the source node.
Regards.