Hello all. I have a problem with Proxmox 4 and removing a Ceph disk.
I saw this bug report: https://bugzilla.proxmox.com/show_bug.cgi?id=553#c1
My problem can be reproduced this way:
I created a VM on the first node of my Proxmox cluster. After creation I migrated the VM to the third node, and about five minutes later I tried to remove the VM from the cluster.
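Roughly, these are the commands that correspond to those steps (only a sketch: I actually used the GUI, the VM ID was 624, and 'node3' is a placeholder for my third node's name):
Code:
qm migrate 624 node3 --online   # live-migrate the VM from the first to the third node
qm destroy 624                  # about 5 minutes later: try to remove the VM and its disk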
Proxmox said:
Code:
Removing all snapshots: 100% complete...done.
image has watchers - not removing
Removing image: 0% complete...failed.
rbd: error: image still has watchers
TASK ERROR: rbd rm 'vm-624-disk-1' error: rbd: error: image still has watchers
That's strange; the image is still listed in the pool:
Code:
rbd -p MYPOOL ls
vm-624-disk-1
I tried to remove it from the console:
Code:
# rbd rm vm-624-disk-1 -p MYPOOL
2015-11-09 11:24:08.616860 7f4f2b28c800 -1 librbd: image has watchers - not removing
Removing image: 0% complete...failed.
rbd: error: image still has watchers
This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout.
Code:
rbd status vm-624-disk-1 -p MYPOOL
Watchers:
watcher=192.168.126.1:0/2093160405 client.2137999 cookie=10
Oh, that's very strange: my VM runs on node 192.168.126.3 (the third node), but the watcher is open on the first node.
After that I tried to start and stop the VM on the third node and then remove it, but I got the same error message.
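I wondered whether evicting the stale watcher on the Ceph side would have helped, maybe something like this (an untested guess on my part; the address is the one reported by rbd status above):
Code:
# untested idea -- evict the stale watcher reported by 'rbd status'
ceph osd blacklist add 192.168.126.1:0/2093160405
# and/or check on the first node whether a leftover kvm process still holds VM 624 open
ps aux | grep 'kvm.*624'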
In the end I migrated the VM back to the first node and removed it from there. Is this situation a bug that needs to be fixed? I don't understand how I can fix it myself.
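For reference, the final workaround was roughly this (again only a sketch; 'node1' stands for my first node, and I actually did the steps through the GUI):
Code:
qm migrate 624 node1   # move the stopped VM back to the first node, where the watcher lives
qm destroy 624         # removing it from there succeeded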