[SOLVED] CEPH Error with 5.4.5

GPLExpert

Hello,

I have two Ceph clusters and I moved one VM disk from one to the other with "Delete source" checked.

So now I have an unused disk (unused0) on the old Ceph pool.

I unset the VM's protection and tried to delete the disk.

I get:
Error with cfs lock 'storage-myceph': rbd snap purge 'vm-600-disk-2' error. Removing all snapshots: 0% complete.... failed

There are no snapshots.

Any ideas?
 
Hello,

Code:
agent: 1
boot: cdn
bootdisk: virtio0
cores: 4
cpu: kvm64,flags=+pcid
ide2: none,media=cdrom
memory: 4096
name: PC-WIN-01
net0: virtio=3A:62:33:64:34:62,bridge=vmbr107
numa: 0
ostype: win7
protection: 1
scsihw: virtio-scsi-pci
smbios1: uuid=3fc6c0aa-681e-4587-90fb-be7004c37389
sockets: 1
unused0: CEPH01:vm-600-disk-2
unused1: CEPH01:vm-600-disk-1
vga: virtio,memory=512
virtio0: CEPH02:vm-600-disk-2,size=60G
virtio1: CEPH02:vm-600-disk-1,size=40G
virtio2: CEPH02:vm-600-disk-0,size=500G
 
The unused disk entries appeared after the move.
What was the output of the move disk task? It may be that it couldn't delete the image and wrote the reason for it there.

Error with cfs lock 'storage-myceph': rbd snap purge 'vm-600-disk-2' error. Removing all snapshots: 0% complete.... failed
As @sb-jw said, are you sure all snapshots are gone? By that I mean snapshots made with the Ceph tools directly, as the info for those is not written to the vmid.conf.
Code:
rbd -p <pool> ls -l
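To check a single image for leftover snapshots (pool and image names taken from the config above, adjust to yours), something along these lines:
Code:
# list snapshots of the leftover image
rbd snap ls CEPH01/vm-600-disk-2
# 'rbd info' on a snapshot shows whether it is protected
rbd info CEPH01/vm-600-disk-2@<snapname>
A protected snapshot has to be unprotected before 'rbd snap purge' can remove it.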
 
OK, I finally found and resolved my problem.

I had created RBD snapshots directly with Ceph, so before moving a disk to another Ceph cluster, those snapshots have to be deleted first.
These snapshots are not visible in Proxmox.

Warning: don't forget to unprotect the snapshots first, otherwise you get an error.

After that I could move the disk without any problem.

Thanks Alwin for the info.
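For anyone hitting the same thing, the cleanup on the old pool was roughly this (the snapshot name is whatever 'rbd snap ls' shows; 'mysnap' is just an example):
Code:
# list snapshots of the old image
rbd snap ls CEPH01/vm-600-disk-2
# unprotect the protected snapshot first, otherwise the purge fails
rbd snap unprotect CEPH01/vm-600-disk-2@mysnap
# remove all snapshots, then the leftover image
rbd snap purge CEPH01/vm-600-disk-2
rbd rm CEPH01/vm-600-disk-2
If unprotect refuses because the snapshot still has clones, 'rbd children CEPH01/vm-600-disk-2@mysnap' lists them.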