[SOLVED] Ceph Pool listing VM and CT Disks "rbd error: rbd: listing images failed: (2) No such file or directory (500)"

Noah0302

Member
Jul 21, 2022
Hello guys,

I updated my PVE cluster yesterday to the newest Ceph version and did not notice any issues at first.
The VMs and CTs run normally; I can read from and write to the virtual disks, and even create new ones, just fine! Migrating between nodes also works, as does deleting CTs...
But listing the VM and CT disks in the GUI gives me:
Code:
rbd error: rbd: listing images failed: (2) No such file or directory (500)

List of installed packages:
Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-3
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.107-1-pve: 5.15.107-1
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph: 17.2.6-pve1
ceph-fuse: 17.2.6-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-1
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.6
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.7.0
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-2
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1

Output of rbd list:
Code:
rbd: error opening default pool 'rbd'
Ensure that the default pool has been created or specify an alternate pool name.
rbd: listing images failed: (2) No such file or directory

Output of rbd info Ceph-NVMe-Pool:
Code:
rbd: error opening default pool 'rbd'
Ensure that the default pool has been created or specify an alternate pool name.

Output of rbd ls --long -p Ceph-NVMe-Pool:
Code:
rbd: error opening vm-190040-disk-0: (2) No such file or directory
NAME                           SIZE     PARENT                                        FMT  PROT  LOCK
base-88888888-disk-0            16 GiB                                                  2        excl
base-88888888-disk-0@__base__   16 GiB                                                  2  yes    
base-99999999-disk-0            16 GiB                                                  2        excl
base-99999999-disk-0@__base__   16 GiB                                                  2  yes    
vm-1100003-disk-0                8 GiB                                                  2        excl
vm-110002-disk-0                 8 GiB                                                  2        excl
vm-110003-disk-0                 8 GiB                                                  2        excl
vm-110005-disk-0                32 GiB                                                  2        excl
vm-110030-disk-0                16 GiB                                                  2        excl
vm-110249-disk-0                32 GiB                                                  2        excl
vm-110252-disk-0                32 GiB                                                  2        excl
vm-1200003-disk-0                8 GiB                                                  2        excl
vm-120003-disk-0                 8 GiB                                                  2        excl
vm-1222003-disk-0                8 GiB                                                  2        excl
vm-190002-disk-0                16 GiB                                                  2        excl
vm-190003-disk-0                 8 GiB                                                  2        excl
vm-190019-disk-1                16 GiB                                                  2        excl
vm-190050-disk-0                 1 MiB                                                  2        excl
vm-190050-disk-1                 4 MiB                                                  2        excl
vm-190050-disk-2                64 GiB                                                  2        excl
vm-190050-disk-3               128 GiB                                                  2        excl
vm-190200-disk-0                32 GiB                                                  2        excl
vm-190201-disk-0                32 GiB                                                  2        excl
vm-190250-disk-0                32 GiB                                                  2        excl
vm-190251-disk-0                32 GiB                                                  2        excl
vm-210254-disk-0                32 GiB                                                  2        excl
vm-210254-disk-0@PreCron        32 GiB                                                  2        
vm-24101-disk-0                 16 GiB  Ceph-NVMe-Pool/base-88888888-disk-0@__base__    2        excl
vm-24102-disk-0                 16 GiB  Ceph-NVMe-Pool/base-88888888-disk-0@__base__    2        excl
vm-24103-disk-0                 16 GiB  Ceph-NVMe-Pool/base-88888888-disk-0@__base__    2        excl
vm-390041-disk-0                48 GiB                                                  2        excl
rbd: listing images failed: (2) No such file or directory

This started happening after the most recent Ceph package update.
Did Ceph forget that my only pool is the default one?

Can anyone help me here?


Thanks for reading
This happens when you interrupt an rbd rm task.

Just reissue the rm command, like so:
rbd rm -p poolname vm-190040-disk-0
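Since the error line from rbd ls --long conveniently names the leftover image, the cleanup can also be scripted. A minimal sketch, using the pool and image names from this thread; it only prints the rbd rm command instead of executing it, so it is safe to run anywhere:

```shell
# The sample error line and pool name are taken from this thread.
pool="Ceph-NVMe-Pool"
err_line='rbd: error opening vm-190040-disk-0: (2) No such file or directory'

# Pull the image name out from between "error opening " and ": (2)...".
image=$(printf '%s\n' "$err_line" | sed -n 's/^rbd: error opening \(.*\): (2).*/\1/p')

# Print the removal command; drop the echo to actually run it on a cluster node.
echo "rbd rm -p $pool $image"
```

Dropping the echo reissues the interrupted removal exactly as in the reply above.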
This fixed it, thank you very much!

I honestly don't know how the interrupt could have happened. That is a VM I explicitly run on local-zfs, not Ceph; and why did it only show up right after the update?
 
I can confirm that I ran into the same problem after deleting a vm-xxxx-disk-x; the GUI showed that error message, and the same solution as above worked.

rbd ls --long -p Name-Pool gives the error; without --long it does not.

Solution: delete the image from the command line.
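That difference makes sense: plain rbd ls only reads the pool's directory of image names, while --long additionally opens every image to fetch its size and parent, so a single half-deleted image breaks the whole listing. When the error does not name the culprit, each image can be probed individually. A hedged sketch, shown as a dry run over a stand-in image list (replace the list with the output of rbd ls -p "$pool" and drop the echo to run it against a real cluster):

```shell
# Probe each image in the pool one at a time; `rbd info` fails only for
# a broken entry. Dry run: the image list is a stand-in and the probe
# command is printed rather than executed.
pool="Ceph-NVMe-Pool"
images="base-88888888-disk-0 vm-190040-disk-0"   # stand-in for: rbd ls -p "$pool"

for img in $images; do
    # On a real node: rbd info -p "$pool" "$img" >/dev/null || echo "broken: $img"
    echo "rbd info -p $pool $img"
done
```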
 
