rbd: error opening pool 'cephpool': (2) No such file or directory

dejhost

Hi.
I have mounted a CephFS disk in Proxmox 7.3 using sshfs. The CephFS disk is physically located on another Proxmox server in the same LAN, but not in the same cluster.
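For reference, a mount along these lines is presumably what was used; the remote host name and local mount path below are placeholders, not details from this thread:

Code:
# hypothetical example of mounting the remote CephFS directory over SSH
apt install sshfs
mkdir -p /mnt/remote-cephfs
sshfs root@other-pve-host:/mnt/pve/CephFS /mnt/remote-cephfs -o reconnect
# unmount again once the backups are done
fusermount -u /mnt/remote-cephfs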

I backed up some VMs and containers to this drive in order to restore them on the destination server for migration purposes.

One of the backups failed due to too little disk space, and the backup process didn't seem to exit cleanly. So on the destination host, I removed all backups (the incomplete one plus the ones I had already successfully migrated) using the rm command.

However, the storage "Raid1" on the destination server that hosts the freshly migrated VMs and containers reports:
"Usage: 25.81% (2.45 TB of 9.49 TB)". This is strange, since the VMs have a combined 3.5 TB of VM disks (see the second code snippet below). I have not (yet) noticed any missing data. Instead of the VM disks, the GUI shows "rbd error: rbd: listing images failed: (2) No such file or directory (500)".

Code:
root@s301:~# rbd ls -l cephpool
rbd: error opening pool 'cephpool': (2) No such file or directory
rbd: listing images failed: (2) No such file or directory

Here is some info about the two VMs with VM disks on the storage giving the rbd error:
Code:
root@s301:~# grep disk /etc/pve/qemu-server/104.conf
scsi0: local-lvm:vm-104-disk-0,size=32G
unused0: Raid1:vm-104-disk-0
virtio1: Raid1:vm-104-disk-1,size=2500G
root@s301:~# grep disk /etc/pve/qemu-server/105.conf
unused0: Raid1:vm-105-disk-0
virtio0: local-lvm:vm-105-disk-0,size=80G
virtio1: Raid1:vm-105-disk-1,size=1000G
root@s301:~#

Could you please advise on how to proceed? How can I investigate and fix this without risking losing data?
 
Hi,
please share your /etc/pve/storage.cfg. It seems that cephpool is not accessible/present on this host. Did you mean to access the sshfs mount point instead? Depending on what kind of storage Raid1 is, the actual usage can be lower than the virtual size of VM images.
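As a rough sketch (not part of the original reply), the configured RBD pool can be cross-checked against the pools that actually exist on the host; the commands assume a local Ceph installation, which the later posts confirm:

Code:
# show which pool each RBD storage entry references
grep -A4 '^rbd:' /etc/pve/storage.cfg
# list the pools Ceph actually has on this host
ceph osd pool ls
# overall storage status as Proxmox sees it
pvesm status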
 
Code:
root@s301:/etc/pve# cat storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

rbd: Raid1
        content images,rootdir
        krbd 0
        pool Raid1

cephfs: CephFS
        path /mnt/pve/CephFS
        content snippets,iso,vztmpl,backup
        fs-name cephfs
        prune-backups keep-all=1

root@s301:/etc/pve#
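Worth noting: the error in the first post names 'cephpool', while the only RBD storage configured here uses the pool Raid1. Just as a sketch (not something done in the thread), any leftover reference to that pool name could be searched for like this:

Code:
# look for stray references to 'cephpool' in the cluster configuration
grep -r cephpool /etc/pve/ 2>/dev/null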

(attached screenshot: temp.jpg)
 
RBD storages are thinly provisioned, so the actual usage can be significantly less than what is reserved for VM images. What does rbd ls -l Raid1 show? So this host is part of a Ceph cluster after all?
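As an aside (not from the original reply), per-image provisioned size versus actually used space can be compared with rbd du, using the pool name Raid1 from this thread:

Code:
# compare provisioned size with space actually used, per image and in total
rbd du -p Raid1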
 
Code:
root@s301:~# rbd ls -l Raid1
rbd: error opening vm-105-disk-0: (2) No such file or directory
NAME            SIZE      PARENT  FMT  PROT  LOCK
vm-104-disk-0     32 GiB            2           
vm-104-disk-1    2.4 TiB            2        excl
vm-105-disk-1   1000 GiB            2        excl
vm-1102-disk-0    80 GiB            2           
vm-121-disk-0     50 GiB            2        excl
rbd: listing images failed: (2) No such file or directory
root@s301:~#

No, the host is not part of a cluster.
 
What does rbd info vm-105-disk-0 -p Raid1 show? If it's broken, you could try removing it with rbd rm vm-105-disk-0 -p Raid1 to get rid of the error.
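A slightly more cautious sequence than going straight to rbd rm might look like this sketch (image and pool names as in this thread); if the image header is already gone, the checks may fail with the same error, in which case removal is the remaining option:

Code:
# check whether any client still has the image open
rbd status vm-105-disk-0 -p Raid1
# list any locks held on the image
rbd lock ls vm-105-disk-0 -p Raid1
# only then remove the broken image
rbd rm vm-105-disk-0 -p Raid1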
 
Code:
root@s301:~# rbd info vm-105-disk-0 -p Raid1
rbd: error opening image vm-105-disk-0: (2) No such file or directory
root@s301:~# rbd rm vm-105-disk-0 -p Raid1
Removing image: 100% complete...done.
root@s301:~# rbd ls -l cephpool
rbd: error opening pool 'cephpool': (2) No such file or directory
rbd: listing images failed: (2) No such file or directory
root@s301:~# rbd ls -l Raid1
NAME            SIZE      PARENT  FMT  PROT  LOCK
vm-104-disk-0     32 GiB            2            
vm-104-disk-1    2.4 TiB            2        excl
vm-105-disk-1   1000 GiB            2        excl
vm-1102-disk-0    80 GiB            2            
vm-121-disk-0     50 GiB            2        excl
root@s301:~#

Even though rbd ls -l cephpool still returns an error, the GUI shows the disks again:
(attached screenshot: temp2.jpg)

So this looks much better than before. Should I keep it as it is?
 
Even though rbd ls -l cephpool still returns an error, the GUI shows the disks again:
Does that pool exist? You can check with ceph osd pool ls.

So this looks much better than before. Should I keep it as it is?
If your VMs work as expected, I'd say yes. But it's not clear how the vm-105-disk-0 got broken. Was there an error during the restore of that VM?
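One way to look for a failed restore (not mentioned in the thread): Proxmox keeps task logs under /var/log/pve/tasks on a default install, so the restore task for that VM could be searched for roughly like this; the exact grep patterns are only assumptions:

Code:
# search the task history index for restore jobs
grep -i qmrestore /var/log/pve/tasks/index*
# or search the journal of the daemon that runs GUI-triggered tasks
journalctl -u pvedaemon | grep -i restore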
 
Does that pool exist? You can check with ceph osd pool ls.


If your VMs work as expected, I'd say yes. But it's not clear how the vm-105-disk-0 got broken. Was there an error during the restore of that VM?

Code:
root@s301:~# ceph osd pool ls
Raid1
.mgr
cephfs_data
cephfs_metadata
root@s301:~#

vm-105-disk-0 was not part of the migration process. I added it later and had trouble mounting it right from the start.
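Not asked in the thread, but 105.conf still lists the now-deleted image as unused0 (see the grep output in the first post). Once the VM is confirmed to work without it, a cleanup along these lines would remove the stale reference (just a sketch):

Code:
# drop the stale unused0 entry that points at the deleted vm-105-disk-0
qm set 105 --delete unused0
# re-scan storages so the VM config matches what actually exists
qm rescan --vmid 105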
 
