various ceph-octopus issues

Dec 2, 2020
Hello Forum!

I run a 3-node hyper-converged, meshed 10GbE Ceph cluster as a test environment, currently updated to the latest version on 3 identical HP servers (PVE 6.3-3 and Ceph Octopus 15.2.8, no HA), with 3 x 16 SAS HDDs connected via HBA (3 x PVE OS + 45 OSDs), RBD only, Ceph dashboard activated. Backups are made via the PVE GUI to a shared NFS NAS (vzdump.lzo).

This cluster was originally set up in early 2019 and ran for some months with sparse load. In 2020 it was updated to 6.x and Nautilus. I set up some VMs, configured bridged networks, added a dedicated corosync network, added a 10GbE interface to the LAN bridge, migrated some VMs on- and offline and did some tests, which were quite satisfying, in order to learn managing the environment. Once I was forced to unlock and restore a stuck VM from backup, and after the Ceph update to Nautilus I successfully recreated all OSDs of one node, which ultimately resulted in a healthy Ceph cluster running 45 OSDs in one pool containing 1024 PGs.
Two weeks ago, after having upgraded to 6.3.3 and Ceph Octopus, things started to go wrong, which was, as far as I can recall, the beginning of a still ongoing failure/problem cascade.
Ceph autoscaling started to reduce the number of PGs from 1024 to 128, and that changed several times during the ongoing rebalancing. While heavy OSD activity was going on, health warnings for slow ops arose which flooded syslog with up to 30 gigabytes, so that the node's root directory almost ran out of space and a nightly backup task hung for hours before I stopped it. The related VM couldn't be started any more. Finally I was able to restore the VM, after unlocking it, from backup into a new VM ID, and then I tried to get rid of the original VM, which failed.
The restored VM got stuck again after 3 days when I tried, without success, to migrate it to a different node, while massive health warnings continued to flood syslog and heavy rebalancing activity occurred. This time, restoring the VM into a new ID got stuck with the following message (task output):
...
trying to acquire cfs lock 'storage-vm_store' ...
trying to acquire cfs lock 'storage-vm_store' ...
new volume ID is 'vm_store:vm-108-disk-0'

Several attempts to restore the VM all failed and finally I had to stop the pending tasks. Eventually I removed the disks via rbd. Then I tried to clean up by removing the unresponsive VMs via the GUI, which just didn't happen.

To make a long story short - all attempts to resolve the problems, and the mistakes I certainly made while trying to restore/clean up VMs, led to an unstable, hardly manageable Ceph cluster which I will have to rebuild from scratch. Since I don't want to repeat errors, I have a bunch of questions regarding some of the issues I faced:
But first, some basic information about the environment:

# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
pve-kernel-4.15: 5.4-12
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-1-pve: 5.3.18-1
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-9-pve: 4.15.18-30
ceph: 15.2.8-pve2
ceph-fuse: 15.2.8-pve2
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-4
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-4
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.6-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

Current state of the Ceph cluster (2 pools with 512 PGs each + one automatically created 'device_health_metrics' pool; osd.2, osd.74 and osd.77 were shut down due to slow ops; no VM running):
# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 12.26364 root default
-3 4.08829 host amcvh11
1 hdd 0.27280 osd.1 up 1.00000 1.00000
2 hdd 0.27280 osd.2 down 0 1.00000
3 hdd 0.27249 osd.3 up 1.00000 1.00000
4 hdd 0.27249 osd.4 up 1.00000 1.00000
5 hdd 0.27249 osd.5 up 1.00000 1.00000
6 hdd 0.27249 osd.6 up 1.00000 1.00000
7 hdd 0.27280 osd.7 up 1.00000 1.00000
8 hdd 0.27249 osd.8 up 1.00000 1.00000
9 hdd 0.27249 osd.9 up 1.00000 1.00000
10 hdd 0.27249 osd.10 up 1.00000 1.00000
11 hdd 0.27249 osd.11 up 1.00000 1.00000
12 hdd 0.27249 osd.12 up 1.00000 1.00000
14 hdd 0.27249 osd.14 up 1.00000 1.00000
45 hdd 0.27249 osd.45 up 1.00000 1.00000
47 hdd 0.27249 osd.47 up 1.00000 1.00000
-5 4.08768 host amcvh12
13 hdd 0.27280 osd.13 up 1.00000 1.00000
48 hdd 0.27249 osd.48 up 1.00000 1.00000
49 hdd 0.27249 osd.49 up 1.00000 1.00000
50 hdd 0.27249 osd.50 up 1.00000 1.00000
51 hdd 0.27249 osd.51 up 1.00000 1.00000
52 hdd 0.27249 osd.52 up 1.00000 1.00000
54 hdd 0.27249 osd.54 up 1.00000 1.00000
55 hdd 0.27249 osd.55 up 1.00000 1.00000
56 hdd 0.27249 osd.56 up 1.00000 1.00000
57 hdd 0.27249 osd.57 up 1.00000 1.00000
58 hdd 0.27249 osd.58 up 1.00000 1.00000
59 hdd 0.27249 osd.59 up 1.00000 1.00000
60 hdd 0.27249 osd.60 up 1.00000 1.00000
61 hdd 0.27249 osd.61 up 1.00000 1.00000
62 hdd 0.27249 osd.62 up 1.00000 1.00000
-7 4.08768 host amcvh13
0 hdd 0.27280 osd.0 up 1.00000 1.00000
63 hdd 0.27249 osd.63 up 1.00000 1.00000
64 hdd 0.27249 osd.64 up 1.00000 1.00000
65 hdd 0.27249 osd.65 up 1.00000 1.00000
66 hdd 0.27249 osd.66 up 1.00000 1.00000
68 hdd 0.27249 osd.68 up 1.00000 1.00000
69 hdd 0.27249 osd.69 up 1.00000 1.00000
70 hdd 0.27249 osd.70 up 1.00000 1.00000
71 hdd 0.27249 osd.71 up 1.00000 1.00000
72 hdd 0.27249 osd.72 up 1.00000 1.00000
73 hdd 0.27249 osd.73 up 1.00000 1.00000
74 hdd 0.27249 osd.74 down 0 1.00000
75 hdd 0.27249 osd.75 up 1.00000 1.00000
76 hdd 0.27249 osd.76 up 1.00000 1.00000
77 hdd 0.27249 osd.77 down 0 1.00000

# ceph -s
cluster:
id: ae713943-83f3-48b4-a0c2-124c092c250b
health: HEALTH_WARN
Reduced data availability: 31 pgs inactive, 14 pgs peering
Degraded data redundancy: 18368/1192034 objects degraded (1.541%), 23 pgs degraded, 23 pgs undersized
1 pools have too many placement groups
2 daemons have recently crashed
1225 slow ops, oldest one blocked for 59922 sec, daemons [osd.13,osd.2,osd.62,osd.74,osd.77] have slow ops.
services:
mon: 3 daemons, quorum amcvh11,amcvh12,amcvh13 (age 33h)
mgr: amcvh11(active, since 47h), standbys: amcvh12, amcvh13
osd: 45 osds: 42 up (since 15h), 42 in (since 14h); 27 remapped pgs
data:
pools: 3 pools, 993 pgs
objects: 397.36k objects, 1.5 TiB
usage: 1.1 TiB used, 10 TiB / 11 TiB avail
pgs: 3.122% pgs not active
18368/1192034 objects degraded (1.541%)
950 active+clean
11 active+undersized+degraded+remapped+backfill_wait
11 activating+undersized+degraded+remapped
10 peering
6 activating
4 remapped+peering
1 active+undersized+degraded+remapped+backfilling
progress:
Rebalancing after osd.2 marked in (16h)
[=======================.....] (remaining: 3h)
PG autoscaler decreasing pool 4 PGs from 512 to 128 (10h)
[==..........................] (remaining: 2w)

Here are my questions:

1. How to rebuild a ceph-cluster from scratch?

For rebuilding Ceph from scratch I found the following thread/procedure: https://forum.proxmox.com/threads/how-to-clean-up-a-bad-ceph-config-and-start-from-scratch.68949/
1.1 Is this the way to go or has somebody any additional suggestions?

2. It should be possible to restore vzdump backups to a rebuilt ceph storage - is that correct?

I have some vzdump backups on a shared NFS NAS. If I reconnect the NAS to the newly rebuilt Ceph cluster, will I be able to restore the VMs into a rebuilt RBD storage?
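For clarity, this is roughly the CLI restore I have in mind (the archive path and storage name 'vm_store' are just examples from my setup):

# qmrestore /mnt/pve/nas/dump/vzdump-qemu-101-2021_01_10-00_00_01.vma.lzo 101 --storage vm_store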

3. What exactly causes the Ceph health warnings, and why are the by far most frequent 'get_health_metrics reporting ...' messages repeated continuously for hours? How can this be stopped in a safe way?

My biggest concern is the flooding of syslog with the following messages, which starts out of the blue and which I cannot, as mentioned in a forum thread, stop by restarting the reported OSDs with slow ops or by restarting the monitor of the node. The only way to stop the health reporting was to shut down the related OSDs permanently.
This may be associated with the automatic appearance of a 'device_health_metrics' pool.

extract from syslog:

Jan 18 00:06:23 amcvh11 spiceproxy[4697]: restarting server
Jan 18 00:06:23 amcvh11 spiceproxy[4697]: starting 1 worker(s)
Jan 18 00:06:23 amcvh11 spiceproxy[4697]: worker 1370310 started
Jan 18 00:06:24 amcvh11 pveproxy[4648]: restarting server
Jan 18 00:06:24 amcvh11 pveproxy[4648]: starting 3 worker(s)
Jan 18 00:06:24 amcvh11 pveproxy[4648]: worker 1370311 started
Jan 18 00:06:24 amcvh11 pveproxy[4648]: worker 1370312 started
Jan 18 00:06:24 amcvh11 pveproxy[4648]: worker 1370313 started
# the following message is continuously repeated
Jan 18 00:06:24 amcvh11 ceph-osd[409163]: 2021-01-18T00:06:24.466+0100 7f4d26cab700 -1 osd.46 65498 get_health_metrics reporting 530 slow ops, oldest is osd_op(client.127411400.0:57227 4.12d 4.5e53992d (undecoded) ondisk+retry+read+known_if_redirected e65446)
Jan 18 00:06:25 amcvh11 ceph-osd[409163]: 2021-01-18T00:06:25.422+0100 7f4d26cab700 -1 osd.46 65498 get_health_metrics reporting 530 slow ops, oldest is osd_op(client.127411400.0:57227 4.12d 4.5e53992d (undecoded) ondisk+retry+read+known_if_redirected e65446)
# for almost 11 hours
Jan 18 10:55:36 amcvh11 ceph-osd[409163]: 2021-01-18T10:55:36.423+0100 7f4d26cab700 -1 osd.46 65498 get_health_metrics reporting 2510 slow ops, oldest is osd_op(client.127411400.0:57227 4.12d 4.5e53992d (undecoded) ondisk+retry+read+known_if_redirected e65446)

3.1 What does this message say in detail, apart from my understanding that the disk does not respond in time?
3.2 Why was the pool 'device_health_metrics' created and what is its function (see the output of ceph -s above) ?
3.3 Why is ceph executing 'Rebalancing after osd.2 marked in' although I shut it down/out in order to replace the disk (see end of 'ceph -s' output above)?
3.4 Debugging 'slow ops' of osds seems to be a complex issue - I found lots of partly confusing information. Does anyone know a concise debugging checklist or related information which possibly includes the entire environment?

4. Can anybody elaborate on how to debug and manage a non-responsive VM running on Ceph RBD?
Any pointer is welcome!

5. Is there a safe procedure to completely remove a VM (stuck or not) and restore it from backup afterwards?
Again, any advice or pointer is highly appreciated

Sorry for the lengthy story and the bunch of questions - any help or advice is highly appreciated!
If you need any further information or data, please let me know.
 
1. How to rebuild a ceph-cluster from scratch?
You may not need to. Let's see what's wrong first. ;)

3. What exactly causes the Ceph health warnings, and why are the by far most frequent 'get_health_metrics reporting ...' messages repeated continuously for hours? How can this be stopped in a safe way?
That's the question. :) Best disable the PG autoscaler first and set the PG count on the pool back to its original value.
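For example, assuming the pool in question is vm_store (adjust names and values to your setup):

# ceph osd pool set vm_store pg_autoscale_mode off
# ceph osd pool set vm_store pg_num 1024
# ceph osd pool autoscale-status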

3.1 What does this message say in detail, apart from my understanding that the disk does not respond in time?
Yes, waiting on operations from other OSDs.

3.2 Why was the pool 'device_health_metrics' created and what is its function (see the output of ceph -s above) ?
It collects SMART values from the OSD disks.
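If you want to look at what it gathers, something like this should work:

# ceph device ls
# ceph device get-health-metrics <devid>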

3.3 Why is ceph executing 'Rebalancing after osd.2 marked in' although I shut it down/out in order to replace the disk (see end of 'ceph -s' output above)?
I can't say why it didn't go out, but once an OSD is down & out (10 min grace period), recovery/rebalance sets in.
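For a planned disk replacement you can keep the cluster from rebalancing while the OSD is down by setting the noout flag before stopping the OSD and unsetting it once the replacement is back in:

# ceph osd set noout
# ceph osd unset noout

(The 10 min grace period corresponds to the mon_osd_down_out_interval option.)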

3.4 Debugging 'slow ops' of osds seems to be a complex issue - I found lots of partly confusing information. Does anyone know a concise debugging checklist or related information which possibly includes the entire environment?
Not really. Each OSD's log has information on what produces the slow ops.
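A possible starting point, run on the node that hosts the affected OSD (osd.46 as in your syslog extract):

# ceph daemon osd.46 dump_ops_in_flight
# ceph daemon osd.46 dump_historic_ops
# less /var/log/ceph/ceph-osd.46.log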

4. Can anybody elaborate on how to debug and manage a non-responsive VM running on Ceph RBD?
Any pointer is welcome!
It's mostly because the VM's storage isn't responding.

5. Is there a safe procedure to completely remove a VM (stuck or not) and restore it from backup afterwards?
Again, any advice or pointer is highly appreciated
Kill the VM and restore from backup. :)
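Roughly, as a sketch only (VM 108 from your task output; double-check IDs before destroying anything):

# qm unlock 108
# qm stop 108 --skiplock
# qm destroy 108

If a leftover disk image still blocks the restore, it can be removed with 'rbd rm vm_store/vm-108-disk-0' before restoring the vzdump backup again.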

On which interface(s) is corosync running? The same as Ceph's?
 
Hi Alwin - thanks for your quick answers!
Regarding your answer to 5: that's exactly what I tried to do, but it failed. So the question is how to clean up the residues (rbd error: file not found, etc.) in order to be able to repeat a successful restore?

All three VMs are zombies - I'm not able to restore any of them.

To your question: no. Corosync runs on dedicated 1GbE interfaces in each node with a dedicated switch and network; Ceph is meshed (3x2 10GbE) on a dedicated (confusingly named "public") network; the 2 VM LANs are dedicated bridged networks with different interfaces, gateways and lines; no VLANs.

OK, I'll follow your suggestion to switch off the autoscaler (although this raises the question why it is the default config of the Octopus update anyway) and will give you feedback...
If you need more specific information, there are a lot of logs and data on the various problems - feel free to ask. Again, I appreciate your input very much!
 
OK, I'll follow your suggestion to switch off the autoscaler (although this raises the question why it is the default config of the Octopus update anyway) and will give you feedback...
It's on for new pools, not existing ones. For the others it's set to warn.

To your question: no. Corosync runs on dedicated 1GbE interfaces in each node with a dedicated switch and network; Ceph is meshed (3x2 10GbE) on a dedicated (confusingly named "public") network; the 2 VM LANs are dedicated bridged networks with different interfaces, gateways and lines; no VLANs.
That sounds like the reference setup. :)

Regarding your answer to 5: that's exactly what I tried to do, but it failed. So the question is how to clean up the residues (rbd error: file not found, etc.) in order to be able to repeat a successful restore?
Well, because Ceph is not responding in time. Try to set norebalance and norecovery temporarily. Then restart the OSDs one at a time.
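For example (note that the actual flag name is norecover):

# ceph osd set norebalance
# ceph osd set norecover
# systemctl restart ceph-osd@<id>

Restart one OSD at a time and wait until it is back up before the next; afterwards unset both flags again with 'ceph osd unset norebalance' and 'ceph osd unset norecover'.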
 
Changed the pool values:

POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE
vm_store 335.4G 3.0 11719G 0.0859 1.0 1024 128 warn
device_health_metrics 4423k 2.0 11719G 0.0000 1.0 1 warn
vmx_store 55 3.0 11719G 0.0000 1.0 512 32 warn
 
you wrote: It's on for new pools, not existing ones. For the others it's set to warn.
Nope - vm_store was my one and only pool before updating to Octopus and it was originally set to 1024 PGs, which was then changed by the autoscaler.
Somebody misbehaved?!
 
you wrote: It's on for new pools, not existing ones. For the others it's set to warn.
Nope - vm_store was my one and only pool before updating to Octopus and it was originally set to 1024 PGs, which was then changed by the autoscaler.
Somebody misbehaved?!
From Ceph's release notes:
The PG autoscaler feature introduced in Nautilus is enabled for new pools by default, allowing new clusters to autotune pg num without any user intervention.
https://docs.ceph.com/en/latest/releases/octopus/#v15-2-0-octopus
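If you prefer the old behaviour for future pools, the default can be changed cluster-wide; off the top of my head something like:

# ceph config set global osd_pool_default_pg_autoscale_mode warn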
 
you wrote: Well, because Ceph is not responding in time. Try to set norebalance and norecovery temporarily. Then restart the OSDs one at a time.
I've done that; 44 of 45 OSDs came back. One (osd.7) was kicked out after several attempts to start it - I will replace it tomorrow.
Ceph is now heavily busy - I'll wait for the outcome and report tomorrow. Thanks for hanging on!
OK - since I don't believe in real magic, it must be me to blame, and we consider this to be solved - but I'll keep an eye on it ...
 
Hello Alwin

Following your recommendations initiated a successful Ceph self-healing and led to a healthy Ceph cluster again - great!

# ceph -s
cluster:
id: ae713943-83f3-48b4-a0c2-124c092c250b
health: HEALTH_WARN
2 pools have too many placement groups

services:
mon: 3 daemons, quorum amcvh11,amcvh12,amcvh13 (age 3h)
mgr: amcvh11(active, since 3h), standbys: amcvh12, amcvh13
osd: 44 osds: 44 up (since 20h), 44 in (since 3h)

task status:

data:
pools: 3 pools, 1537 pgs
objects: 397.39k objects, 1.5 TiB
usage: 1.1 TiB used, 11 TiB / 12 TiB avail
pgs: 1537 active+clean

Meanwhile I did some debugging, because osd.7 continually flooded syslog with messages again. This obviously happens when an OSD's state changes to down & out, which seems to cause the Ceph mgr to start this massive logging. Removing and purging the OSD stopped it.
This is possibly a bug, because I faced that behavior with multiple OSDs and different messages during the last days without recognizing that it is related to an OSD state change to down & out (see attached file).
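For reference, the removal/purge can be done e.g. like this (a sketch only; the --cleanup flag also wipes the disk):

# ceph osd out 7
# systemctl stop ceph-osd@7
# pveceph osd destroy 7 --cleanup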

Sanitizing my still existing RBD problem(s) will have to be done next:
Storages are reporting the following error:
rbd error: rbd: listing images failed: (2) No such file or directory (500)

because of:

# rbd ls -l vm_store
2021-01-27T14:18:28.914+0100 7f50acff9700 -1 librbd::io::AioCompletion: 0x56187aaec4e0 fail: (2) No such file or directory
2021-01-27T14:18:28.914+0100 7f50acff9700 -1 librbd::io::AioCompletion: 0x56187ad35b20 fail: (2) No such file or directory
2021-01-27T14:18:28.922+0100 7f50acff9700 -1 librbd::io::AioCompletion: 0x56187ad7dd20 fail: (2) No such file or directory
2021-01-27T14:18:28.922+0100 7f50acff9700 -1 librbd::io::AioCompletion: 0x56187ad5dad0 fail: (2) No such file or directory
rbd: error opening vm-101-disk-0: (2) No such file or directory
rbd: error opening vm-102-disk-0: (2) No such file or directory
rbd: error opening vm-104-disk-0: (2) No such file or directory
rbd: error opening vm-110-disk-0: (2) No such file or directory
2021-01-27T14:18:28.926+0100 7f50acff9700 -1 librbd::io::AioCompletion: 0x56187ad31480 fail: (2) No such file or directory
rbd: error opening vm-100-disk-0: (2) No such file or directory
2021-01-27T14:18:28.926+0100 7f50acff9700 -1 librbd::io::AioCompletion: 0x56187ad79500 fail: (2) No such file or directory
rbd: error opening vm-105-disk-0: (2) No such file or directory
NAME SIZE PARENT FMT PROT LOCK
vm-106-disk-0 512 GiB 2 excl
vm-107-disk-0 512 GiB 2 excl
vm-108-disk-0 512 GiB 2 excl
rbd: listing images failed: (2) No such file or directory

Any suggestion on how to resolve this problem(s)?
 

Attachments

  • DEBUG CEPH ERR - rotating keys expired way to early 2021-01-27.txt
    3.8 KB
This is possibly a bug, because I faced that behavior with multiple OSDs and different messages during the last days without recognizing that it is related to an OSD state change to down & out (see attached file).
If you refer to the _check_auth_rotating possible clock skew, could it be like the message says? Do all nodes have the exact same time?
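A quick way to verify is to compare the clocks on all three nodes, e.g. run on each node:

# timedatectl
# date -R

'timedatectl' also shows whether NTP synchronisation is active.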

2021-01-27T14:18:28.914+0100 7f50acff9700 -1 librbd::io::AioCompletion: 0x56187aaec4e0 fail: (2) No such file or directory
Are these message observed on other nodes as well? Are all the VM/CT running?
 
If you refer to the _check_auth_rotating possible clock skew, could it be like the message says? Do all nodes have the exact same time?


Are these message observed on other nodes as well? Are all the VM/CT running?
you asked: Are these messages observed on other nodes as well? These messages were only connected to one OSD (please check the attachment) and logged in the syslog of the hosting node only.
you asked: Are all the VM/CT running? Nothing was active - as mentioned, I tried to get rid of all VMs (no containers) to start from scratch. There were some possibly defective images left over, and I wasn't able to delete them and purge the pool (this is done now). Maybe you should ignore these error messages - I don't see any connection with the logging problem.
 
you asked: Are these messages observed on other nodes as well? These messages were only connected to one OSD (please check the attachment) and logged in the syslog of the hosting node only.
I think you misunderstood me, I meant the rbd ls messages. If you execute the command on other nodes, does it behave the same?

you asked: Are all the VM/CT running? Nothing was active - as mentioned, I tried to get rid of all VMs (no containers) to start from scratch. There were some possibly defective images left over, and I wasn't able to delete them and purge the pool (this is done now). Maybe you should ignore these error messages - I don't see any connection with the logging problem.
Ok, I will do. :)
 
If you think that the logging problem is relevant
Hi Alwin

I do, because it caused a lot of trouble (in total about 100 GByte of syslog entries on all 3 nodes, and a little less in ceph.log, daemon.log and some OSD logs within a few days), and the sources were OSDs flagged down & out (see debug attachment), which - correct me if I'm wrong - should be silent. Isn't this relevant? Sure, I could only debug the symptoms and have no deeper knowledge about the interplay between OSD - mgr - Ceph - Proxmox - Debian and logging in detail (yet ;)) - I only see the players and their behavior - and yes, I wasn't even aware that I misunderstood you :oops:.
Well, finally I suspect what you might be looking for, and I checked my documentation (meanwhile I was able to clean up this pool and deleted it - it had too many PGs anyway) and found what you were asking for. Two outputs of 'rbd ls -l' commands from two different nodes at different dates:
# 1
root@amcvh13:~# rbd ls -l vm_store
2021-01-22T18:11:59.958+0100 7faa82ffd700 -1 librbd::io::AioCompletion: 0x560911486f60 fail: (2) No such file or directory
2021-01-22T18:12:04.790+0100 7faa82ffd700 -1 librbd::io::AioCompletion: 0x56091143f240 fail: (2) No such file or directory
rbd: error opening vm-101-disk-0: (2) No such file or directory
rbd: error opening vm-100-disk-0: (2) No such file or directory
NAME SIZE PARENT FMT PROT LOCK
vm-102-disk-0 128 GiB 2 excl
vm-104-disk-0 512 GiB 2 excl
vm-105-disk-0 512 GiB 2 excl
vm-106-disk-0 512 GiB 2 excl
vm-110-disk-0 64 GiB 2 excl
rbd: listing images failed: (2) No such file or directory
# 2
root@amcvh11:~# rbd ls -l vm_store
2021-01-27T14:18:28.914+0100 7f50acff9700 -1 librbd::io::AioCompletion: 0x56187aaec4e0 fail: (2) No such file or directory
2021-01-27T14:18:28.914+0100 7f50acff9700 -1 librbd::io::AioCompletion: 0x56187ad35b20 fail: (2) No such file or directory
2021-01-27T14:18:28.922+0100 7f50acff9700 -1 librbd::io::AioCompletion: 0x56187ad7dd20 fail: (2) No such file or directory
2021-01-27T14:18:28.922+0100 7f50acff9700 -1 librbd::io::AioCompletion: 0x56187ad5dad0 fail: (2) No such file or directory
rbd: error opening vm-101-disk-0: (2) No such file or directory
rbd: error opening vm-102-disk-0: (2) No such file or directory
rbd: error opening vm-104-disk-0: (2) No such file or directory
rbd: error opening vm-110-disk-0: (2) No such file or directory
2021-01-27T14:18:28.926+0100 7f50acff9700 -1 librbd::io::AioCompletion: 0x56187ad31480 fail: (2) No such file or directory
rbd: error opening vm-100-disk-0: (2) No such file or directory
2021-01-27T14:18:28.926+0100 7f50acff9700 -1 librbd::io::AioCompletion: 0x56187ad79500 fail: (2) No such file or directory
rbd: error opening vm-105-disk-0: (2) No such file or directory
NAME SIZE PARENT FMT PROT LOCK
vm-106-disk-0 512 GiB 2 excl
vm-107-disk-0 512 GiB 2 excl
vm-108-disk-0 512 GiB 2 excl
rbd: listing images failed: (2) No such file or directory

Most of the images were created during restore operations which failed. Meanwhile they are all deleted and the pool, as mentioned, has been removed.
Again, you set me on track to figure it out - and you were right: correct the errors (autoscaling), clean up, and Ceph will do its job.
Now I can proceed with a healthy, functional Ceph storage and add the backup server as planned - mostly thanks to your great support!
 
