Ceph complete outage

DrillSgtErnst

Hi,
today one of my clusters randomly shut down completely.

Within a second, 5 OSDs crashed:
[screenshot attachment: 1655821845053.png]
Not all OSDs show up in lsblk anymore; two of the OSD disks appear as completely empty, unused disks.
The network is fine.
All disks are the same model and 9 months old; I doubt that 5 drives failed at the same time.
A failure of 5 drives cannot be rebalanced, so the rebalance stopped as well.

I cannot get the OSDs active again, not even with a reboot.
At the moment I am recovering all machines from backup, but this is really annoying to say the least. Ceph should be robust enough. I can't see why the disks are missing.


root@pve2:~# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.35-2-pve)
pve-manager: 7.2-4 (running version: 7.2-4/ca9d43cc)
pve-kernel-5.15: 7.2-4
pve-kernel-helper: 7.2-4
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.15.35-2-pve: 5.15.35-5
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-3-pve: 5.13.19-7
pve-kernel-5.13.19-1-pve: 5.13.19-3
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-4-pve: 5.11.22-9
ceph: 16.2.7
ceph-fuse: 16.2.7
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-4
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.3-1
proxmox-backup-file-restore: 2.2.3-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-10
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1
root@pve2:~#

What has happened here, and how can I recover my Ceph? Not necessarily the data, but a working system again.


I tried ceph-volume simple scan and ceph-volume simple scan /dev/nvme0n1 (the latter failed with the error: "Argument is not a directory or device which is required to scan").
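For context: ceph-volume simple scan only works on old ceph-disk style (partition-based) OSDs or an already-mounted OSD directory; OSDs created by pveceph/ceph-volume are LVM-based, so simple scan finds nothing on the raw device. A minimal sketch of what should list LVM-based OSDs instead (assuming that is how these OSDs were created):

Code:
#ceph-volume lvm list
#ceph-volume lvm list /dev/nvme0n1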

2022-06-21T13:59:19.189009+0200 mon.pve1 (mon.0) 3233822 : cluster [DBG] osdmap e4685: 10 total, 5 up, 7 in
2022-06-21T13:59:19.189733+0200 mon.pve1 (mon.0) 3233823 : cluster [DBG] mgrmap e88: pve3(active, since 6M), standbys: pve5, pve1
2022-06-21T13:59:19.189943+0200 mon.pve1 (mon.0) 3233824 : cluster [WRN] Health check failed: 2/5 mons down, quorum pve1,pve2,pve5 (MON_DOWN)
2022-06-21T13:59:19.190926+0200 mon.pve1 (mon.0) 3233825 : cluster [DBG] osd.5 reported immediately failed by osd.3
2022-06-21T13:59:19.190981+0200 mon.pve1 (mon.0) 3233826 : cluster [INF] osd.5 failed (root=default,host=pve3) (connection refused reported by osd.3)
2022-06-21T13:59:19.190999+0200 mon.pve1 (mon.0) 3233827 : cluster [DBG] osd.5 reported immediately failed by osd.3
2022-06-21T13:59:19.191016+0200 mon.pve1 (mon.0) 3233828 : cluster [DBG] osd.5 reported immediately failed by osd.3
2022-06-21T13:59:19.191031+0200 mon.pve1 (mon.0) 3233829 : cluster [DBG] osd.5 reported immediately failed by osd.3
2022-06-21T13:59:19.191050+0200 mon.pve1 (mon.0) 3233830 : cluster [DBG] osd.5 reported immediately failed by osd.3
2022-06-21T13:59:19.191071+0200 mon.pve1 (mon.0) 3233831 : cluster [DBG] osd.5 reported immediately failed by osd.3
2022-06-21T13:59:19.191104+0200 mon.pve1 (mon.0) 3233832 : cluster [DBG] osd.5 reported immediately failed by osd.3
2022-06-21T13:59:19.191119+0200 mon.pve1 (mon.0) 3233833 : cluster [DBG] osd.5 reported immediately failed by osd.3
2022-06-21T13:59:19.191134+0200 mon.pve1 (mon.0) 3233834 : cluster [DBG] osd.5 reported immediately failed by osd.3
2022-06-21T13:59:19.191157+0200 mon.pve1 (mon.0) 3233835 : cluster [DBG] osd.6 reported immediately failed by osd.3
2022-06-21T13:59:19.191167+0200 mon.pve1 (mon.0) 3233836 : cluster [INF] osd.6 failed (root=default,host=pve4) (connection refused reported by osd.3)
2022-06-21T13:59:19.191181+0200 mon.pve1 (mon.0) 3233837 : cluster [DBG] osd.6 reported immediately failed by osd.3
2022-06-21T13:59:19.191194+0200 mon.pve1 (mon.0) 3233838 : cluster [DBG] osd.6 reported immediately failed by osd.3
2022-06-21T13:59:19.191214+0200 mon.pve1 (mon.0) 3233839 : cluster [DBG] osd.6 reported immediately failed by osd.3
2022-06-21T13:59:19.191228+0200 mon.pve1 (mon.0) 3233840 : cluster [DBG] osd.6 reported immediately failed by osd.3
2022-06-21T13:59:19.191242+0200 mon.pve1 (mon.0) 3233841 : cluster [DBG] osd.6 reported immediately failed by osd.3
2022-06-21T13:59:19.191256+0200 mon.pve1 (mon.0) 3233842 : cluster [DBG] osd.6 reported immediately failed by osd.3
2022-06-21T13:59:19.191269+0200 mon.pve1 (mon.0) 3233843 : cluster [DBG] osd.6 reported immediately failed by osd.3
2022-06-21T13:59:19.191283+0200 mon.pve1 (mon.0) 3233844 : cluster [DBG] osd.6 reported immediately failed by osd.3
2022-06-21T13:59:19.191303+0200 mon.pve1 (mon.0) 3233845 : cluster [DBG] osd.5 reported failed by osd.3
2022-06-21T13:59:19.193156+0200 mon.pve1 (mon.0) 3233846 : cluster [DBG] osd.5 reported immediately failed by osd.2
2022-06-21T13:59:19.193208+0200 mon.pve1 (mon.0) 3233847 : cluster [DBG] osd.5 reported immediately failed by osd.2
2022-06-21T13:59:19.193259+0200 mon.pve1 (mon.0) 3233848 : cluster [DBG] osd.5 reported immediately failed by osd.2
2022-06-21T13:59:19.193285+0200 mon.pve1 (mon.0) 3233849 : cluster [DBG] osd.5 reported immediately failed by osd.8
2022-06-21T13:59:19.193328+0200 mon.pve1 (mon.0) 3233850 : cluster [DBG] osd.5 reported immediately failed by osd.8
2022-06-21T13:59:19.193359+0200 mon.pve1 (mon.0) 3233851 : cluster [DBG] osd.5 reported immediately failed by osd.8
2022-06-21T13:59:19.193401+0200 mon.pve1 (mon.0) 3233852 : cluster [DBG] osd.5 reported immediately failed by osd.2
2022-06-21T13:59:19.193425+0200 mon.pve1 (mon.0) 3233853 : cluster [DBG] osd.5 reported immediately failed by osd.2
2022-06-21T13:59:19.193466+0200 mon.pve1 (mon.0) 3233854 : cluster [DBG] osd.5 reported immediately failed by osd.2
2022-06-21T13:59:19.193493+0200 mon.pve1 (mon.0) 3233855 : cluster [DBG] osd.5 reported immediately failed by osd.8
2022-06-21T13:59:19.193539+0200 mon.pve1 (mon.0) 3233856 : cluster [DBG] osd.5 reported immediately failed by osd.8
2022-06-21T13:59:19.193579+0200 mon.pve1 (mon.0) 3233857 : cluster [DBG] osd.5 reported immediately failed by osd.8
2022-06-21T13:59:19.193634+0200 mon.pve1 (mon.0) 3233858 : cluster [DBG] osd.5 reported immediately failed by osd.2
2022-06-21T13:59:19.193663+0200 mon.pve1 (mon.0) 3233859 : cluster [DBG] osd.5 reported immediately failed by osd.2
2022-06-21T13:59:19.193708+0200 mon.pve1 (mon.0) 3233860 : cluster [DBG] osd.5 reported immediately failed by osd.2
2022-06-21T13:59:19.193736+0200 mon.pve1 (mon.0) 3233861 : cluster [DBG] osd.5 reported immediately failed by osd.8
2022-06-21T13:59:19.193776+0200 mon.pve1 (mon.0) 3233862 : cluster [DBG] osd.5 reported immediately failed by osd.8
2022-06-21T13:59:19.193840+0200 mon.pve1 (mon.0) 3233863 : cluster [DBG] osd.5 reported immediately failed by osd.8
2022-06-21T13:59:19.193918+0200 mon.pve1 (mon.0) 3233864 : cluster [DBG] osd.6 reported immediately failed by osd.2
2022-06-21T13:59:19.193959+0200 mon.pve1 (mon.0) 3233865 : cluster [DBG] osd.6 reported immediately failed by osd.2
2022-06-21T13:59:19.193990+0200 mon.pve1 (mon.0) 3233866 : cluster [DBG] osd.6 reported immediately failed by osd.2
2022-06-21T13:59:19.194045+0200 mon.pve1 (mon.0) 3233867 : cluster [DBG] osd.6 reported immediately failed by osd.8
2022-06-21T13:59:19.194100+0200 mon.pve1 (mon.0) 3233868 : cluster [DBG] osd.6 reported immediately failed by osd.8
2022-06-21T13:59:19.194159+0200 mon.pve1 (mon.0) 3233869 : cluster [DBG] osd.6 reported immediately failed by osd.8
2022-06-21T13:59:19.194189+0200 mon.pve1 (mon.0) 3233870 : cluster [DBG] osd.6 reported immediately failed by osd.2
2022-06-21T13:59:19.194238+0200 mon.pve1 (mon.0) 3233871 : cluster [DBG] osd.6 reported immediately failed by osd.2
2022-06-21T13:59:19.194266+0200 mon.pve1 (mon.0) 3233872 : cluster [DBG] osd.6 reported immediately failed by osd.2
2022-06-21T13:59:19.194300+0200 mon.pve1 (mon.0) 3233873 : cluster [DBG] osd.6 reported immediately failed by osd.8
2022-06-21T13:59:19.194320+0200 mon.pve1 (mon.0) 3233874 : cluster [DBG] osd.6 reported immediately failed by osd.8
2022-06-21T13:59:19.194369+0200 mon.pve1 (mon.0) 3233875 : cluster [DBG] osd.6 reported immediately failed by osd.8
2022-06-21T13:59:19.194450+0200 mon.pve1 (mon.0) 3233876 : cluster [DBG] osd.6 reported immediately failed by osd.2
2022-06-21T13:59:19.194480+0200 mon.pve1 (mon.0) 3233877 : cluster [DBG] osd.6 reported immediately failed by osd.2
2022-06-21T13:59:19.194532+0200 mon.pve1 (mon.0) 3233878 : cluster [DBG] osd.6 reported immediately failed by osd.2
2022-06-21T13:59:19.194567+0200 mon.pve1 (mon.0) 3233879 : cluster [DBG] osd.6 reported immediately failed by osd.8
2022-06-21T13:59:19.194624+0200 mon.pve1 (mon.0) 3233880 : cluster [DBG] osd.6 reported immediately failed by osd.8
2022-06-21T13:59:19.194670+0200 mon.pve1 (mon.0) 3233881 : cluster [DBG] osd.6 reported immediately failed by osd.8
2022-06-21T13:59:19.194795+0200 mon.pve1 (mon.0) 3233882 : cluster [DBG] osd.5 reported failed by osd.2
2022-06-21T13:59:19.197543+0200 mon.pve1 (mon.0) 3233883 : cluster [WRN] Health detail: HEALTH_WARN 2/5 mons down, quorum pve1,pve2,pve5; 2 osds down; Reduced data availability: 41 pgs inactive; Degraded data redundancy: 616560/2297379 objects degraded (26.838%), 92 pgs degraded, 170 pgs undersized; 104 pgs not deep-scrubbed in time; 104 pgs not scrubbed in time; 1 daemons have recently crashed
2022-06-21T13:59:19.197556+0200 mon.pve1 (mon.0) 3233884 : cluster [WRN] [WRN] MON_DOWN: 2/5 mons down, quorum pve1,pve2,pve5
2022-06-21T13:59:19.197561+0200 mon.pve1 (mon.0) 3233885 : cluster [WRN] mon.pve3 (rank 2) addr [v2:10.20.15.5:3300/0,v1:10.20.15.5:6789/0] is down (out of quorum)
2022-06-21T13:59:19.197566+0200 mon.pve1 (mon.0) 3233886 : cluster [WRN] mon.pve4 (rank 3) addr [v2:10.20.15.6:3300/0,v1:10.20.15.6:6789/0] is down (out of quorum)
2022-06-21T13:59:19.197570+0200 mon.pve1 (mon.0) 3233887 : cluster [WRN] [WRN] OSD_DOWN: 2 osds down
2022-06-21T13:59:19.197575+0200 mon.pve1 (mon.0) 3233888 : cluster [WRN] osd.4 (root=default,host=pve3) is down
2022-06-21T13:59:19.197593+0200 mon.pve1 (mon.0) 3233889 : cluster [WRN] osd.7 (root=default,host=pve4) is down

Jun 21 16:45:02 pve2 ceph-osd[85907]: 2022-06-21T16:45:02.922+0200 7fbc5bf80f00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory
Jun 21 16:45:02 pve2 ceph-osd[85907]: 2022-06-21T16:45:02.922+0200 7fbc5bf80f00 -1 AuthRegistry(0x5560492cca40) no keyring found at /var/lib/ceph/osd/ceph-1/keyring, disabling cephx
Jun 21 16:45:02 pve2 ceph-osd[85907]: 2022-06-21T16:45:02.922+0200 7fbc5bf80f00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory
Jun 21 16:45:02 pve2 ceph-osd[85907]: 2022-06-21T16:45:02.922+0200 7fbc5bf80f00 -1 AuthRegistry(0x7fffc6dcb4c0) no keyring found at /var/lib/ceph/osd/ceph-1/keyring, disabling cephx
Jun 21 16:45:02 pve2 ceph-osd[85907]: failed to fetch mon config (--no-mon-config to skip)
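A side note on these messages: the missing keyring and the failed mon config fetch usually just mean the OSD's tmpfs data directory under /var/lib/ceph/osd/ceph-1 was never repopulated at boot, not necessarily that the disk itself is dead. A minimal sketch of re-running the activation and checking the result (again assuming LVM/BlueStore OSDs as created by pveceph; this also shows whether ceph-volume can still see the OSD at all):

Code:
#ceph-volume lvm activate --all
#ls /var/lib/ceph/osd/ceph-1/
#systemctl status ceph-osd@1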
 
Please provide the output of lsblk as well as the logs for all faulty OSDs (/var/log/ceph/ceph-osd.<X>.log).
Additionally, please provide the ceph log for one of the working hosts: /var/log/ceph/ceph.log
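Roughly like this (a sketch; replace <X> with the IDs of the failed OSDs):

Code:
#lsblk
#tail -n 200 /var/log/ceph/ceph-osd.<X>.log
#tail -n 500 /var/log/ceph/ceph.log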
 
root@pve2:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme2n1 259:0 0 238.5G 0 disk
├─nvme2n1p1 259:2 0 1007K 0 part
├─nvme2n1p2 259:3 0 512M 0 part
└─nvme2n1p3 259:4 0 238G 0 part
nvme3n1 259:1 0 238.5G 0 disk
├─nvme3n1p1 259:7 0 1007K 0 part
├─nvme3n1p2 259:8 0 512M 0 part
└─nvme3n1p3 259:9 0 238G 0 part
nvme0n1 259:5 0 1.7T 0 disk
nvme1n1 259:6 0 1.7T 0 disk

root@pve2:~# tail /var/log/ceph/ceph-osd.0.log
root@pve2:~# tail /var/log/ceph/ceph-osd.1.log

That is to say, both log files are empty.


The ceph.log excerpt is in the edit of my previous post.
 
Code:
2022-06-21T13:59:19.189943+0200 mon.pve1 (mon.0) 3233824 : cluster [WRN] Health check failed: 2/5 mons down, quorum pve1,pve2,pve5 (MON_DOWN)
So this not only affects the OSDs, but also your monitors?

Do all nodes in the cluster see each other (pvecm status)?
Can they reach each other over the ceph network(s)?
 
Can you send the result of

#ceph osd tree

?

Also, it seems that 1 OSD has crashed;
you can see the crash logs with

#ceph crash ls
#ceph crash info <idofthecrashlog>


Also, check that you didn't run out of memory (OOM):

#dmesg -T|grep -i oom
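
As a supplementary check (not part of the original suggestion): the systemd journal of a crashed OSD around the failure time from the cluster log above can also be inspected, e.g.:

Code:
#journalctl -u ceph-osd@5 --since "2022-06-21 13:50" --until "2022-06-21 14:10"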
 
Code:
2022-06-21T13:59:19.189943+0200 mon.pve1 (mon.0) 3233824 : cluster [WRN] Health check failed: 2/5 mons down, quorum pve1,pve2,pve5 (MON_DOWN)
So this not only affects the OSDs, but also your monitors?

Do all nodes in the cluster see each other (pvecm status)?
Can they reach each other over the ceph network(s)?
The monitors are fine again after a reboot.

The network is okay, I guess; I tested pings on each interface.
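For reference, a slightly more thorough network check than a plain ping might look like this (a sketch; the 10.20.15.x addresses are taken from the mon log above, and the large-packet test only applies if the Ceph network is configured for MTU 9000):

Code:
#ping -c 3 10.20.15.5
#ping -c 3 -M do -s 8972 10.20.15.5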
root@pve2:~# pvecm status
Cluster information
-------------------
Name: ptvceph
Config Version: 5
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Tue Jun 21 17:16:46 2022
Quorum provider: corosync_votequorum
Nodes: 5
Node ID: 0x00000002
Ring ID: 1.59b7
Quorate: Yes

Votequorum information
----------------------
Expected votes: 5
Highest expected: 5
Total votes: 5
Quorum: 3
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 172.20.15.3
0x00000002 1 172.20.15.4 (local)
0x00000003 1 172.20.15.5
0x00000004 1 172.20.15.6
0x00000005 1 172.20.15.7



root@pve2:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 17.46597 root default
-5 3.49319 host pve1
2 nvme 1.74660 osd.2 up 1.00000 1.00000
3 nvme 1.74660 osd.3 up 1.00000 1.00000
-3 3.49319 host pve2
0 nvme 1.74660 osd.0 down 0 1.00000
1 nvme 1.74660 osd.1 down 1.00000 1.00000
-7 3.49319 host pve3
4 nvme 1.74660 osd.4 down 0 1.00000
5 nvme 1.74660 osd.5 up 1.00000 1.00000
-9 3.49319 host pve4
6 nvme 1.74660 osd.6 up 1.00000 1.00000
7 nvme 1.74660 osd.7 down 0 1.00000
-11 3.49319 host pve5
8 nvme 1.74660 osd.8 up 1.00000 1.00000
9 nvme 1.74660 osd.9 down 1.00000 1.00000

root@pve2:~# ceph crash ls
ID ENTITY NEW
2022-05-06T13:30:44.817807Z_e2516f6d-1254-4854-891d-fdc815f1feb1 osd.9 *
2022-05-17T04:33:26.580145Z_f1017dac-5640-4fba-9dc5-77cad1414607 osd.4 *
2022-06-21T11:33:29.694849Z_f30b9381-4194-477c-aa46-5ec92f35b6aa osd.7 *

root@pve2:~# ceph crash info 2022-05-06T13:30:44.817807Z_e2516f6d-1254-4854-891d-fdc815f1feb1
{
"assert_condition": "r == 0",
"assert_file": "./src/os/bluestore/BlueFS.cc",
"assert_func": "int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64_t, char*)",
"assert_line": 1922,
"assert_msg": "./src/os/bluestore/BlueFS.cc: In function 'int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64_t, char*)' thread 7fce74137700 time 2022-05-06T15:30:44.815731+0200\n./src/os/bluestore/BlueFS.cc: 1922: FAILED ceph_assert(r == 0)\n",
"assert_thread_name": "tp_osd_tp",
"backtrace": [
"/lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7fce93ac4140]",
"gsignal()",
"abort()",
"(ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x18a) [0x564e7f4defde]",
"(KernelDevice::_aio_thread()+0x10d5) [0x564e800376c5]",
"(KernelDevice::AioCompletionThread::entry()+0xd) [0x564e8003b7dd]",
"/lib/x86_64-linux-gnu/libpthread.so.0(+0x8ea7) [0x7fce93ab8ea7]",
"clone()"
],
"ceph_version": "16.2.6",
"crash_id": "2022-05-06T13:30:44.817807Z_e2516f6d-1254-4854-891d-fdc815f1feb1",
"entity_name": "osd.9",
"io_error": true,
"io_error_code": -5,
"io_error_devname": "dm-1",
"io_error_length": 20480,
"io_error_offset": 1874801262592,
"io_error_optype": 8,
"io_error_path": "/var/lib/ceph/osd/ceph-9/block",
"os_id": "11",
"os_name": "Debian GNU/Linux 11 (bullseye)",
"os_version": "11 (bullseye)",
"os_version_id": "11",
"process_name": "ceph-osd",
"stack_sig": "e8babd764a9e3126fe27b72270b1693522622d44e7086a792f3245274e07ea06",
"timestamp": "2022-05-06T13:30:44.817807Z",
"utsname_hostname": "pve5",
"utsname_machine": "x86_64",
"utsname_release": "5.13.19-1-pve",
"utsname_sysname": "Linux",
"utsname_version": "#1 SMP PVE 5.13.19-3 (Tue, 23 Nov 2021 13:31:19 +0100)"
}

root@pve2:~# dmesg -T|grep -i oom
root@pve2:~#

It is empty.



I do have two 2 TB spare drives in the office. Should I just bring them in and add them as OSD 10 and 11 so the pool can refill?
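
For reference, adding a blank disk as a new OSD on a Proxmox node is normally a one-liner (a sketch; /dev/<new-disk> is a placeholder for whatever device node the spare drive gets):

Code:
#pveceph osd create /dev/<new-disk>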
 
The crash error seems to be related to the allocator, maybe not finding space.

What is your OSD space usage? (I'm seeing nearfull errors too, backfill too full.)

#ceph osd df ?

For the other stopped SSDs, what are their last log entries in /var/log/ceph/ceph-osd.<osdid>.log?
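
As an addition (not part of the original reply): since the crash above reports an I/O error on the block device, checking the drives' SMART data with the already-installed smartmontools would also be worthwhile, e.g.:

Code:
#smartctl -a /dev/nvme0n1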
 
