This is what ceph -s looks like:
root@utr-tst-vh03:~# ceph -s
  cluster:
    id:     67b4dbb5-1d5e-4b62-89b0-46ff1ec560fd
    health: HEALTH_WARN
            1 filesystem is degraded
            1 MDSs report slow metadata IOs
            7 osds down
            2 hosts (8 osds) down
            Reduced data availability: 193 pgs inactive
            5 daemons have recently crashed

  services:
    mon: 4 daemons, quorum utr-tst-vh02,utr-tst-vh03,utr-tst-hv04,utr-tst-vh01 (age 17h)
    mgr: utr-tst-hv04(active, since 17h), standbys: utr-tst-vh01
    mds: 1/1 daemons up, 1 standby
    osd: 16 osds: 4 up (since 25h), 11 in (since 16h)

  data:
    volumes: 0/1 healthy, 1 recovering
    pools:   4 pools, 193 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             193 unknown
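For reference, the usual follow-up commands to narrow down which OSDs are down and what crashed (standard Ceph CLI, nothing cluster-specific assumed):

ceph health detail   # expand each health warning, including the 193 inactive PGs
ceph osd tree        # per-host view of which OSDs are up/down and in/out
ceph crash ls        # list the 5 recently crashed daemons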
And ceph versions:
root@utr-tst-vh03:~# ceph versions
{
    "mon": {
        "ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable)": 4
    },
    "mgr": {
        "ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.5 (9b9dd76e12f1907fe5dcc0c1fadadbb784022a42) pacific (stable)": 1,
        "ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable)": 3
    },
    "mds": {
        "ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable)": 2
    },
    "overall": {
        "ceph version 16.2.5 (9b9dd76e12f1907fe5dcc0c1fadadbb784022a42) pacific (stable)": 1,
        "ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable)": 11
    }
}
I see one OSD is still on the older 16.2.5 release in between....
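To find out which daemon is still on 16.2.5, something like this should work (ceph tell works on any setup; ceph orch ps assumes the cluster is managed by cephadm):

ceph tell osd.* version   # ask every OSD daemon for its version individually
ceph orch ps              # cephadm only: lists each daemon with its running version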