ceph issues after a kernel panic

Peter Amiri

Yesterday we suffered a kernel panic on one of our hosts. It locked up the entire cluster until we identified which host it was and restarted it. Since then we have had issues with our Ceph cluster: right now one PG is stuck unclean. I've pasted the ceph status below. Any help recovering the Ceph cluster and getting the VMs to respond again would be appreciated.

root@host11:~# ceph status
cluster:
id: fbd9dc8d-6898-4159-89a8-00448f2efd0b
health: HEALTH_ERR
Reduced data availability: 1 pg inactive, 1 pg incomplete
Degraded data redundancy: 1 pg unclean
59 stuck requests are blocked > 4096 sec

services:
mon: 3 daemons, quorum host12,host14,host15
mgr: host12(active), standbys: host14, host15
osd: 261 osds: 250 up, 250 in
rgw: 1 daemon active

data:
pools: 13 pools, 13392 pgs
objects: 7989k objects, 25804 GB
usage: 78048 GB used, 190 TB / 266 TB avail
pgs: 0.007% pgs not active
13391 active+clean
1 incomplete

io:
client: 2441 kB/s rd, 985 kB/s wr, 275 op/s rd, 113 op/s wr
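
In case it helps, these are the other read-only commands I've been using to narrow down the problem PG and the down OSDs (standard Ceph CLI; nothing here changes cluster state):

Code:
# list PGs that are stuck in a non-clean state
ceph pg dump_stuck unclean

# show which OSDs are down and where they sit in the CRUSH tree
ceph osd tree | grep -w down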
 
Please describe your Ceph Cluster Setup in full detail (hardware, network, config, software version, etc.)
 
We are running Proxmox 5.1 with Ceph Luminous. We have an HDD pool and an SSD pool. The SSD pool spans 9 hosts, each with 24 SSD OSDs. After the kernel panic we have 2 OSDs down, one of which I think is blocking a PG. The hosts are HP DL380s with two 10G NICs.

Config:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.100.30.0/24
filestore xattr use omap = true
fsid = fbd9dc8d-6898-4159-89a8-00448f2efd0b
keyring = /etc/pve/priv/$cluster.$name.keyring
mon allow pool delete = true
osd crush update on start = false
osd journal size = 5120
osd pool default min size = 1
public network = 10.100.30.0/24

[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.host14]
host = host14
mon addr = 10.100.30.14:6789

[mon.host12]
host = host12
mon addr = 10.100.30.12:6789

[mon.host15]
host = host15
mon addr = 10.100.30.15:6789

Crush map:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class hdd
device 7 osd.7 class hdd
device 8 osd.8 class hdd
device 9 osd.9 class hdd
device 10 osd.10 class hdd
device 11 osd.11 class hdd
device 12 osd.12 class hdd
device 13 osd.13 class hdd
device 14 osd.14 class hdd
device 15 osd.15 class hdd
device 16 osd.16 class hdd
device 17 osd.17 class hdd
device 18 osd.18 class hdd
device 19 osd.19 class hdd
device 20 osd.20 class hdd
device 21 osd.21 class hdd
device 22 osd.22 class hdd
device 23 osd.23 class hdd
device 24 osd.24 class hdd
device 25 osd.25 class hdd
device 26 osd.26 class hdd
device 27 osd.27 class hdd
device 28 osd.28 class hdd
device 29 osd.29 class hdd
device 30 osd.30 class hdd
device 31 osd.31 class hdd
device 32 osd.32 class hdd
device 33 osd.33 class hdd
device 34 osd.34 class hdd
device 35 osd.35 class hdd
device 36 osd.36 class hdd
device 37 osd.37 class hdd
device 38 osd.38 class hdd
device 39 osd.39 class hdd
device 40 osd.40 class hdd
device 41 osd.41 class hdd
device 42 osd.42 class hdd
device 43 osd.43 class hdd
device 44 osd.44 class hdd
device 45 osd.45 class ssd
device 46 osd.46 class ssd
device 47 osd.47 class ssd
device 48 osd.48 class ssd
device 49 osd.49 class ssd
device 50 osd.50 class ssd
device 51 osd.51 class ssd
device 52 osd.52 class ssd
device 53 osd.53 class ssd
device 54 osd.54 class ssd
device 55 osd.55 class ssd
device 56 osd.56 class ssd
device 57 osd.57 class ssd
device 58 osd.58 class ssd
device 59 osd.59 class ssd
device 60 osd.60 class ssd
device 61 osd.61 class ssd
device 62 osd.62 class ssd
device 63 osd.63 class ssd
device 64 osd.64 class ssd
device 65 osd.65 class ssd
device 66 osd.66 class ssd
device 67 osd.67 class ssd
device 68 osd.68 class ssd
device 69 osd.69 class ssd
device 70 osd.70 class ssd
device 71 osd.71 class ssd
device 72 osd.72 class ssd
device 73 osd.73 class ssd
device 74 osd.74 class ssd
device 75 osd.75 class ssd
device 76 osd.76 class ssd
device 77 osd.77 class ssd
device 78 osd.78 class ssd
device 79 osd.79 class ssd
device 80 osd.80 class ssd
device 81 osd.81 class ssd
device 82 osd.82 class ssd
device 83 osd.83 class ssd
device 84 osd.84 class ssd
device 85 osd.85 class ssd
device 86 osd.86 class ssd
device 87 osd.87 class ssd
device 88 osd.88 class ssd
device 89 osd.89 class ssd
device 90 osd.90 class ssd
device 91 osd.91 class ssd
device 92 osd.92 class ssd
device 93 osd.93 class ssd
device 94 osd.94 class ssd
device 95 osd.95 class ssd
device 96 osd.96 class ssd
device 97 osd.97 class ssd
device 98 osd.98 class ssd
device 99 osd.99 class ssd
device 100 osd.100 class ssd
device 101 osd.101 class ssd
device 102 osd.102 class ssd
device 103 osd.103 class ssd
device 104 osd.104 class ssd
device 105 osd.105 class ssd
device 106 osd.106 class ssd
device 107 osd.107 class ssd
device 108 osd.108 class ssd
device 109 osd.109 class ssd
device 110 osd.110 class ssd
device 111 osd.111 class ssd
device 112 osd.112 class ssd
device 113 osd.113 class ssd
device 114 osd.114 class ssd
device 115 osd.115 class ssd
device 116 osd.116 class ssd
device 117 osd.117 class ssd
device 118 osd.118 class ssd
device 119 osd.119 class ssd
device 120 osd.120 class ssd
device 121 osd.121 class ssd
device 122 osd.122 class ssd
device 123 osd.123 class ssd
device 124 osd.124 class ssd
device 125 osd.125 class ssd
device 126 osd.126 class ssd
device 127 osd.127 class ssd
device 128 osd.128 class ssd
device 129 osd.129 class ssd
device 130 osd.130 class ssd
device 131 osd.131 class ssd
device 132 osd.132 class ssd
device 133 osd.133 class ssd
device 134 osd.134 class ssd
device 135 osd.135 class ssd
device 136 osd.136 class ssd
device 137 osd.137 class ssd
device 138 osd.138 class ssd
device 139 osd.139 class ssd
device 140 osd.140 class ssd
device 141 osd.141 class ssd
device 142 osd.142 class ssd
device 143 osd.143 class ssd
device 144 osd.144 class ssd
device 145 osd.145 class ssd
device 146 osd.146 class ssd
device 147 osd.147 class ssd
device 148 osd.148 class ssd
device 149 osd.149 class ssd
device 150 osd.150 class ssd
device 151 osd.151 class ssd
device 152 osd.152 class ssd
device 153 osd.153 class ssd
device 154 osd.154 class ssd
device 155 osd.155 class ssd
device 156 osd.156 class ssd
device 157 osd.157 class ssd
device 158 osd.158 class ssd
device 159 osd.159 class ssd
device 160 osd.160 class ssd
device 161 osd.161 class ssd
device 162 osd.162 class ssd
device 163 osd.163 class ssd
device 164 osd.164 class ssd
device 165 osd.165 class ssd
device 166 osd.166 class ssd
device 167 osd.167 class ssd
device 168 osd.168 class ssd
device 169 osd.169 class ssd
device 170 osd.170 class ssd
device 171 osd.171 class ssd
device 172 osd.172 class ssd
device 173 osd.173 class ssd
device 174 osd.174 class ssd
device 175 osd.175 class ssd
device 176 osd.176 class ssd
device 177 osd.177 class ssd
device 178 osd.178 class ssd
device 179 osd.179 class ssd
device 180 osd.180 class ssd
device 181 osd.181 class ssd
device 182 osd.182 class ssd
device 183 osd.183 class ssd
device 184 osd.184 class ssd
device 185 osd.185 class ssd
device 186 osd.186 class ssd
device 187 osd.187 class ssd
device 188 osd.188 class ssd
device 189 osd.189 class ssd
device 190 osd.190 class ssd
device 191 osd.191 class ssd
device 192 osd.192 class ssd
device 193 osd.193 class ssd
device 194 osd.194 class ssd
device 195 osd.195 class ssd
device 196 osd.196 class ssd
device 197 osd.197 class ssd
device 198 osd.198 class ssd
device 199 osd.199 class ssd
device 200 osd.200 class ssd
device 201 osd.201 class ssd
device 202 osd.202 class ssd
device 203 osd.203 class ssd
device 204 osd.204 class ssd
device 205 osd.205 class ssd
device 206 osd.206 class ssd
device 207 osd.207 class ssd
device 208 osd.208 class ssd
device 209 osd.209 class ssd
device 210 osd.210 class ssd
device 211 osd.211 class ssd
device 212 osd.212 class ssd
device 213 osd.213 class ssd
device 214 osd.214 class ssd
device 215 osd.215 class ssd
device 216 osd.216 class ssd
device 217 osd.217 class ssd
device 218 osd.218 class ssd
device 219 osd.219 class ssd
device 220 osd.220 class ssd
device 221 osd.221 class ssd
device 222 osd.222 class ssd
device 223 osd.223 class ssd
device 224 osd.224 class ssd
device 225 osd.225 class ssd
device 226 osd.226 class ssd
device 227 osd.227 class ssd
device 228 osd.228 class ssd
device 229 osd.229 class ssd
device 230 osd.230 class ssd
device 231 osd.231 class ssd
device 232 osd.232 class ssd
device 233 osd.233 class ssd
device 234 osd.234 class ssd
device 235 osd.235 class ssd
device 236 osd.236 class ssd
device 237 osd.237 class ssd
device 238 osd.238 class ssd
device 239 osd.239 class ssd
device 240 osd.240 class ssd
device 241 osd.241 class ssd
device 242 osd.242 class ssd
device 243 osd.243 class ssd
device 244 osd.244 class ssd
device 245 osd.245 class ssd
device 246 osd.246 class ssd
device 247 osd.247 class ssd
device 248 osd.248 class ssd
device 249 osd.249 class ssd
device 250 osd.250 class ssd
device 251 osd.251 class ssd
device 252 osd.252 class ssd
device 253 osd.253 class ssd
device 254 osd.254 class ssd
device 255 osd.255 class ssd
device 256 osd.256 class ssd
device 257 osd.257 class ssd
device 258 osd.258 class ssd
device 259 osd.259 class ssd
device 260 osd.260 class ssd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host host11 {
id -2 # do not change unnecessarily
id -33 class hdd # do not change unnecessarily
id -43 class ssd # do not change unnecessarily
# weight 32.760
alg straw
hash 0 # rjenkins1
item osd.0 weight 3.640
item osd.1 weight 3.640
item osd.2 weight 3.640
item osd.3 weight 3.640
item osd.4 weight 3.640
item osd.5 weight 3.640
item osd.6 weight 3.640
item osd.7 weight 3.640
item osd.8 weight 3.640
}
host host12 {
id -3 # do not change unnecessarily
id -34 class hdd # do not change unnecessarily
id -44 class ssd # do not change unnecessarily
# weight 32.760
alg straw
hash 0 # rjenkins1
item osd.9 weight 3.640
item osd.10 weight 3.640
item osd.12 weight 3.640
item osd.13 weight 3.640
item osd.14 weight 3.640
item osd.15 weight 3.640
item osd.16 weight 3.640
item osd.17 weight 3.640
item osd.11 weight 3.640
}
host host13 {
id -4 # do not change unnecessarily
id -35 class hdd # do not change unnecessarily
id -45 class ssd # do not change unnecessarily
# weight 32.760
alg straw
hash 0 # rjenkins1
item osd.18 weight 3.640
item osd.19 weight 3.640
item osd.20 weight 3.640
item osd.21 weight 3.640
item osd.22 weight 3.640
item osd.23 weight 3.640
item osd.24 weight 3.640
item osd.26 weight 3.640
item osd.25 weight 3.640
}
host host14 {
id -5 # do not change unnecessarily
id -36 class hdd # do not change unnecessarily
id -46 class ssd # do not change unnecessarily
# weight 32.760
alg straw
hash 0 # rjenkins1
item osd.27 weight 3.640
item osd.28 weight 3.640
item osd.29 weight 3.640
item osd.30 weight 3.640
item osd.31 weight 3.640
item osd.32 weight 3.640
item osd.33 weight 3.640
item osd.34 weight 3.640
item osd.35 weight 3.640
}
host host15 {
id -6 # do not change unnecessarily
id -41 class hdd # do not change unnecessarily
id -47 class ssd # do not change unnecessarily
# weight 32.760
alg straw
hash 0 # rjenkins1
item osd.36 weight 3.640
item osd.37 weight 3.640
item osd.38 weight 3.640
item osd.39 weight 3.640
item osd.40 weight 3.640
item osd.41 weight 3.640
item osd.42 weight 3.640
item osd.43 weight 3.640
item osd.44 weight 3.640
}
root default {
id -1 # do not change unnecessarily
id -42 class hdd # do not change unnecessarily
id -48 class ssd # do not change unnecessarily
# weight 163.800
alg straw
hash 0 # rjenkins1
item host11 weight 32.760
item host12 weight 32.760
item host13 weight 32.760
item host14 weight 32.760
item host15 weight 32.760
}
host host16 {
id -8 # do not change unnecessarily
id -25 class hdd # do not change unnecessarily
id -17 class ssd # do not change unnecessarily
# weight 10.560
alg straw
hash 0 # rjenkins1
item osd.45 weight 0.440
item osd.46 weight 0.440
item osd.47 weight 0.440
item osd.48 weight 0.440
item osd.49 weight 0.440
item osd.50 weight 0.440
item osd.51 weight 0.440
item osd.52 weight 0.440
item osd.53 weight 0.440
item osd.54 weight 0.440
item osd.55 weight 0.440
item osd.56 weight 0.440
item osd.57 weight 0.440
item osd.58 weight 0.440
item osd.59 weight 0.440
item osd.60 weight 0.440
item osd.61 weight 0.440
item osd.62 weight 0.440
item osd.63 weight 0.440
item osd.64 weight 0.440
item osd.65 weight 0.440
item osd.66 weight 0.440
item osd.67 weight 0.440
item osd.68 weight 0.440
}
host host17 {
id -9 # do not change unnecessarily
id -26 class hdd # do not change unnecessarily
id -18 class ssd # do not change unnecessarily
# weight 10.560
alg straw
hash 0 # rjenkins1
item osd.69 weight 0.440
item osd.70 weight 0.440
item osd.71 weight 0.440
item osd.72 weight 0.440
item osd.73 weight 0.440
item osd.74 weight 0.440
item osd.75 weight 0.440
item osd.76 weight 0.440
item osd.77 weight 0.440
item osd.78 weight 0.440
item osd.79 weight 0.440
item osd.80 weight 0.440
item osd.81 weight 0.440
item osd.82 weight 0.440
item osd.83 weight 0.440
item osd.84 weight 0.440
item osd.85 weight 0.440
item osd.86 weight 0.440
item osd.87 weight 0.440
item osd.88 weight 0.440
item osd.89 weight 0.440
item osd.90 weight 0.440
item osd.91 weight 0.440
item osd.92 weight 0.440
}
host host18 {
id -10 # do not change unnecessarily
id -27 class hdd # do not change unnecessarily
id -19 class ssd # do not change unnecessarily
# weight 10.560
alg straw
hash 0 # rjenkins1
item osd.93 weight 0.440
item osd.94 weight 0.440
item osd.95 weight 0.440
item osd.96 weight 0.440
item osd.97 weight 0.440
item osd.98 weight 0.440
item osd.99 weight 0.440
item osd.100 weight 0.440
item osd.101 weight 0.440
item osd.102 weight 0.440
item osd.103 weight 0.440
item osd.104 weight 0.440
item osd.105 weight 0.440
item osd.106 weight 0.440
item osd.107 weight 0.440
item osd.108 weight 0.440
item osd.109 weight 0.440
item osd.110 weight 0.440
item osd.111 weight 0.440
item osd.112 weight 0.440
item osd.113 weight 0.440
item osd.114 weight 0.440
item osd.115 weight 0.440
item osd.116 weight 0.440
}
host host19 {
id -11 # do not change unnecessarily
id -28 class hdd # do not change unnecessarily
id -20 class ssd # do not change unnecessarily
# weight 10.560
alg straw
hash 0 # rjenkins1
item osd.117 weight 0.440
item osd.118 weight 0.440
item osd.119 weight 0.440
item osd.120 weight 0.440
item osd.121 weight 0.440
item osd.122 weight 0.440
item osd.123 weight 0.440
item osd.124 weight 0.440
item osd.125 weight 0.440
item osd.126 weight 0.440
item osd.127 weight 0.440
item osd.128 weight 0.440
item osd.129 weight 0.440
item osd.130 weight 0.440
item osd.131 weight 0.440
item osd.132 weight 0.440
item osd.133 weight 0.440
item osd.134 weight 0.440
item osd.135 weight 0.440
item osd.136 weight 0.440
item osd.137 weight 0.440
item osd.138 weight 0.440
item osd.139 weight 0.440
item osd.140 weight 0.440
}
host host20 {
id -12 # do not change unnecessarily
id -29 class hdd # do not change unnecessarily
id -21 class ssd # do not change unnecessarily
# weight 10.560
alg straw
hash 0 # rjenkins1
item osd.141 weight 0.440
item osd.142 weight 0.440
item osd.143 weight 0.440
item osd.144 weight 0.440
item osd.145 weight 0.440
item osd.146 weight 0.440
item osd.147 weight 0.440
item osd.148 weight 0.440
item osd.149 weight 0.440
item osd.150 weight 0.440
item osd.151 weight 0.440
item osd.152 weight 0.440
item osd.153 weight 0.440
item osd.154 weight 0.440
item osd.155 weight 0.440
item osd.156 weight 0.440
item osd.157 weight 0.440
item osd.158 weight 0.440
item osd.159 weight 0.440
item osd.160 weight 0.440
item osd.161 weight 0.440
item osd.162 weight 0.440
item osd.163 weight 0.440
item osd.164 weight 0.440
}
host host27 {
id -16 # do not change unnecessarily
id -40 class hdd # do not change unnecessarily
id -22 class ssd # do not change unnecessarily
# weight 21.120
alg straw
hash 0 # rjenkins1
item osd.237 weight 0.880
item osd.238 weight 0.880
item osd.239 weight 0.880
item osd.240 weight 0.880
item osd.241 weight 0.880
item osd.242 weight 0.880
item osd.243 weight 0.880
item osd.244 weight 0.880
item osd.245 weight 0.880
item osd.246 weight 0.880
item osd.247 weight 0.880
item osd.248 weight 0.880
item osd.249 weight 0.880
item osd.250 weight 0.880
item osd.251 weight 0.880
item osd.252 weight 0.880
item osd.253 weight 0.880
item osd.254 weight 0.880
item osd.255 weight 0.880
item osd.256 weight 0.880
item osd.257 weight 0.880
item osd.258 weight 0.880
item osd.259 weight 0.880
item osd.260 weight 0.880
}
host host28 {
id -13 # do not change unnecessarily
id -37 class hdd # do not change unnecessarily
id -23 class ssd # do not change unnecessarily
# weight 21.120
alg straw
hash 0 # rjenkins1
item osd.165 weight 0.880
item osd.166 weight 0.880
item osd.167 weight 0.880
item osd.168 weight 0.880
item osd.169 weight 0.880
item osd.170 weight 0.880
item osd.171 weight 0.880
item osd.172 weight 0.880
item osd.173 weight 0.880
item osd.174 weight 0.880
item osd.175 weight 0.880
item osd.176 weight 0.880
item osd.177 weight 0.880
item osd.178 weight 0.880
item osd.179 weight 0.880
item osd.180 weight 0.880
item osd.181 weight 0.880
item osd.182 weight 0.880
item osd.183 weight 0.880
item osd.184 weight 0.880
item osd.185 weight 0.880
item osd.186 weight 0.880
item osd.187 weight 0.880
item osd.188 weight 0.880
}
host host29 {
id -14 # do not change unnecessarily
id -38 class hdd # do not change unnecessarily
id -24 class ssd # do not change unnecessarily
# weight 21.120
alg straw
hash 0 # rjenkins1
item osd.189 weight 0.880
item osd.190 weight 0.880
item osd.191 weight 0.880
item osd.192 weight 0.880
item osd.193 weight 0.880
item osd.194 weight 0.880
item osd.195 weight 0.880
item osd.196 weight 0.880
item osd.197 weight 0.880
item osd.198 weight 0.880
item osd.199 weight 0.880
item osd.200 weight 0.880
item osd.201 weight 0.880
item osd.202 weight 0.880
item osd.203 weight 0.880
item osd.204 weight 0.880
item osd.205 weight 0.880
item osd.206 weight 0.880
item osd.207 weight 0.880
item osd.208 weight 0.880
item osd.209 weight 0.880
item osd.210 weight 0.880
item osd.211 weight 0.880
item osd.212 weight 0.880
}
host host30 {
id -15 # do not change unnecessarily
id -39 class hdd # do not change unnecessarily
id -31 class ssd # do not change unnecessarily
# weight 21.120
alg straw
hash 0 # rjenkins1
item osd.213 weight 0.880
item osd.214 weight 0.880
item osd.215 weight 0.880
item osd.216 weight 0.880
item osd.217 weight 0.880
item osd.218 weight 0.880
item osd.219 weight 0.880
item osd.220 weight 0.880
item osd.221 weight 0.880
item osd.222 weight 0.880
item osd.223 weight 0.880
item osd.224 weight 0.880
item osd.225 weight 0.880
item osd.226 weight 0.880
item osd.227 weight 0.880
item osd.228 weight 0.880
item osd.229 weight 0.880
item osd.230 weight 0.880
item osd.231 weight 0.880
item osd.232 weight 0.880
item osd.233 weight 0.880
item osd.234 weight 0.880
item osd.235 weight 0.880
item osd.236 weight 0.880
}
root ssd-pool {
id -7 # do not change unnecessarily
id -30 class hdd # do not change unnecessarily
id -32 class ssd # do not change unnecessarily
# weight 137.280
alg straw
hash 0 # rjenkins1
item host16 weight 10.560
item host17 weight 10.560
item host18 weight 10.560
item host19 weight 10.560
item host20 weight 10.560
item host27 weight 21.120
item host28 weight 21.120
item host29 weight 21.120
item host30 weight 21.120
}

# rules
rule replicated_ruleset {
id 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}
rule ssd_ruleset {
id 1
type replicated
min_size 1
max_size 10
step take ssd-pool
step chooseleaf firstn 0 type host
step emit
}

# end crush map
 
Have you tried
Code:
ceph health detail
to see which PG is problematic and which OSDs it is mapped to?
You should see a line similar to:
Code:
pg xx.yy is <status>, acting [ osdX, osdY, osdZ, ... ]

If one or more of the OSDs to which the PG is mapped is working, you could even try a repair of the PG:
Code:
ceph pg repair <pg>
 
Here is what I get with ceph health detail:

code:
root@host17:/var/log/ceph# ceph health detail
HEALTH_ERR noout flag(s) set; 2 osds down; Reduced data availability: 1 pg inactive, 1 pg incomplete; Degraded data redundancy: 1 pg unclean; 60 slow requests are blocked > 32 sec; 1 stuck requests are blocked > 4096 sec
OSDMAP_FLAGS noout flag(s) set
OSD_DOWN 2 osds down
osd.70 (root=ssd-pool,host=host17) is down
osd.86 (root=ssd-pool,host=host17) is down
PG_AVAILABILITY Reduced data availability: 1 pg inactive, 1 pg incomplete
pg 15.1f26 is incomplete, acting [134,159]
PG_DEGRADED Degraded data redundancy: 1 pg unclean
pg 15.1f26 is stuck unclean for 81908.342558, current state incomplete, last acting [134,159]
REQUEST_SLOW 60 slow requests are blocked > 32 sec
60 ops are blocked > 2097.15 sec
osd.134 has blocked requests > 2097.15 sec
REQUEST_STUCK 1 stuck requests are blocked > 4096 sec
1 ops are blocked > 134218 sec
osd.70 has stuck requests > 134218 sec
 
PG 15.1f26 is on OSDs 134 and 159 (do you have size=2 on this pool?) and OSD.134 has blocked requests. Is OSD.134 working?
You have problems with OSD.70 too. Is it working?
 
I really don't know how to answer that. I've tried stopping and starting it to see if that resolves the issue, but I still see the blocked requests. Would it be wise to reduce the pool size to 1 to see if we can get the Ceph cluster working again?
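
If it matters, my understanding is that the change would look something like the following (the pool name below is a placeholder for our SSD pool; I have not run this, and I realize size=1 would leave no redundancy at all):

Code:
# check the current replica settings of the pool (placeholder pool name)
ceph osd pool get <ssd-pool-name> size
ceph osd pool get <ssd-pool-name> min_size

# what lowering the replica count would look like -- risky, no redundancy left
ceph osd pool set <ssd-pool-name> size 1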
 
Please describe your Ceph Cluster Setup in full detail (hardware, network, config, software version, etc.)
Tom,

Did you see anything in the configuration? We are dead in the water here. Our VMs are becoming unresponsive and we cannot restart any of them. We can stop a VM but then can't get it to start up again. The Ceph cluster, most likely just OSD 134, is blocking everything from running.

-Peter
 
According to CEPH health:

Code:
ceph health detail
HEALTH_ERR Reduced data availability: 1 pg inactive, 1 pg incomplete; Degraded data redundancy: 1 pg unclean; 60 stuck requests are blocked > 4096 sec
PG_AVAILABILITY Reduced data availability: 1 pg inactive, 1 pg incomplete
    pg 15.1f26 is incomplete, acting [134,162,183]
PG_DEGRADED Degraded data redundancy: 1 pg unclean
    pg 15.1f26 is stuck unclean for 84852.193909, current state incomplete, last acting [134,162,183]
REQUEST_STUCK 60 stuck requests are blocked > 4096 sec
    60 ops are blocked > 8388.61 sec
    osd.134 has stuck requests > 8388.61 sec

PG 15.1f26 is incomplete. Does anyone know how to find out whether any of the 3 replicas is complete, and how to force Ceph to use that copy?
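
For what it's worth, this is as far as I've gotten on my own. I assume the peering information from a PG query is where the answer would be, but I'm not sure how to read it:

Code:
# dump the PG's full state; the "recovery_state" and "peer_info" sections
# should show what each OSD knows about this PG
ceph pg 15.1f26 query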
 
What do you get from
Code:
ceph pg dump|egrep "^(PG_STAT|15.1f26)"

You could also check that OSD's log, /var/log/ceph/ceph-osd.134.log, on the host where that OSD lives.
 
Here is what I get:
Code:
root@host17:/var/log/ceph# ceph pg dump|egrep "^(PG_STAT|15.1f26)"

dumped all

PG_STAT OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES      LOG  DISK_LOG STATE        STATE_STAMP                VERSION          REPORTED         UP            UP_PRIMARY ACTING        ACTING_PRIMARY LAST_SCRUB       SCRUB_STAMP                LAST_DEEP_SCRUB  DEEP_SCRUB_STAMP           

15.1f26     610                  0        0         0       0 2550589952 1584     1584   incomplete 2017-11-27 12:02:41.489469    111527'105509    115109:267231 [134,162,183]        134 [134,162,183]            134    111527'104760 2017-11-24 21:13:17.445046    111527'104760 2017-11-24 21:13:17.445046
 
OSD 134 is the primary for that PG, but it seems to have problems communicating with the other OSDs.

On the host where OSD 134 lives (host19 according to the CRUSH map), you should check
Code:
systemctl status ceph-osd@134
and the last lines of /var/log/ceph/ceph-osd.134.log.
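
For example, something along these lines (adjust the line count and time window as you like):

Code:
# last lines of the OSD's own log file
tail -n 100 /var/log/ceph/ceph-osd.134.log

# or the systemd journal for the same daemon
journalctl -u ceph-osd@134 --since "1 hour ago"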
 
It looks good to me:

Code:
root@host19:~# systemctl status ceph-osd@134
● ceph-osd@134.service - Ceph object storage daemon osd.134
   Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/ceph-osd@.service.d
           └─ceph-after-pve-cluster.conf
   Active: active (running) since Mon 2017-11-27 12:07:58 EST; 13min ago
  Process: 1303744 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id 134 (code=exited, status=0/SUCCESS)
 Main PID: 1303749 (ceph-osd)
    Tasks: 66
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@134.service
           └─1303749 /usr/bin/ceph-osd -f --cluster ceph --id 134 --setuser ceph --setgroup ceph

Nov 27 12:07:58 host19 systemd[1]: Starting Ceph object storage daemon osd.134...
Nov 27 12:07:58 host19 systemd[1]: Started Ceph object storage daemon osd.134.
Nov 27 12:07:58 host19 ceph-osd[1303749]: starting osd.134 at - osd_data /var/lib/ceph/osd/ceph-134 /var/lib/ceph/osd/ceph-134/journal
Nov 27 12:08:03 host19 ceph-osd[1303749]: 2017-11-27 12:08:03.948012 7f2a49a36e00 -1 osd.134 115114 log_to_monitors {default=true}
 
What is in the OSD log?

In the end, I think you could mark that OSD out and let Ceph rebalance the PGs away from it, or at least try a ceph pg repair 15.1f26.

I think it's better if you wait for a reply from someone with more Ceph experience than me. I tried to help with what I know, but I don't understand why that OSD still has stuck requests.
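
If you decide to try that, I believe the commands would be roughly as follows (please double-check before running; marking the OSD out will trigger data movement):

Code:
# take the suspect OSD out so Ceph re-creates its PG copies elsewhere
ceph osd out 134

# or first try a repair of the problem PG
ceph pg repair 15.1f26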
 
Here is how we ultimately solved this issue: we set the pool size down to 2 and then back to 3, then restarted the OSDs. The cluster started to rebalance and cleared itself.
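
For anyone finding this thread later, those steps map to roughly the following commands (the pool name is a placeholder for our SSD pool; we restarted the OSDs one at a time):

Code:
ceph osd pool set <ssd-pool-name> size 2
ceph osd pool set <ssd-pool-name> size 3

# then restart the affected OSDs, e.g.
systemctl restart ceph-osd@134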
 
