RE: ceph health remains: HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
I followed the install guide from here:
https://pve.proxmox.com/wiki/Ceph_Server
It is a 3-server cluster, and everything is up.
Code:
cat /etc/pve/ceph.conf
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
auth supported = cephx
cluster network = 10.0.0.0/24
filestore xattr use omap = true
fsid = e22d5ceb-6a2d-4fb6-b027-e9f9790e3907
keyring = /etc/pve/priv/$cluster.$name.keyring
osd journal size = 5120
osd pool default min size = 1
public network = 10.0.0.0/24
[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
[mon.0]
host = proxmox01
mon addr = 10.0.0.20:6789
[mon.1]
host = proxmox02
mon addr = 10.0.0.30:6789
[mon.2]
host = proxmox03
mon addr = 10.0.0.40:6789
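The config sets osd pool default min size = 1 but leaves osd pool default size at its default (3 on recent releases), so each PG still wants a replica on every host. To confirm what the existing pools actually use (the pool name rbd below is just the stock default pool; substitute whatever ceph osd lspools reports):
Code:
ceph osd lspools
ceph osd pool get rbd size
ceph osd pool get rbd min_size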
Code:
ceph -s
cluster e22d5ceb-6a2d-4fb6-b027-e9f9790e3907
health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
monmap e3: 3 mons at {0=10.0.0.20:6789/0,1=10.0.0.30:6789/0,2=10.0.0.40:6789/0}, election epoch 24, quorum 0,1,2 0,1,2
osdmap e21: 3 osds: 3 up, 3 in
pgmap v51: 192 pgs, 3 pools, 0 bytes data, 0 objects
102216 kB used, 9083 MB / 9182 MB avail
192 active+degraded
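Everything is active+degraded with zero objects, which points at replica placement failing rather than data loss. To see which PGs are stuck and which OSDs each one maps to:
Code:
ceph health detail
ceph pg dump_stuck unclean
# then query one of the listed PG ids (0.1 is just an example):
# ceph pg 0.1 query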
Code:
ceph osd tree
# id weight type name up/down reweight
-1 0 root default
-2 0 host proxmox01
0 0 osd.0 up 1
-3 0 host proxmox02
1 0 osd.1 up 1
-4 0 host proxmox03
2 0 osd.2 up 1
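One thing that stands out: every OSD in the tree has CRUSH weight 0. The default weight is the disk size in TB, so very small disks (these three add up to only ~9 GB) round down to 0, and CRUSH will never map PGs onto a weight-0 OSD, which would leave every PG degraded exactly like this. Assuming that is the cause here, a sketch of the fix is to give each OSD a small nonzero weight:
Code:
# assumes the zero weights come from the tiny test disks;
# 0.01 corresponds to roughly 10 GB (weights are in TB)
ceph osd crush reweight osd.0 0.01
ceph osd crush reweight osd.1 0.01
ceph osd crush reweight osd.2 0.01
ceph -s    # PGs should peer and go active+clean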
Code:
netstat -an | grep 6789
tcp 0 0 10.0.0.20:6789 0.0.0.0:* LISTEN
tcp 0 0 10.0.0.20:47486 10.0.0.40:6789 TIME_WAIT
tcp 0 0 10.0.0.20:47370 10.0.0.40:6789 TIME_WAIT
tcp 0 0 10.0.0.20:49812 10.0.0.30:6789 TIME_WAIT
tcp 0 0 10.0.0.20:49895 10.0.0.30:6789 TIME_WAIT
tcp 0 0 10.0.0.20:47338 10.0.0.40:6789 TIME_WAIT
tcp 0 0 10.0.0.20:57622 10.0.0.20:6789 ESTABLISHED
tcp 0 0 10.0.0.20:47307 10.0.0.40:6789 TIME_WAIT
tcp 0 0 10.0.0.20:47291 10.0.0.40:6789 TIME_WAIT
tcp 0 0 10.0.0.20:47452 10.0.0.40:6789 TIME_WAIT
tcp 0 0 10.0.0.20:49879 10.0.0.30:6789 TIME_WAIT
tcp 0 0 10.0.0.20:47290 10.0.0.40:6789 TIME_WAIT
tcp 0 0 10.0.0.20:45242 10.0.0.20:6789 TIME_WAIT
tcp 0 0 10.0.0.20:47354 10.0.0.40:6789 TIME_WAIT
tcp 0 0 10.0.0.20:6789 10.0.0.20:57622 ESTABLISHED
tcp 0 0 10.0.0.20:6789 10.0.0.40:36518 ESTABLISHED
tcp 0 0 10.0.0.20:47295 10.0.0.40:6789 TIME_WAIT
tcp 0 0 10.0.0.20:6789 10.0.0.30:41930 ESTABLISHED
tcp 0 0 10.0.0.20:49957 10.0.0.30:6789 TIME_WAIT
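The mon sockets look fine, and quorum can be checked directly instead of inferred from netstat:
Code:
ceph mon stat
ceph quorum_status --format json-pretty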
Code:
pvesm status
local dir 1 2580272 136208 2444064 5.78%
lvm01 lvm 1 11759616 0 0 100.00%
Code:
ceph auth list
installed auth entries:
osd.0
key: AQA44klViOQgIhAAcU6bo4cveYBJEUnm98aWXg==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.1
key: AQAc40lV2FjvLxAAuMs7iXFCI97PjD0cf5YyCQ==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.2
key: AQAt40lVCAeENRAAIbkbRUxf0Mh4qZk2KXPFXQ==
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQAF4ElVSMamHRAAKnIuWdaQQEpiw+3YC1Dhsw==
caps: [mds] allow
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQAH4ElV+CJmCxAAExN2m6etku0R+K/aM6sfXg==
caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
key: AQAG4ElVCDCqMRAAnvTFx1O/8Q2qOcY1c89GoQ==
caps: [mon] allow profile bootstrap-osd