Hi,
I have a 3-node Ceph cluster which was updated yesterday to Proxmox 5.1 and Ceph Luminous.
I strictly followed the guides from the wiki, so in the end I had a healthy cluster.
Then I executed:
Code:
ceph osd crush tunables optimal
It started rearranging objects, but after a while it got stuck with:
Code:
mon.0 mon.0 172.30.3.21:6789/0 122 : cluster [WRN] Health check update: 169806/1144269 objects misplaced (14.840%) (OBJECT_MISPLACED)
After I restart a node, it starts to work again for a couple of minutes until it gets stuck again.
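For completeness, this is how I checked that the profile change actually applied (standard Ceph commands; reverting to hammer is only a guess at the profile the cluster ran before the update, I have not tried it):
Code:
# print the CRUSH tunables the cluster is currently running with
ceph osd crush show-tunables
# possible (untested) workaround: switch back to the previous profile
# (hammer is an assumption; it depends on what the cluster ran before)
ceph osd crush tunables hammer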
In general, I don't think this can be fixed easily, so I am going to delete my pools and restore everything from backup.
Is it safe to set the tunables to optimal after an update?
Maybe that step should be removed from the guides.
For reference, here is the full cluster status:
Code:
  cluster:
    id:     c4d0e591-a919-4df0-8627-d2fda956f7ff
    health: HEALTH_WARN
            2 nearfull osd(s)
            2 pool(s) nearfull
            169806/1144269 objects misplaced (14.840%)
            Reduced data availability: 598 pgs inactive
            Degraded data redundancy: 17713/1144269 objects degraded (1.548%), 89 pgs degraded, 89 pgs undersized

  services:
    mon: 3 daemons, quorum 0,1,2
    mgr: ceph3(active), standbys: ceph2, ceph1
    osd: 36 osds: 36 up, 36 in; 598 remapped pgs

  data:
    pools:   2 pools, 2048 pgs
    objects: 372k objects, 1481 GB
    usage:   4483 GB used, 2972 GB / 7456 GB avail
    pgs:     29.199% pgs not active
             17713/1144269 objects degraded (1.548%)
             169806/1144269 objects misplaced (14.840%)
             1450 active+clean
             509  activating+remapped
             89   activating+undersized+degraded+remapped
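Before deleting the pools I also looked at the stuck PGs with the usual diagnostic commands (sketch only; the PG id below is a placeholder, not one from my cluster):
Code:
# detailed per-check health output
ceph health detail
# list PGs stuck in an inactive state and their acting OSDs
ceph pg dump_stuck inactive
# per-OSD utilization, to see which OSDs are nearfull
ceph osd df
# query a single stuck PG (replace 1.2f with a real PG id)
ceph pg 1.2f query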