Ceph stuck at: objects misplaced (0.064%)

mohnewald
Aug 21, 2018
Hello,

I am running 5.4-15 with Ceph, and I have been stuck at 0.064% objects misplaced for days now; I don't know why.

root@node01:~ # ceph -s
  cluster:
    id:     251c937e-0b55-48c1-8f34-96e84e4023d4
    health: HEALTH_WARN
            1803/2799972 objects misplaced (0.064%)
            mon node02 is low on available space

  services:
    mon: 3 daemons, quorum node01,node02,node03
    mgr: node03(active), standbys: node01, node02
    osd: 16 osds: 16 up, 16 in; 1 remapped pgs

  data:
    pools:   1 pools, 512 pgs
    objects: 933.32k objects, 2.68TiB
    usage:   9.54TiB used, 5.34TiB / 14.9TiB avail
    pgs:     1803/2799972 objects misplaced (0.064%)
             511 active+clean
             1   active+clean+remapped

  io:
    client: 131KiB/s rd, 8.57MiB/s wr, 28op/s rd, 847op/s wr

root@node01:~ # ceph health detail
HEALTH_WARN 1803/2800179 objects misplaced (0.064%); mon node02 is low on available space
OBJECT_MISPLACED 1803/2800179 objects misplaced (0.064%)
MON_DISK_LOW mon node02 is low on available space
    mon.node02 has 28% avail
root@node01:~ # ceph versions
{
    "mon": {
        "ceph version 12.2.13 (98af9a6b9a46b2d562a0de4b09263d70aeb1c9dd) luminous (stable)": 3
    },
    "mgr": {
        "ceph version 12.2.13 (98af9a6b9a46b2d562a0de4b09263d70aeb1c9dd) luminous (stable)": 3
    },
    "osd": {
        "ceph version 12.2.13 (98af9a6b9a46b2d562a0de4b09263d70aeb1c9dd) luminous (stable)": 16
    },
    "mds": {},
    "overall": {
        "ceph version 12.2.13 (98af9a6b9a46b2d562a0de4b09263d70aeb1c9dd) luminous (stable)": 22
    }
}

root@node02:~ # df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
udev         63G     0    63G    0%  /dev
tmpfs        13G  1.3G    12G   11%  /run
/dev/sda3    46G   31G    14G   70%  /
tmpfs        63G   57M    63G    1%  /dev/shm
tmpfs       5.0M     0   5.0M    0%  /run/lock
tmpfs        63G     0    63G    0%  /sys/fs/cgroup
/dev/sda1   922M  206M   653M   24%  /boot
/dev/fuse    30M  144K    30M    1%  /etc/pve
/dev/sde1    93M  5.4M    88M    6%  /var/lib/ceph/osd/ceph-11
/dev/sdf1    93M  5.4M    88M    6%  /var/lib/ceph/osd/ceph-14
/dev/sdc1   889G  676G   214G   77%  /var/lib/ceph/osd/ceph-3
/dev/sdb1   889G  667G   222G   76%  /var/lib/ceph/osd/ceph-2
/dev/sdd1    93M  5.4M    88M    6%  /var/lib/ceph/osd/ceph-7
tmpfs        13G     0    13G    0%  /run/user/0

root@node02:~ # ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME       STATUS  REWEIGHT  PRI-AFF
-1         14.34781  root default
-2          4.25287      host node01
 0  hdd     0.85999          osd.0       up   0.80005  1.00000
 1  hdd     0.86749          osd.1       up   0.85004  1.00000
 6  hdd     0.87270          osd.6       up   0.90002  1.00000
12  hdd     0.78000          osd.12      up   0.95001  1.00000
13  hdd     0.87270          osd.13      up   0.95001  1.00000
-3          3.91808      host node02
 2  hdd     0.70000          osd.2       up   0.80005  1.00000
 3  hdd     0.59999          osd.3       up   0.85004  1.00000
 7  hdd     0.87270          osd.7       up   0.85004  1.00000
11  hdd     0.87270          osd.11      up   0.75006  1.00000
14  hdd     0.87270          osd.14      up   0.85004  1.00000
-4          6.17686      host node03
 4  hdd     0.87000          osd.4       up   1.00000  1.00000
 5  hdd     0.87000          osd.5       up   1.00000  1.00000
 8  hdd     0.87270          osd.8       up   1.00000  1.00000
10  hdd     0.87270          osd.10      up   1.00000  1.00000
15  hdd     0.87270          osd.15      up   1.00000  1.00000
16  hdd     1.81879          osd.16      up   1.00000  1.00000


Any idea?

Thanks,
Michael
 
Probably one of the OSDs restarted. While it was restarting, CRUSH remapped the PGs on that OSD. I recently learned that this remapping can fail too often (more than 5 times, by default), at which point CRUSH gives up.

Find out which OSD restarted and try playing around with the CRUSH weight of that OSD; that might trigger a new remapping.
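For example (just a sketch; osd.12 stands in here for whichever OSD actually restarted), lowering its CRUSH weight slightly and then restoring it forces CRUSH to recalculate the placement of the stuck PG:

ceph osd crush reweight osd.12 0.77
(wait until the remapped PG is active+clean again, then restore the original weight)
ceph osd crush reweight osd.12 0.78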
 
MON_DISK_LOW mon node02 is low on available space
One thing to take care of is the fill level of the OS disk (the default location of the MON store).
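The warning fires once the filesystem holding the mon store drops below 30% free (the default mon_data_avail_warn), which matches the 28% avail reported for node02's root disk. If cleaning up / is not enough, the mon store itself can be checked and compacted. A sketch, assuming the default mon data path (on Proxmox the mon id is normally the node name, hence ceph-node02):

du -sh /var/lib/ceph/mon/ceph-node02
ceph tell mon.node02 compact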

0 hdd 0.85999 osd.0 up 0.80005 1.00000
The OSD reweight distribution looks imbalanced, which can leave Ceph unable to place the object. What does ceph osd df tree show?
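If that output confirms the hand-tuned reweights are the problem, a rough sketch for undoing them (this will move data, so do it one OSD at a time and wait for backfill to finish in between; osd.0 is just the first example):

ceph osd df tree
ceph osd reweight 0 1.0

Alternatively, ceph osd reweight-by-utilization can adjust the reweights automatically based on actual utilization.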
 
