Hi all,
I had some issues with my Proxmox Ceph cluster. One machine failed; no problem - I swapped the hard drives into identical hardware and restarted the node. While checking everything over, I found that an HDD in another node had failed, so I removed it as an OSD and replaced it. No problem - everything rebuilt OK, but I still saw a HEALTH_WARN. I ran ceph health detail and saw:
HEALTH_WARN clock skew detected on mon.1, mon.2
mon.1 addr 172.16.6.6:6789/0 clock skew 0.0936987s > max 0.05s (latency 0.0187915s)
mon.2 addr 172.16.6.7:6789/0 clock skew 0.102262s > max 0.05s (latency 0.0181998s)
I tried to force an NTP sync with ntpdate, but the warning didn't go away. I've let it sit a few hours and plan to check on it soon, but I was wondering if there is anything else I could do to force them all to sync up properly...
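For context, the sequence I was planning to try next on each mon node looks roughly like this. The NTP pool hostname, the time-daemon service name, and the mon ID are placeholders for my setup, not necessarily yours:

```shell
# Stop the time daemon so ntpdate can bind port 123, step the clock
# once, then start the daemon again (service may be "chrony" instead).
systemctl stop ntp
ntpdate pool.ntp.org
systemctl start ntp

# Restarting a monitor makes it re-check clock skew right away instead
# of waiting for the periodic check (mon ID "1" is an example).
systemctl restart ceph-mon@1

# Then see how the monitors view each other's clocks.
ceph time-sync-status
```

Does that sequence look right, or is there a cleaner way to make the mons re-evaluate skew?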