I have executed the command. The health is now HEALTH_OK.
Is it possible to know what we have lost?
We have 3 PGs in active+clean+scrubbing+deep. I think that is good (better than yesterday...). The next step, if everything is OK, is to set min_size to 2 on the pool.
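If I am not mistaken, that would be a command like this (the pool name is a placeholder):

ceph osd pool set <pool-name> min_size 2

min_size is the number of replicas a PG must have available before it serves I/O, so setting it back to 2 restores the normal safety margin after the recovery.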
The situation has changed again.
I have restored a dump of the PG onto the two other OSDs that should contain the PG.
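For reference, this kind of restore is typically done with ceph-objectstore-tool while the OSDs involved are stopped. A minimal sketch, with the OSD ids and the file path as placeholders:

systemctl stop ceph-osd@<src-id>
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<src-id> --pgid 1.5 --op export --file /root/pg.1.5.export
systemctl stop ceph-osd@<dst-id>
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<dst-id> --op import --file /root/pg.1.5.export
systemctl start ceph-osd@<src-id>
systemctl start ceph-osd@<dst-id>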
Now I have:
1/3125014 objects unfound
Do you think I can use the command:
ceph pg 1.5 mark_unfound_lost delete
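Before deleting, it is possible to list exactly which objects are unfound, and as far as I know there is also a revert mode that rolls objects back to their last known version instead of deleting them:

ceph pg 1.5 list_unfound
ceph pg 1.5 mark_unfound_lost revert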
The situation has changed. I don't know if it is good or bad...
Ceph is still in HEALTH_WARN. But:
pg 2.5c1 is stuck peering since forever, current state remapped+peering, last acting [72]
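To see what the peering is blocked on, the PG can be queried; the recovery_state section at the end of the output normally names the OSDs it is waiting for:

ceph pg 2.5c1 query
ceph osd tree

Since the acting set is only [72], it is also worth checking in the tree output that osd.72 is up and in.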
In my cluster, I have 7 nodes with PVE version 5.4.15.
Each node has between 5 and 14 OSDs, 80 OSDs in the cluster total.
The state of Ceph is currently HEALTH_ERR:
Reduced data availability: 1 pg inactive, 1 pg incomplete (this is the same pg)
294 stuck requests are blocked > 4096 sec. Implicated OSD...
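To narrow down where those requests are stuck, something like this should help (the OSD id is a placeholder; the daemon command must be run on the node hosting that OSD):

ceph osd blocked-by
ceph daemon osd.<osd-id> dump_blocked_ops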
Hi,
After removing one OSD (on a cluster with 7 nodes and 80 OSDs), Ceph health is in WARNING.
1 PG is inactive and down.
What can we do to change this state?
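If more detail is needed, we can post the output of:

ceph health detail
ceph pg dump_stuck inactive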
Thank you.