Ceph Proxmox active+clean+inconsistent

daus2936

New Member
Jul 15, 2025
Hello, I need help with my Proxmox Ceph cluster. After the scheduled PG deep scrubbing, ceph health detail reports errors like this:
pg 2.f is active+clean+inconsistent, acting [4,3,0]
pg 2.11 is active+clean+inconsistent, acting [3,0,5]
pg 2.1e is active+clean+inconsistent, acting [1,5,3]


When I deep scrub pg 2.f with "ceph pg deep-scrub 2.f", I get log errors like this:

Code:
root@proxmox1:~# grep 2.f /var/log/ceph/ceph.log
2025-07-15T17:39:59.427041+0700 osd.4 (osd.4) 8599 : cluster 1 osd.4 pg 2.f Deep scrub errors, upgrading scrub to deep-scrub
2025-07-15T17:39:59.427098+0700 osd.4 (osd.4) 8600 : cluster 0 2.f deep-scrub starts
2025-07-15T18:20:05.074485+0700 osd.4 (osd.4) 8602 : cluster 4 2.f shard 3 soid 2:f24ded78:::rbd_data.a9daa6174f7e08.00000000000004f8:head : candidate had a read error
2025-07-15T20:06:12.170080+0700 osd.4 (osd.4) 8604 : cluster 4 2.f deep-scrub 0 missing, 1 inconsistent objects
2025-07-15T20:06:12.170085+0700 osd.4 (osd.4) 8605 : cluster 4 2.f deep-scrub 1 errors
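
If it helps, I believe the per-object detail for that PG can also be listed with this (I have not dug into its output yet):

Code:
rados list-inconsistent-obj 2.f --format=json-pretty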

Based on these errors, is it safe to run "ceph pg repair 2.f" on my cluster?

Thank you
 
Running a repair isn't 100% safe, but in your case it is very close to it, since you appear to have two healthy replicas of the affected object. Make backups of everything and then try the repair.
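
Roughly, the repair and follow-up would look like this (only a sketch, using the PG IDs from your output; repair one PG at a time and let each finish):

Code:
# repair the first inconsistent PG and watch the cluster log (Ctrl-C to exit)
ceph pg repair 2.f
ceph -w
# once it is back to active+clean, repeat for the others
ceph pg repair 2.11
ceph pg repair 2.1e
# confirm the cluster is healthy again
ceph health detail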

I would suspect osd.3: it is in the acting set of all three inconsistent PGs, and the scrub log reports the read error on shard 3. Check its logs and the disk behind it, and consider rebuilding that OSD from scratch, as the drive may be failing.
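
To check whether the disk behind osd.3 is actually failing, something along these lines should tell you (a sketch only; /dev/sdX is a placeholder for whatever device osd.3 really uses, and smartctl comes from the smartmontools package):

Code:
# find the physical device backing osd.3
ceph osd metadata 3 | grep -i dev
# check the drive's SMART health (replace /dev/sdX with the device found above)
smartctl -a /dev/sdX
# look for read errors in the OSD log and the kernel log
grep -i error /var/log/ceph/ceph-osd.3.log | tail
dmesg -T | grep -i 'i/o error'

If the drive does look bad, the usual sequence is to take the OSD out, let the data rebalance, and only then destroy and recreate it:

Code:
ceph osd out 3
# wait until recovery finishes and the OSD is safe to remove
ceph osd safe-to-destroy osd.3
systemctl stop ceph-osd@3
pveceph osd destroy 3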