pveceph status vs zpool status

bqq100 · Member · Jun 8, 2021
I'm starting to play around with Ceph pools to replace one of my ZFS mirrored pools. One of my biggest questions is how to detect errors from flaky drives, cables, etc.

zpool status provides the following information, and I want to see whether similar data is available for Ceph pools (via pveceph status or some other command):

- Date/time of last scrub
- Amount of data repaired by the last scrub
- Read/Write/Checksum errors per drive since the last clear

Basically, if Ceph repairs data during a scrub, or runs into a read/write/checksum error on a drive, is there an easy way to see the issue (even if it has been automatically repaired and the pool is currently healthy)?

Thanks!
 
Hi,

Ceph does scrubbing and "deep scrubbing". A scrub checks the metadata for inconsistencies, while a deep scrub causes a higher I/O load since it also checks data checksums (the data is compared between the replicas). A deep scrub should find errors like bit flips in your data.
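As for the "date of last scrub" item: Ceph tracks scrub timestamps per placement group, and `ceph pg dump --format json` exposes them. A minimal Python sketch of checking for overdue deep scrubs; the `pg_map`/`pg_stats`/`last_deep_scrub_stamp` field layout is assumed from recent Ceph releases, and the inline JSON is an illustrative stand-in, not real cluster output:

```python
import json
from datetime import datetime, timedelta, timezone

# Illustrative stand-in for `ceph pg dump --format json` output;
# on a real cluster you would read this from the command's stdout.
sample = json.loads("""
{
  "pg_map": {
    "pg_stats": [
      {"pgid": "2.0", "last_deep_scrub_stamp": "2021-06-01T03:12:44.000000+0000"},
      {"pgid": "2.1", "last_deep_scrub_stamp": "2021-05-01T02:02:10.000000+0000"}
    ]
  }
}
""")

def stale_pgs(pg_dump, max_age_days=7, now=None):
    """Return pgids whose last deep scrub is older than max_age_days."""
    now = now or datetime.now(timezone.utc)
    stale = []
    for pg in pg_dump["pg_map"]["pg_stats"]:
        stamp = datetime.strptime(pg["last_deep_scrub_stamp"],
                                  "%Y-%m-%dT%H:%M:%S.%f%z")
        if now - stamp > timedelta(days=max_age_days):
            stale.append(pg["pgid"])
    return stale

# Fixed "now" so the example is deterministic.
now = datetime(2021, 6, 8, tzinfo=timezone.utc)
print(stale_pgs(sample, max_age_days=7, now=now))  # ['2.1']
```

The same dump also carries `last_scrub_stamp` for plain scrubs, so the check can cover both.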

I have never seen such an error occur myself, but scrub errors should show up in the output of ceph health detail. More details can be found in the log /var/log/ceph/ceph.log.
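If you want to catch such events after the fact (e.g. errors that were already repaired), one option is to scan the cluster log for [ERR] scrub/repair lines. A minimal sketch; the log lines below are assumptions modeled on typical Ceph cluster-log messages, so adjust the pattern to what your /var/log/ceph/ceph.log actually contains:

```python
import re

# Illustrative log excerpt; in practice read /var/log/ceph/ceph.log.
log = """\
2021-06-08 04:00:01.123 mon.pve1 ... cluster [INF] overall HEALTH_OK
2021-06-08 04:10:33.456 osd.3 ... cluster [ERR] 2.1f deep-scrub 1 errors
2021-06-08 04:12:02.789 osd.3 ... cluster [ERR] 2.1f repair 1 errors, 1 fixed
"""

# Match [ERR] lines that report scrub or repair error counts.
pattern = re.compile(r"\[ERR\].*(scrub|repair).*errors")

errors = [line for line in log.splitlines() if pattern.search(line)]
for line in errors:
    print(line)
print(f"{len(errors)} scrub/repair error line(s) found")
```

Once an inconsistent PG is reported, `rados list-inconsistent-obj <pgid>` can show which objects are affected.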