Ceph health error. Is it because the third node has no OSD assigned to the pool?

FSNaval

Member
Jan 13, 2024
Hello everyone, and thank you in advance for your help with my question.

I have currently set up a small 3-node Proxmox cluster and installed Ceph.
In Ceph I have created two pools, both using the default replicated rule (size 3, min_size 2):
  1. One pool (vmpool) contains NVMe disks; each node has one NVMe disk assigned to it (see attached osd_photo).
  2. One pool (datapool) contains spinning disks; only two of the three nodes have spinning disks assigned to it (see attached osd_photo).
Yesterday I started putting data into the second pool (datapool), and today I received this health warning (see attached health_warning_photo). Is the warning caused by the fact that the third node does not have any spinning disks assigned to the second pool?
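For reference, the pool settings and the placement of datapool's PGs can be checked like this (a sketch using the pool name from my screenshots; I have not pasted the output here):

  ceph osd pool get datapool size        # replica count (should show 3)
  ceph osd pool get datapool min_size    # replicas needed to keep serving I/O (2)
  ceph osd tree                          # which OSDs sit on which host
  ceph pg ls-by-pool datapool            # per-PG state and the OSDs in its up/acting set

If the acting set of every datapool PG lists only two OSDs, that would match my suspicion that the third copy has nowhere to go.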

Attachments: ceph_performance_photo.JPEG, health_warning_photo.JPEG, osd_photo.JPG
With OSDs for that pool on only two nodes, Ceph does not have an independent location to place the third copy, so those PGs stay undersized.

But inconsistent PGs are a different problem: they mean there are copies that do not match.

Read the section on troubleshooting PGs in the Ceph documentation.
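Roughly, that workflow looks like this (a sketch; replace <pgid> with the PG id that ceph health detail reports, and read up on what a repair does before running it):

  ceph health detail                      # names the inconsistent PG(s)
  rados list-inconsistent-pg datapool     # PGs in the pool with scrub inconsistencies
  rados list-inconsistent-obj <pgid> --format=json-pretty   # which object copies differ and how
  ceph pg repair <pgid>                   # instruct Ceph to repair the PG

The list-inconsistent-obj output usually also tells you which OSD holds the bad copy, e.g. via read errors on a shard.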

Thank you for your guidance. I followed the documentation you pointed me to and managed to solve the problem.

During troubleshooting, I found that the inconsistent PG resided on osd.8. Do you think I should replace the drive, or can I continue using it?
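In case it helps with the answer, this is roughly how I am checking the drive's health (the device path is a placeholder, not my actual disk):

  ceph device ls-by-daemon osd.8      # the physical device behind osd.8 and its recorded health state
  smartctl -a /dev/sdX                # SMART data; reallocated/pending sectors and read errors matter most
  dmesg | grep -i 'sdX'               # kernel-level I/O errors on that device

A single scrub inconsistency is not proof of a dying disk, but repeated inconsistencies on the same OSD or growing SMART error counters would worry me.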