ceph osd df tree:
root@PVE2:~# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 29.71176 - 16 TiB 6.9 TiB 6.9 TiB 91 MiB 31 GiB 8.9 TiB 0 0 - root...
In a 5-node cluster, I had to replace some failed SSDs, and now the Ceph cluster is stuck with "Reduced data availability: 40 pgs inactive, 42 pgs incomplete":
Reduced data availability: 40 pgs inactive, 42 pgs incomplete
pg 2.57 is incomplete, acting [1,35,14] (reducing pool CephFS_data...
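For what it's worth, the usual first step with incomplete PGs is to pull the full health detail and query one of the affected PGs to see what it is still waiting on. A minimal sketch, assuming pg 2.57 and the CephFS_data pool shown in the output above:

ceph health detail                       # lists every inactive/incomplete PG
ceph pg 2.57 query                       # peering state; look for down_osds_we_would_probe
ceph osd pool get CephFS_data size       # replica count of the affected pool
ceph osd pool get CephFS_data min_size   # minimum replicas needed to serve I/O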
Is it possible to share a Ceph WAL device between all the OSDs, instead of having to partition the WAL?
If I have 12 drives, I have to create 12 equal partitions on the WAL device and assign each partition to an OSD. Is there a better way to assign the WAL?
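A minimal sketch of one alternative, assuming BlueStore OSDs deployed via ceph-volume / pveceph: ceph-volume's batch mode can carve a logical volume per OSD out of the shared WAL device for you, so it does not have to be pre-partitioned by hand (device names below are examples only):

# one LV per data disk is created automatically on the shared WAL device
ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc /dev/sdd --wal-devices /dev/nvme0n1
# roughly the same per OSD on Proxmox (wal_size in GiB; see 'man pveceph' for the exact options)
pveceph osd create /dev/sdb --wal_dev /dev/nvme0n1 --wal_size 2

Either way you end up with LVM volumes rather than raw partitions, so replacing a single OSD later does not disturb the others.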
I wanted to move Ceph to the 2nd IP subnet, but that failed. Both IP subnets can communicate with each other, and everything worked fine until I had to reinstall Proxmox onto another drive.
So, shortly after my last reply, I added the 2nd subnet's IP (192.168.11.243, to be exact) to SRV3, and now all 3 nodes can see...
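For reference, a minimal sketch of what the split would look like in the shared ceph.conf (/etc/pve/ceph.conf on Proxmox). The subnets below are only examples inferred from the 192.168.11.x address above, and moving the public_network also means updating the monitors' addresses (mon_host and the per-monitor sections), which is the step that usually trips things up:

[global]
    # client and monitor traffic stays on the original subnet (example value)
    public_network  = 192.168.10.0/24
    # OSD replication / heartbeat traffic moves to the 2nd subnet
    cluster_network = 192.168.11.0/24

As I understand it, changing cluster_network only needs an OSD restart, while changing public_network without updating the monitors is what typically leaves the nodes unable to see each other.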