I have a big question about our Ceph cluster and I need your help or your opinion.
I installed a simple three-node Ceph setup.
Each node has 2x 146 GB in hardware RAID 1 plus 18x 600 GB 10k SAS disks without RAID.
(In summary, we have 54 OSD devices and we have to buy 3 SSDs for journals.)
And my big...
I've had issues when I put in new journal disks and wanted to move existing OSDs from the old journal disk to the new ones.
The issue was: I marked the OSD out, then stopped the OSD and destroyed it.
Recreating the OSD with the new DB device resulted in the OSD never showing up!
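For context, the replacement procedure described above can be sketched with `ceph-volume` (assuming BlueStore, since a DB device is mentioned; the OSD id `12` and the device names `/dev/sdc` and `/dev/nvme0n1p1` are placeholders, not taken from the original post):

```shell
# Take the OSD out of service and stop its daemon
ceph osd out 12
systemctl stop ceph-osd@12

# Mark the OSD destroyed so its ID can be reused by the replacement
ceph osd destroy 12 --yes-i-really-mean-it

# Wipe the old data device, then recreate the OSD with block.db on the new SSD
ceph-volume lvm zap /dev/sdc --destroy
ceph-volume lvm create --bluestore --data /dev/sdc \
    --block.db /dev/nvme0n1p1 --osd-id 12
```

If the recreated OSD never shows up, `ceph-volume lvm list` and the OSD's log under /var/log/ceph/ are usually the first places to look.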
This is a...
Good morning all,
On each of my Ceph nodes I have two SSDs for journals: /dev/sdj and /dev/sdk.
While upgrading from Hammer -> Jewel I noticed something that I think is odd, but I'm not sure. It appears that some of my OSDs either may not have journals, or the journal is not set to one of the...
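One way to check where each OSD's journal actually lives (with FileStore, as in the Hammer/Jewel era) is to resolve the `journal` symlink in each OSD's data directory. A sketch, assuming the default mount paths:

```shell
# Each FileStore OSD keeps a 'journal' symlink in its data directory;
# resolving it shows which block device backs the journal.
for j in /var/lib/ceph/osd/ceph-*/journal; do
    echo "$j -> $(readlink -f "$j")"
done

# ceph-disk, the deployment tool of that era, also reports the
# data/journal partition pairings per device:
ceph-disk list
```

Journals that resolve to partitions on /dev/sdj or /dev/sdk are on the SSDs; anything resolving to a file or to the OSD's own data disk is co-located.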