Hi,
We have a 5-node cluster running Ceph 17.2.5 (e04241aa9b639588fa6c864845287d2824cb6b55) quincy (stable), with size 2/3 pools.
Same hardware for all nodes: 7 x 7 TB NVMe disks per node. The initial install was performed by a company on an older version of Ceph (Octopus);
for performance reasons each NVMe was split into 4 OSDs. I've tried to find more information on whether this is still a good idea today, but haven't
really found a clear answer, and I guess it depends on a number of factors. Does anyone with experience have advice?
We recently had an OSD crash where one OSD filled up completely, died, and would not start again, so having 4 OSDs per disk made things a bit more complicated.
Maybe we should convert to 1 OSD per disk, but if that's the way to go, what would be the best approach in a live cluster: one disk at a time across
all 5 nodes, or just plowing through all disks in one node after the other?
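For each disk I'm imagining something like the following (a rough sketch, assuming the OSDs are managed by cephadm / ceph orch; the OSD IDs 12-15 are
just made-up examples for the four OSDs sitting on one disk):

  # drain and remove the four OSDs on one disk, zapping the device;
  # ceph orch marks them out and waits for data to migrate off first
  ceph orch osd rm 12 13 14 15 --zap

  # watch the drain/removal progress
  ceph orch osd rm status

  # once the device is free, redeploy it as a single OSD, e.g. via an
  # updated OSD service spec (osds_per_device: 1, or just the default)

Does that look roughly right, or am I missing something?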
--Mats