Yes, I had one disk (per type) per node for the longest time.

If you only had one OSD of each type, your thinking would be correct: CEPH could no longer place its replica and would automatically go degraded+undersized, but there would be no standstill, because according to the CRUSH rule the other OSDs are not allowed to receive this data anyway. With two OSDs per node, the situation has to be thought through differently.
Huh, and there was I thinking that adding an HDD per node to the HDD pool would actually improve operational safety...

CEPH wants to keep the replica count of 3 and distributes your bulk of data across three hosts, so each of your nodes must hold a complete copy of the data. You are currently spreading that copy across 2 OSDs (HDD). If one of them fails, CEPH has to put that OSD's data somewhere else and will therefore try to push the entire fill level of, for example, OSD.1 onto OSD.9. In this scenario you can only fill each of the two HDDs to 42.5%, so that the surviving one can absorb all of the other's data and still stay below 85% (2 x 42.5% = 85%). But you currently have 167.94% of data per node; if an HDD can hold a maximum of 100%, where should the remaining 67.94% go? OSD.9 would run into the full ratio, and CEPH would then pull the emergency brake and switch the pool to read-only to protect its integrity.
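To put numbers on this, here is a rough sketch in Python. The OSD sizes and the ~14 TB of data per host are just the figures from this thread (not real cluster output), and 0.85 is used as the practical fill ceiling mentioned above:

```python
# Rough check: can the remaining HDD OSD on a host absorb the data of a
# failed one? With replica 3 over 3 hosts, every host must keep a full copy,
# so the data cannot move to another host.
# OSD sizes and data volume are assumptions taken from this thread.

CEILING = 0.85  # practical per-OSD fill limit used above (2 x 42.5%)

def survives_osd_loss(osd_sizes_tb, data_per_host_tb, failed_index):
    """True if the surviving OSDs on the host can hold the full copy."""
    remaining_capacity = sum(
        size for i, size in enumerate(osd_sizes_tb) if i != failed_index
    ) * CEILING
    return data_per_host_tb <= remaining_capacity

host_osds_tb = [14, 4]   # one 14 TB and one 4 TB HDD per node
data_per_host_tb = 14    # ~14 TB of data that each host must hold

for i, size in enumerate(host_osds_tb):
    ok = survives_osd_loss(host_osds_tb, data_per_host_tb, i)
    print(f"{size} TB OSD fails -> "
          f"{'still fits' if ok else 'overflows, pool goes read-only'}")
```

With those numbers, either HDD failing overruns the survivor, which is exactly the read-only scenario described above.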
At the moment, I have approx. 14TB worth of data across the two HDDs per node. What you are telling me, if I understand you correctly, is that I need three 14TB drives per node, because two would not be enough (as each would get filled to 50%, whereas I should not exceed 42.5%).
In that case, I might be better off removing the three 4TB drives and replacing the three 14TB drives with three 18TB drives. Then I would again have only one HDD per node, as I did before.
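Here is a quick back-of-the-envelope check of the layouts I'm weighing, using the same assumptions as the sketch above (~14 TB of data per node, 0.85 as the practical fill ceiling):

```python
# Quick sanity check of the per-node HDD layouts discussed in this thread.
# Assumptions: ~14 TB of payload per node (one full replica per host),
# 0.85 as the practical per-OSD fill ceiling. Purely illustrative.

CEILING = 0.85

def check_layout(name, osd_sizes_tb, data_tb):
    total_usable = sum(osd_sizes_tb) * CEILING
    fits_now = data_tb <= total_usable
    if len(osd_sizes_tb) > 1:
        # Worst case: the largest OSD on the node dies; can the rest absorb it?
        worst = max(range(len(osd_sizes_tb)), key=lambda i: osd_sizes_tb[i])
        rest = sum(s for i, s in enumerate(osd_sizes_tb) if i != worst) * CEILING
        note = ("survives single-OSD loss" if data_tb <= rest
                else "single-OSD loss -> full ratio / read-only")
    else:
        note = "single OSD: a loss just means degraded+undersized, no local rebalance"
    print(f"{name}: usable ~{total_usable:.1f} TB, data fits: {fits_now}, {note}")

data = 14
check_layout("2 x 14 TB per node", [14, 14], data)
check_layout("3 x 14 TB per node", [14, 14, 14], data)
check_layout("1 x 18 TB per node", [18], data)
```

So two 14TB drives per node would fit the data but not a failure, three 14TB drives would cover both, and a single 18TB drive per node just fits while behaving like my old single-OSD setup.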
This being a hobby, I need to be mindful of the costs: adding four 14TB 3.5" drives to the pool would drive up my power bill considerably, and it's already high as it is.
Or would it make sense to reduce the replication number?
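Just to put rough numbers on that idea (assuming ~14 TB of data and 3 hosts, and ignoring for the moment whether fewer replicas is actually wise):

```python
# Rough illustration of how the pool's replica count (size) changes the
# per-host footprint with 3 hosts and "host" as the failure domain.
# Assumes ~14 TB of payload; says nothing about the safety trade-offs of size=2.

HOSTS = 3
DATA_TB = 14

for size in (3, 2):
    raw_tb = DATA_TB * size           # total raw space consumed by all copies
    per_host_tb = raw_tb / HOSTS      # CRUSH spreads the copies evenly across hosts
    print(f"size={size}: ~{raw_tb} TB raw, ~{per_host_tb:.1f} TB per host")
```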