Hello. Some of you know me. Feel free to tell me how dumb I am, please.
I need to set up DR replication between two geographically remote Ceph clusters.
If I can deliver on this, we'll have solid DR, like Zerto used to give us on VMware.
And delivering isn't really optional. It absolutely has to work. We're going to sell it. Probably already have.
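For context, here's roughly how I'm planning to wire up the replication itself. This is just a sketch, assuming RBD workloads and rbd-mirror with bootstrap peering; the pool and site names below are placeholders, not our real ones, and an rbd-mirror daemon still has to be running at the DR site (e.g. via `ceph orch apply rbd-mirror`):

```python
# Sketch of the rbd-mirror bootstrap I have in mind, scripted in Python so
# I can repeat it per pool. Pool name "rbd" and site names "site-a"/"site-b"
# are placeholders.
import subprocess

def rbd(*args: str) -> str:
    """Run an rbd CLI command and return its stdout."""
    return subprocess.run(("rbd", *args), check=True,
                          capture_output=True, text=True).stdout

POOL = "rbd"  # placeholder pool name

# On the primary site: enable per-image mirroring and mint a peer token.
rbd("mirror", "pool", "enable", POOL, "image")
token = rbd("mirror", "pool", "peer", "bootstrap", "create",
            "--site-name", "site-a", POOL)

# Ship the token to the DR site out of band, then over there:
#   rbd mirror pool enable rbd image
#   rbd mirror pool peer bootstrap import --site-name site-b rbd <token-file>
```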
Got the first cluster built. Starting number two ...
I need to build a Ceph cluster at a site with the gear on hand.
I have budget for disks, but I'm stuck with these hosts.
I would have to justify complete disk replacements, though.
I have 5 x Gen 14/15 Dell PowerEdge servers.
Three of them have 8 bays. There's no other OS boot option, so really only 7 SSDs each to contribute to Ceph.
Two of them have 24 bays. Full. Lots of disks.
All disks are 960 GB SSDs.
I know I'm gonna lose a lot of capacity.
My first cluster of 4 hosts with 7 × 960 GB SSDs each provides 8 TB (in theory) of usable space.
From my limited experience with Ceph, I probably shouldn't count on more than 9 TB out of the cluster I'm planning.
And that really sucks. Hard to do business without sufficient storage.
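For anyone checking my math, here's the back-of-envelope behind those numbers. Ceph reports binary TiB while the drives are sold in decimal GB, then 3× replication and the default 0.85 near-full warning ratio take their cuts:

```python
# Why a "960 GB" SSD is worth a lot less on paper.
OSD_TIB = 960e9 / 2**40            # ~0.873 TiB per 960 GB SSD, as Ceph sees it

def usable_tib(osds: int, size: int = 3, nearfull: float = 0.85) -> float:
    """Raw capacity divided by replica count, derated to the near-full line."""
    return osds * OSD_TIB / size * nearfull

print(4 * 7 * OSD_TIB / 3)   # first cluster, raw/3 ceiling: ~8.1 TiB
print(usable_tib(5 * 7))     # planned cluster at 7 OSDs/host: ~8.7 TiB,
                             # which is where my "9 TB" guess comes from
```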
So, here's the dumb part.
Two of these hosts have LOTS of extra 960 GB SSDs.
I know it's not recommended. I know it's not optimal.
But what would happen if I added two extra OSDs from each of these well-provisioned hosts to the Ceph cluster?
Right, "you're not supposed to do that."
But if I did, what should I expect?
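To put numbers on what I'm hoping for, here's the toy model I've been staring at. It assumes replicated size=3 with the default failure domain of host, perfect balancing, and it ignores PG-count granularity and near-full headroom, so treat it as a ceiling, not a promise. The idea: CRUSH puts each PG's replicas on 3 distinct hosts, so no single host can ever hold more than one copy of any piece of data, and past a point the extra weight on the big hosts just strands capacity:

```python
# Largest user-data volume U satisfying sum(min(w_i, U)) == size * U,
# where w_i is host i's capacity: a host stores at most w_i, and at most
# one replica of anything, hence at most U.
OSD_TIB = 960e9 / 2**40                     # ~0.873 TiB per 960 GB SSD

def max_user_tib(osds_per_host: list[int], size: int = 3) -> float:
    w = [n * OSD_TIB for n in osds_per_host]
    lo, hi = 0.0, sum(w) / size             # raw/size is the upper bound
    for _ in range(60):                     # binary search the fixed point
        mid = (lo + hi) / 2
        if sum(min(x, mid) for x in w) >= size * mid:
            lo = mid
        else:
            hi = mid
    return lo

print(max_user_tib([7, 7, 7, 7, 7]))    # balanced 7/host:      ~10.2 TiB
print(max_user_tib([7, 7, 7, 9, 9]))    # +2 OSDs per big host: ~11.4 TiB,
                                        # nothing stranded yet
print(max_user_tib([7, 7, 7, 24, 24]))  # all bays in:          ~18.3 TiB,
                                        # ~1.7 TiB short of raw/3
```

So by this model the +2-per-host version still buys me real capacity, at the cost of the big hosts taking proportionally more PGs and a bigger rebalance when one of them dies. Is that about right, or what else should I expect?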