Hello everybody,
We are currently trying to decide what a possible new storage concept might look like. Unfortunately, we can only "rely" on the howtos and information about Ceph that we have found on the internet.
What we want to achieve:
- I'd like a cluster that can grow, for both virtual machines and storage
- I want reasonable throughput, both in MB/s for reading and writing, and of course good IOPS, especially for writes
In addition, I would like to use CephFS to get a redundant file system that I can mount as a backend in several VMs in parallel (rough mount sketch below).
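Just to make that concrete: I imagine mounting it from the VMs with the kernel client roughly like this - monitor addresses, client name and secret file are only placeholders, not our real values:

mount -t ceph 10.0.10.1:6789,10.0.10.2:6789,10.0.10.3:6789:/ /mnt/cephfs \
    -o name=vmclient,secretfile=/etc/ceph/vmclient.secret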
What we want to start with:
For the experiment (because that is all it is at first) I would like to start as follows:
- 3 servers, each with 2x Xeon 2640 v4, 256 GB RAM, 6x 10 GbE + 2x 40 GbE, 2x 6.4 TB Samsung 1725b NVMe each, booting from 2x SS300 enterprise SSDs
- all 3 servers take on the roles of Ceph OSD, Ceph Manager and Ceph Monitor at the beginning - but only at the beginning; in addition I would like to run a few VMs on them, otherwise the hardware would be completely oversized
- that gives us 6 OSDs across 3 nodes; I would not like to (and cannot) fit more than 2 NVMe drives per node via PCI Express, because otherwise it gets too warm / too tight in the Supermicro chassis
- I want to build the network redundantly with LACP across 2x Juniper QFX5100 switches, i.e. per server there are 2x 40 Gbit for the Ceph OSD (cluster) network, 2x 10 Gbit for heartbeat and Corosync, and 2x 10 Gbit for the internal communication network (rough ceph.conf sketch below)
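For the network split I picture the ceph.conf roughly like this - the subnets are just placeholder examples to illustrate which traffic goes where:

[global]
    # 2x 40 Gbit LACP bond: OSD replication, heartbeat and recovery traffic
    cluster network = 192.168.40.0/24
    # 2x 10 Gbit LACP bond: client/VM traffic to the MONs and OSDs
    public network = 192.168.10.0/24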
My questions:
- A replica size of 2 should mean that even with the total failure of one node the cluster stays available and remains safely writable. With 2x 6.4 TB per node, 3 nodes and 2 replicas I would then have about 12.8 TB of space, of which I should fill at most about 80% - right? (quick calculation after this list)
- 2x NVMe per node? I think 4 would certainly be better for distributing the I/O, but I'm afraid it would get too tight in the case, and the heat is not negligible either
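This is the quick calculation behind the numbers above (assuming I want to be able to re-replicate everything after losing a whole node):

3 nodes x 2 x 6.4 TB              = 38.4 TB raw
38.4 TB / 2 replicas              = 19.2 TB usable on paper
2 surviving nodes x 2 x 6.4 TB / 2 replicas = 12.8 TB after a node failure
12.8 TB x 0.8 (max fill)          ~ 10.2 TB that I should actually plan with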
Later I would like to move the 3 monitor/manager nodes out to 3 small servers and attach them only via 2x 10 Gbit to the internal network, since 40 Gbit+ should really only be used for the OSD network layer. In my opinion that should not be a problem, should it? Separated, because the monitor nodes seem very, very important to me ;-).
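As far as I understand it, moving a monitor later mostly boils down to commands along these lines (the names are just examples, and the full procedure of course has a few more steps around it):

ceph mon remove <old-mon-name>
ceph mon add <new-mon-name> <new-mon-ip>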
Thanks in advance for sharing your experience.
Greetings,
Ronny