The write bandwidth is limited by network _and_ disk latency. A write can only be acknowledged to the client after all replicas have acknowledged it.
You say that it is your final project? Are you doing it as a project for an examination as a Fachinformatiker?
The design of CEPH requires a pool size of at least 3. A pool size of 2 should never be used in production.
When a pool with size 2 loses an OSD, it has to block traffic because data protection can no longer be guaranteed.
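As a sketch, setting up a pool with the safe defaults takes a few commands (the pool name `mypool` and the PG count are placeholders, adjust them for your cluster):

```shell
# Create a replicated pool (placeholder name and PG count).
ceph osd pool create mypool 128
ceph osd pool set mypool size 3      # keep 3 replicas of every object
ceph osd pool set mypool min_size 2  # block I/O when fewer than 2 replicas are up
```

With size 3 / min_size 2 the pool keeps serving I/O when one OSD node is down, instead of blocking like a size-2 pool does.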
If you want HA, things always get more expensive, and yes, latency is much better with 10G. But I bet that you can at least gain some speed by putting the DB/WAL on an SSD. Too bad that hardware availability is so poor in Brazil. I hope you can find something used.
Also try jumbo frames on your gigabit network (hope your...
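Enabling jumbo frames is a one-liner per interface (a sketch; `eth0` and the peer IP are placeholders, and every host and switch port on the path must use the same MTU):

```shell
# Set a 9000-byte MTU on the storage NIC (placeholder interface name).
ip link set dev eth0 mtu 9000

# Verify jumbo frames actually pass end-to-end without fragmentation:
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
ping -M do -s 8972 <peer-ip>
</imports></imports>
```

If the ping fails with "message too long", some hop on the path is still at MTU 1500.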
Reading is always faster in CEPH, as data can be read from the nearest OSD node; in the best case this is the local node.
Writing is a different thing, as CEPH has to mirror every write to the replicas in the background (over the backend network) and only acknowledges to the client after the last replica...
Spinning HDDs and a 1 Gigabit network are just not fast enough for the specific workload of VMs.
CEPH is very latency dependent.
A setup with spinning disks and a 1 Gigabit/s network is only good for read-intensive bulk storage or an experimental setup.
Don't ever try it with VMs.
That depends heavily on the modes supported by both the operating system and the switch side!
Not all static bond modes work with every switch, and mismatches can lead to very annoying errors. With many switch brands only active/passive bonding works reliably.
LACP is more dynamic and detects failures of...
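As a sketch, an LACP bond in Debian/Proxmox `/etc/network/interfaces` style (the interface names are placeholders, and the switch ports must be configured for LACP as well):

```shell
# /etc/network/interfaces fragment - eth0/eth1 are placeholder NIC names.
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 802.3ad                # LACP
    bond-miimon 100                  # check link state every 100 ms
    bond-xmit-hash-policy layer3+4   # spread flows by IP+port, not just MAC
```

This is a config fragment, not a runnable script; with a switch that does not speak LACP you would fall back to `bond-mode active-backup` instead.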
No, replicating with only 2 nodes is unsafe; it is by design that you need at least 3 nodes.
Hardcore CEPH people even recommend more nodes... but 3 works well.
Also, 1 G links are way too slow for decent CEPH performance. Remember CEPH has to replicate _any_ write to all 3 nodes before it can...
It is recommended to separate the CEPH backend (cluster) traffic from the frontend (public) traffic, because of latency.
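The separation is configured in `ceph.conf` (a sketch; the two subnets are placeholders for your own frontend and backend networks):

```shell
# /etc/ceph/ceph.conf fragment - example subnets, adjust to your setup.
[global]
    public network  = 192.168.10.0/24   # client/frontend traffic
    cluster network = 10.10.10.0/24     # OSD replication/backend traffic
```

With this split, replication traffic between OSDs stays on the cluster network and does not compete with client I/O.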
Also, a 4x bond will not help you in terms of speed as long as you do not have at least 5 nodes.
In the usual bonding method (LACP), traffic is distributed over the bond links by hashing on IP/MAC, so a single flow only ever uses one link; for example, Node A talks...
-> CEPH with only 2 OSD nodes should never be used in production!
For data safety it is necessary to have a 3/2 replication rule (size 3, min_size 2)! Read the CEPH docs!
-> What is your network speed? 110 MB/s fits a saturated 1 GBit/s link.
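The arithmetic behind that number is simple:

```shell
# 1 Gbit/s = 1000 Mbit/s; divide by 8 bits/byte for the raw byte rate.
echo $(( 1000 / 8 ))   # -> 125 (MB/s theoretical maximum)
# Ethernet/IP/TCP framing eats roughly 5-10%, so ~110-118 MB/s
# is exactly what a fully saturated 1 Gbit/s link delivers.
```

So 110 MB/s means the network, not the disks, is the bottleneck.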
Again, CEPH was never designed for this; also the additional software layer of CEPH bogs down performance, so for fast access to temporary data a local SSD is much better. But be aware that you lose HA in this case...
yes with 3/2 the cluster continues to work during...
I would strongly recommend not even thinking about such a configuration, not even for temporary data. CEPH is definitely not designed for such a use case. There are also good reasons why you should not create a pool with size 2 / min_size 1; use pool size 3 with min_size 2. Loss of data in such a...
I can't help but repeat this sentence:
Never ever use CEPH with _only_ two nodes! It is not made for this!
There are really good reasons for 2+1 redundancy!
A cluster with only two nodes will always give you trouble -> the problem is called split brain.
A long-distance link can also cause trouble (high latencies are bad for corosync).
And now:
CEPH needs at least 3 nodes!
CEPH is very latency dependent. If you want to see decent performance for...
There is an article about it here:
https://tracker.ceph.com/projects/ceph/wiki/Benchmark_Ceph_Cluster_Performance
Also, inside a VM a good test tool is dbench from the Samba suite.
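As a sketch, the raw cluster can be benchmarked with `rados bench` (as in the article above) and a guest with `dbench`; the pool name, runtimes, and client count here are placeholders:

```shell
# On a cluster node: 60-second write benchmark on pool "testpool",
# keeping the objects so the sequential read test has data to read.
rados bench -p testpool 60 write --no-cleanup
rados bench -p testpool 60 seq
rados -p testpool cleanup       # remove the benchmark objects afterwards

# Inside the VM: simulate 10 clients of file-server load for 60 seconds.
dbench -t 60 10
```

Run the `rados bench` tests before putting VMs on the pool, so the numbers are not skewed by other traffic.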
If you spread the bond over the 2 NIC boards you can survive a defective board, but from a performance point of view it's the same (at least if both PCIe slots support the data rates).
You will add more latency by daisy-chaining than by using a switch.
-> Switch: two hops, but modern (cut-through) switches can start forwarding as soon as they have read the header,
so in most cases the latency of one hop.
-> Daisy chain: Open vSwitch with RSTP will find the shortest path, in most cases 2 hops...
For a full mesh with 5 nodes you need 4 ports per node (not bonded!); with two ports bonded per link you need 8 ports per node.
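Enabling RSTP on an Open vSwitch bridge for such a ring/mesh is a short sketch (the bridge and port names are placeholders):

```shell
# Create a bridge, add the two mesh-facing ports, and enable RSTP
# so loops are broken and the shortest path is used (placeholder names).
ovs-vsctl add-br vmbr1
ovs-vsctl add-port vmbr1 eth2
ovs-vsctl add-port vmbr1 eth3
ovs-vsctl set Bridge vmbr1 rstp_enable=true
```

Without RSTP (or another loop-prevention protocol) a daisy-chained ring would create a broadcast storm the moment the loop closes.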
It is not necessary to run MONs on separate nodes; the load is minimal. I usually run one on every node.
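Assuming a Proxmox setup (the thread mentions corosync), adding a monitor is one command per node, run locally on each node that should host one:

```shell
# Create a Ceph monitor on the node you are logged in to.
pveceph mon create
```

A monitor count of 3 or 5 keeps quorum decisions unambiguous; with MONs on every node of a 3-node cluster you get exactly that.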