For Ceph this means: the more, the better, since you distribute the cluster traffic among more servers and spread out the disks that handle the reads/writes of objects for a given PG. The good part with Ceph is that you can start small and grow as needed (in performance and capacity).
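To make that concrete, here is a rough back-of-the-envelope sketch (the per-node capacity and the node counts are made-up example values, not from this thread) of how the recovery traffic after a node failure spreads over more surviving nodes:

```python
# Rough sketch: when one node dies, its data has to be re-replicated and that
# work is spread over the survivors. The numbers below are example values only.

def recovery_tb_per_survivor(num_nodes, used_tb_per_node=4.0):
    """Return roughly how many TB each surviving node has to absorb
    when one node fails and its PGs are backfilled elsewhere."""
    if num_nodes < 2:
        raise ValueError("need at least 2 nodes")
    survivors = num_nodes - 1
    return used_tb_per_node / survivors

for n in (3, 5, 10, 20):
    print(f"{n:>2} nodes: ~{recovery_tb_per_survivor(n):.2f} TB per surviving node")
```

The point is simply that the same amount of rebalance traffic hurts each individual node less as the cluster grows.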
Hi,
these calculations are "per node", and that's OK, but how many nodes would be recommended on a 10 Gb network, since there is a lot of data sync between nodes, no?
I recommend 10 Gbit for the beginning. Not only for bandwidth, but for latency, which also matters. And (optical!) 10 Gbit has much less latency than copper 1 Gbit.
For better latency you can also check out direct attached copper cables (DAC); IMHO they are easier to handle than optical.
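To give a feel for why the link speed matters even for small writes, here is a tiny sketch (my own example numbers, nothing measured) of the pure serialization delay at different link speeds; real round-trip latency adds switch, NIC and Ceph software overhead on top:

```python
# Serialization delay only: the time just to put the bytes on the wire.
# Example payload size; real Ceph traffic and latency are more complex.

def serialization_us(payload_bytes, link_gbit):
    """Microseconds needed to clock payload_bytes out at link_gbit Gbit/s."""
    bits = payload_bytes * 8
    return bits / (link_gbit * 1e9) * 1e6

for gbit in (1, 10, 25):
    print(f"{gbit:>2} Gbit/s: {serialization_us(4096, gbit):6.2f} us for a 4 KiB object")
```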
I read somewhere quite some time ago that having one physical CPU with a high clock speed is better than having two CPUs for Ceph. Is that still the case?
Also, how would you know when it is time to upgrade your network from 10 Gb to something faster? We are maxing out at about 1.5 Gbps when watching the network live (the Ceph private and public interfaces). Also, our disk controller only has 12 Gbps of throughput, so I assume that if we upgraded the network we would have to upgrade the controllers as well, and make sure the combined speed of the disks is fast enough to use that bandwidth (of the network and the new hardware controllers).
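If it helps, here is a quick sanity-check sketch (the disk count and per-disk speed are assumed example figures, not measurements from our cluster) comparing the NIC, the controller and the summed disk throughput to see which one caps a node first:

```python
# Compare the rough per-node limits of NIC, controller and disks (MB/s)
# to see which component would be the first bottleneck. Example figures only.

def node_bottleneck(nic_gbit, controller_gbit, disks, disk_mb_s):
    """Return (component_name, throughput_MB_s) of the limiting part."""
    limits = {
        "network":    nic_gbit * 1000 / 8,         # Gbit/s -> MB/s, roughly
        "controller": controller_gbit * 1000 / 8,
        "disks":      disks * disk_mb_s,
    }
    name = min(limits, key=limits.get)
    return name, limits[name]

# e.g. 10 Gbit NIC, 12 Gbit controller, 8 HDDs at ~180 MB/s each (assumed)
print(node_bottleneck(nic_gbit=10, controller_gbit=12, disks=8, disk_mb_s=180))
```

With figures like these the 10 Gb link is the first limit, but if you are only seeing ~1.5 Gbps on the live graphs, the network itself is probably not what is holding you back yet.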