MTU-size, CEPH and public network

Oct 17, 2008
Hi,

In a couple of days we'll get our 7-node hardware delivered (4 NICs, bonded in pairs: BOND4LAN and BOND4CEPH).

We'll set up OSDs on 5 of the nodes.
For the cluster network, I understand it is best to use MTU 9000 for faster object replication.
The remaining 2 nodes will only connect as Ceph clients to access the storage.

Here comes the question:
Can i run the ceph PUBLIC & CLUSTER networks at different MTU size through the same BOND4CEPH, at different VLAN's?
Or should i keep everything simple and at mtu1500, for RADOSGW / RBD (or other ceph-client) and don't expect too much speed difference compared to mtu9000?
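For reference, the per-VLAN MTU idea would look roughly like this in a Debian/Proxmox `/etc/network/interfaces` (interface names, VLAN tags, and addresses here are hypothetical, not from your setup):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    mtu 9000                 # the parent bond must carry the largest MTU

auto bond0.10
iface bond0.10 inet static   # hypothetical Ceph public VLAN
    address 192.168.10.11/24
    mtu 1500                 # a VLAN's MTU may be lower than its parent's

auto bond0.20
iface bond0.20 inet static   # hypothetical Ceph cluster VLAN
    address 10.10.10.11/24
    mtu 9000
```

Note that a VLAN subinterface's MTU can never exceed its parent interface's MTU, so the bond itself (and every switch port in the path) would have to be set to 9000.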


Any thoughts are very much appreciated.
 
You can run tests, but from my point of view, mixing 1500 and 9k MTU on the same interface is asking for problems. I tried something like this before Ceph was even in PVE, and it was a mess.

Network latency will have a bigger performance impact than a 9k MTU.
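If you do test, the usual way to check that jumbo frames actually pass end-to-end is a don't-fragment ping with a payload sized to the MTU: subtract the 20-byte IP header and 8-byte ICMP header from the MTU. A minimal sketch (10.10.10.2 is a hypothetical OSD node on the cluster VLAN):

```shell
#!/bin/sh
# Check whether jumbo frames pass to a peer before relying on them.
MTU=9000
PAYLOAD=$((MTU - 28))   # 20-byte IP header + 8-byte ICMP header
echo "testing with ${PAYLOAD}-byte payload"
# -M do sets don't-fragment; if any hop's MTU is smaller, the ping fails
ping -c 3 -M do -s "$PAYLOAD" 10.10.10.2 || echo "jumbo frames NOT passing end-to-end"
```

If this fails while a plain ping works, some interface or switch port in the path is still at MTU 1500.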
 
So you would say: keep the cluster and public networks at the same frame size (if on the same NIC).
Even then, is it best to keep cluster and public on different IPs (via VLANs)?

In my test setup I used the same IP for public and cluster, but I want to use Ceph for more than just Proxmox.


Thanks again,
Martijn