Hello all,
Currently we use a network bond (2x 10 Gb) with different VLANs for Ceph, internet, and VM-to-VM traffic.
We use 2x Arista 7050S switches with channel groups.
We're thinking about expanding our network by adding a dual 10 Gb NIC to every node.
What would be the best option?
1) Make a bond for Ceph (20 Gb) and a bond for the rest (20 Gb)? (Roughly what I mean is sketched below.)
2) Make a single 40 Gb bond?
3) Or is it not necessary to extend the network with extra cards for the current setup?
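For option 1, this is roughly what I have in mind per node (just a sketch; ens2f0/ens2f1 are placeholder names for the new dual 10 Gb NIC, and bond1 would carry only the Ceph VLAN):
Code:
# Sketch only: ens2f0/ens2f1 are placeholders for the new NIC ports
auto bond1
iface bond1 inet manual
    bond-slaves ens2f0 ens2f1
    bond-miimon 100
    bond-mode 4

# The Ceph bridge would then sit on the new bond instead of bond0
auto vmbr2
iface vmbr2 inet static
    address 192.168.16.95
    netmask 255.255.255.0
    bridge-ports bond1.25
    bridge-stp off
    bridge-fd 0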
Cluster information:
6x nodes (each node: 256 GB RAM, 2x E5-2620 v4)
Ceph, full SSD
38 OSDs
OSDs are a mix of:
19x 1 TB PM863a
19x 4 TB PM883
Example network config:
192.168.15.x = internal network
192.168.16.x = Ceph network
MTU is the default 1500
Code:
root@prox-s05:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface ens1f0 inet manual

iface ens1f1 inet manual

# LACP (802.3ad) bond across both 10 Gb ports
auto bond0
iface bond0 inet manual
    bond-slaves ens1f0 ens1f1
    bond-miimon 100
    bond-mode 4
    bond-downdelay 400
    bond-updelay 800

# VLAN 15: internal/management network
auto vmbr0
iface vmbr0 inet static
    address 192.168.15.95
    netmask 255.255.255.0
    gateway 192.168.15.251
    bridge-ports bond0.15
    bridge-stp off
    bridge-fd 0

# VM bridge on the untagged bond (guest VLAN tags pass through)
auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

# VLAN 25: Ceph network (192.168.16.x)
auto vmbr2
iface vmbr2 inet static
    address 192.168.16.95
    netmask 255.255.255.0
    bridge-ports bond0.25
    bridge-stp off
    bridge-fd 0
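In case it's useful, this is how I check the LACP state and the bridge/VLAN layout on each node (standard bonding driver and iproute2 tools, nothing Proxmox-specific):
Code:
# Shows bond mode, MII status, and the 802.3ad aggregator/partner for each slave
cat /proc/net/bonding/bond0
# Bond and VLAN details
ip -d link show bond0
# Which ports are attached to which bridge
bridge link show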