Hi everyone,
I'm working on building a new Proxmox cluster with Ceph for production.
Each node has 4 x 10G NICs, but unfortunately the switches don't support stacking, so I've chosen balance-alb (mode 6) for the bonding. All the 10G ports on the switches have been set to trunk mode only.
Currently I have the following networks for the cluster:
172.25.5.0/24 for cluster management, VLAN ID 5 (native VLAN)
172.25.7.0/24 for the Ceph cluster network, VLAN ID 7
172.25.9.0/24 for the Ceph public network, VLAN ID 9
So on each node, below is the network configuration from "/etc/network/interfaces":
Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp5s0f0 inet manual

iface enp5s0f1 inet manual

auto enp130s0f0
iface enp130s0f0 inet manual

auto enp130s0f1
iface enp130s0f1 inet manual

auto enp131s0f0
iface enp131s0f0 inet manual

auto enp131s0f1
iface enp131s0f1 inet manual

auto enp130s0f0.9
iface enp130s0f0.9 inet manual

auto enp130s0f1.9
iface enp130s0f1.9 inet manual

auto enp131s0f0.9
iface enp131s0f0.9 inet manual

auto enp131s0f1.9
iface enp131s0f1.9 inet manual

auto enp130s0f0.7
iface enp130s0f0.7 inet manual

auto enp130s0f1.7
iface enp130s0f1.7 inet manual

auto enp131s0f0.7
iface enp131s0f0.7 inet manual

auto enp131s0f1.7
iface enp131s0f1.7 inet manual

auto bond0
iface bond0 inet manual
        slaves enp130s0f0 enp130s0f1 enp131s0f0 enp131s0f1
        bond_miimon 100
        bond_mode balance-alb

auto bond1
iface bond1 inet static
        address 172.25.9.21
        netmask 255.255.255.0
        gateway 172.25.9.1
        slaves enp130s0f0.9 enp130s0f1.9 enp131s0f0.9 enp131s0f1.9
        bond_miimon 100
        bond_mode balance-alb

auto bond2
iface bond2 inet static
        address 172.25.7.21
        netmask 255.255.255.0
        slaves enp130s0f0.7 enp130s0f1.7 enp131s0f0.7 enp131s0f1.7
        bond_miimon 100
        bond_mode balance-alb

auto vmbr0
iface vmbr0 inet static
        address 172.25.5.21
        netmask 255.255.255.0
        gateway 172.25.5.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
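Once this is applied, my plan is to sanity-check each bond via /proc/net/bonding. The sample text below is a hypothetical excerpt of the bonding driver's output, just to show what I'd be grepping for (on a real node it would come from `cat /proc/net/bonding/bond0`):

```shell
# Hypothetical excerpt of /proc/net/bonding/bond0 (real output comes from
# the kernel bonding driver; on a node: cat /proc/net/bonding/bond0)
sample='Bonding Mode: adaptive load balancing
MII Status: up
MII Polling Interval (ms): 100'

# balance-alb is reported by the driver as "adaptive load balancing"
echo "$sample" | grep -q 'adaptive load balancing' && echo "mode ok"
# with bond_miimon 100, the link should show as up with a 100 ms poll interval
echo "$sample" | grep -q 'MII Status: up' && echo "link ok"
```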
My questions:
1. Is this configuration good enough?
2. Do I still need to separate the corosync network with ringX_addr?
3. Do I need multicast, or should I run "echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping"?
4. Is there anything else I should improve in the network configuration?
Any advice and suggestions will be greatly appreciated! Thank you in advance!
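To clarify what I mean in question 2: I'm considering a dedicated corosync VLAN with ringX_addr entries along these lines (the 172.25.6.0/24 subnet and node names here are hypothetical, just to illustrate the idea):

```
# /etc/pve/corosync.conf (fragment; addresses and names are hypothetical)
nodelist {
  node {
    name: pve01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 172.25.6.21   # dedicated corosync VLAN, not shared with Ceph
  }
  node {
    name: pve02
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 172.25.6.22
  }
}
```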