Hi,
I have a four-node cluster with two frontend switches and two backend switches. Each node currently has one 25G connection to each switch, for a total of four used ports per node. Each node additionally has four unused 10G ports (though I would prefer not to use them).
I have created an LACP bond interface for each pair (frontend/backend) of ports, a VLAN on top of each bond, and a bridge on top of each VLAN: one bridge for the PVE frontend and the other for Ceph. There is also an additional VLAN-aware bridge on the frontend bond for guest traffic. First of all, is this the proper way to do it? Everything works as expected; I just want to make sure I'm not making things needlessly complicated.
Code:
auto ens2f1np1
iface ens2f1np1 inet manual

auto ens3f1np1
iface ens3f1np1 inet manual

auto ens2f0np0
iface ens2f0np0 inet manual

auto ens3f0np0
iface ens3f0np0 inet manual

iface eno1np0 inet manual

iface eno2np1 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves ens2f1np1 ens3f1np1
    bond-miimon 100
    bond-mode 802.3ad

auto bond1
iface bond1 inet manual
    bond-slaves ens2f0np0 ens3f0np0
    bond-miimon 100
    bond-mode 802.3ad
    mtu 9000
#Ceph

auto bond0.20
iface bond0.20 inet manual

auto bond1.21
iface bond1.21 inet manual
    mtu 9000

auto bond0.9
iface bond0.9 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.7.20.12/24
    gateway 10.7.20.1
    bridge-ports bond0.20
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address 10.7.21.12/24
    bridge-ports bond1.21
    bridge-stp off
    bridge-fd 0
    mtu 9000

auto vmbr3
iface vmbr3 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
I have read through this thread, which has been useful, but I feel I still don't fully understand the big picture.
So, one thing I have noticed is that if the backend/Ceph connection is broken, I am no longer able to manage the rest of the cluster through that node's web UI. I assume this is because all of that cluster traffic runs over the Ceph network?
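If it helps frame the question: my understanding is that each node's cluster link addresses live in /etc/pve/corosync.conf, so pinning link 0 to the frontend subnet (with the Ceph subnet only as a fallback link) would look roughly like the sketch below. Only the nodelist fragment for one node is shown, the node name pve1 is a placeholder, and the addresses are taken from my own config above.

Code:
nodelist {
  node {
    # placeholder node name; one such block per cluster node
    name: pve1
    nodeid: 1
    quorum_votes: 1
    # link 0: frontend/management subnet (vmbr0)
    ring0_addr: 10.7.20.12
    # link 1: fallback over the Ceph subnet (vmbr1)
    ring1_addr: 10.7.21.12
  }
}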
I'm coming from a vSphere background, where I have a similar environment. Hosts are managed via the frontend, and the hosts also communicate with each other over that same network. On the backend there are two separate networks, one for storage and one for migration. Cluster control traffic remains intact even when the backend is unavailable. Furthermore, the two backend networks use mutually exclusive primary network interfaces, with each network using the other's primary as its own standby.
So, is it possible to make cluster web management (i.e. accessing one node via another) independent of the backend networking? And what are my options (and the benefits) for further separating the backend functionality, such as using inverted primary/secondary interfaces?
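For example, from what I've read, live-migration traffic can at least be pinned to its own subnet via /etc/pve/datacenter.cfg — something like the sketch below, where I'm assuming my Ceph subnet (10.7.21.0/24) as the migration network just for illustration:

Code:
# /etc/pve/datacenter.cfg (sketch)
migration: secure,network=10.7.21.0/24

Is that the intended mechanism, or is there a better way to split this out?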