We've been building our Proxmox clustered hypervisors with high-availability networking. Our typical setup is as follows:
Cluster/Management Network: NIC1 & NIC2 LACP (Linux Bond) ==> SWITCH1 & SWITCH2
Public VLAN Network: NIC3 & NIC4 LACP (OVS Bond) ==> SWITCH1 & SWITCH2
Storage/Backup Network: NIC5 & NIC6 LACP (Linux Bond) ==> SWITCH1 & SWITCH2 ==> Block Storage
IPMI: SWITCH1
We've always thought it was prudent to keep the cluster and public traffic separate from the storage and backup networks. However, that's a lot of network interfaces per hypervisor, so we've been wondering whether we're overdoing it by separating every function. Assuming the interfaces are fast enough, would it be better to run all of these services over a single LACP bond to two different switches? If so, should we do it with an OVS bond so we can VLAN-tag each service (a rough sketch of what we're picturing is below)?
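For context, this is roughly the consolidated layout we have in mind: one OVS bond per node carrying everything as tagged VLANs. The NIC names, VLAN IDs, and addresses below are placeholders, not our real values:

```
# /etc/network/interfaces (sketch only -- NIC names, VLAN tags, and IPs are made up)

auto bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_bonds eno1 eno2
    ovs_options bond_mode=balance-tcp lacp=active
    # single LACP bond to SWITCH1 + SWITCH2 (MLAG/vPC pair on the switch side)

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 mgmt storage
    # guest/public traffic stays on vmbr0 and gets tagged per-VM

auto mgmt
iface mgmt inet static
    address 192.168.10.11/24
    gateway 192.168.10.1
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=10
    # cluster/corosync + management VLAN

auto storage
iface storage inet static
    address 192.168.20.11/24
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=20
    # storage/backup VLAN
```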
I would be interested in how others approach this while keeping the network highly available.
Thanks!