Hi all,
I am interested in setting up a Proxmox cluster with fully redundant network connections. As I research and plan the deployment, I would like to hear how others have set up their environments.
The first part of my question relates to the following quote from the Network Configuration wiki article:
"If your switch support the LACP (IEEE 802.3ad) protocol then we recommend using the corresponding bonding mode (802.3ad). Otherwise you should generally use the active-backup mode. If you intend to run your cluster network on the bonding interfaces, then you have to use active-passive mode on the bonding interfaces, other modes are unsupported."
In most virtualization environments, I build things with redundant LACP trunks carrying various VLAN traffic. If the underlying links are 10 GbE, generally everything flows over this connection (including iSCSI storage traffic).
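For reference, a typical build of mine looks roughly like the sketch below in /etc/network/interfaces: an 802.3ad bond feeding a VLAN-aware bridge. The NIC names, addresses, and VLAN range are just placeholders for illustration.

    auto bond0
    iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
    # redundant 10 GbE LACP trunk to the switch pair

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
    # VLAN-aware bridge carrying VM, storage, and management VLANs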
Also citing the following from the Separate Cluster Network wiki page:
"It is good practice to use a separate network for corosync, which handles the cluster communication in Proxmox VE. It is one of the most important part in an fault tolerant (HA) system and other network traffic may disturb corosync. Storage communication should never be on the same network as corosync!"
Considering the recommendation that the cluster network not ride on an LACP trunk, and that storage should be kept separate, this essentially becomes a separate Ethernet connection entirely, in my case likely gigabit. For reliability, this would also be a redundant path configured as active-passive. From other posts I've gathered that Corosync doesn't use much bandwidth.
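What I have in mind for that dedicated Corosync link is something along these lines (again just a sketch; the NIC names and addressing are placeholders):

    auto bond1
    iface bond1 inet static
        address 10.10.10.11/24
        bond-slaves eno3 eno4
        bond-miimon 100
        bond-mode active-backup
        bond-primary eno3
    # dedicated gigabit pair for the Corosync cluster network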
This brings up two questions, though. First, can the network used for Corosync also carry management traffic, or is that also against recommendations?
Additionally, does a separate VLAN qualify as separate enough to potentially combine Corosync and storage traffic, assuming the underlying bond was configured as active-backup rather than LACP, per the documentation? This also assumes the underlying connection is a relatively unsaturated 10 GbE bond.
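If the VLAN route were acceptable, I imagine it would just be VLAN sub-interfaces on that bond, something like the sketch below (the VLAN IDs and addresses are made up for illustration):

    auto bond0.50
    iface bond0.50 inet static
        address 10.10.50.11/24
    # Corosync VLAN on the bond

    auto bond0.60
    iface bond0.60 inet static
        address 10.10.60.11/24
    # iSCSI storage VLAN on the bond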
From a physical perspective, how do most of you configure network connections for HA/Corosync, iSCSI, VM, and other traffic while ensuring high availability, staying within what the documentation recommends, and not using a "million" physical interfaces to do so?
Thanks!