Is this wiki article still valid with Proxmox 6?
https://pve.proxmox.com/wiki/Separate_Cluster_Network
OK, maybe it would be better to update that old wiki article, adding a link to the new documentation.
thanks
Stefano
Do we need to have redundant links?
My non-critical, virtual cluster runs without a redundant link. However, redundancy is highly recommended for "real" (production) systems.
I'm not sure if we're talking at cross purposes. I was referring to the redundant corosync addresses; Tom's reply above references them (ring0, ring1, etc.).
ringX_addr actually specifies a corosync link address; the name "ring" is a remnant of older corosync versions that is kept for backwards compatibility. (Wiki)
nodelist {
  node {
    name: clusterA
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.25.144
  }
  node {
    name: clusterB
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.25.145
  }
  node {
    name: clusterC
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.25.146
  }
}
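For a redundant corosync link, corosync 3 (as used by Proxmox VE 6) supports multiple links per node: a second address can be added as ring1_addr in each node entry. A minimal sketch, assuming a second dedicated network on 10.10.10.0/24 (the addresses are hypothetical, not from this thread):

nodelist {
  node {
    name: clusterA
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.25.144
    ring1_addr: 10.10.10.144
  }
  node {
    name: clusterB
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.25.145
    ring1_addr: 10.10.10.145
  }
  node {
    name: clusterC
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.25.146
    ring1_addr: 10.10.10.146
  }
}

When editing /etc/pve/corosync.conf by hand, remember to increment config_version in the totem section so the change propagates cleanly.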
We have 2 redundant gbit links per server (2 stacked dedicated switches for the cluster) dedicated to corosync traffic.

Just to be sure: this is not recommended for production systems.
We have a cluster with 3 nodes, each with 4x gbit onboard and a 2x 10gbit add-in card (1x HP DL 385 G10 + 2x HP DL 325 G10):

Your setup is not completely clear to me. Providing redundancy at a lower level is possible and should work. Nonetheless, corosync can handle two independent networks as well.
You can, but I don't really see a case where "only" hardware redundancy would be insufficient.

So now we have split off a dedicated network for corosync, and we have "only" hardware-level redundancy. Should we keep the management IPs as a fallback?
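If the management IPs are kept as a second corosync link (ring1_addr per node), the dedicated network can be preferred via knet link priorities in the totem section. A sketch under these assumptions: the dedicated network is link 0, the management network is link 1, and, as I understand knet's default passive link mode, the link with the higher priority value carries the traffic (the priority values below are illustrative):

totem {
  interface {
    linknumber: 0
    knet_link_priority: 10
  }
  interface {
    linknumber: 1
    knet_link_priority: 5
  }
}

Link status can then be checked on a node with corosync-cfgtool -s; as above, bump config_version in the totem section whenever /etc/pve/corosync.conf is edited by hand.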