Separate Cluster Network

Do we need to have redundant links?

My non-critical, virtual cluster runs without a redundant link. However, redundancy is strongly recommended for "real" systems.
 
My non-critical, virtual cluster runs without a redundant link. However, redundancy is strongly recommended for "real" systems.

I was referring to the redundant corosync addresses that Tom's reply above mentions (ring0, ring1, etc.).
BTW, I changed the corosync addresses to the new links we created in /etc/pve/corosync.conf, and corosync updated them at runtime without skipping a beat.
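For anyone wanting to reproduce this: corosync only picks up an edit to /etc/pve/corosync.conf once the config_version in the totem section has been incremented. A sketch of the relevant parts (the cluster name and addresses here are placeholders, not taken from this thread):

```
totem {
  cluster_name: mycluster    # placeholder name
  config_version: 4          # must be incremented on every edit,
                             # otherwise corosync ignores the change
}

nodelist {
  node {
    name: clusterA
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.144   # changed to the new dedicated link
  }
  # remaining nodes updated analogously
}
```

After saving, the running corosync instances reload the configuration on all nodes without a restart.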
 
I was referring to the redundant corosync addresses that Tom's reply above mentions (ring0, ring1, etc.).
I'm not sure if we talk at cross purposes.
ringX_addr actually specifies a corosync link address, the name "ring" is a remnant of older corosync versions that is kept for backwards compatibility. (Wiki)

I have only one interface defined in the VM, and there is no ring1_addr in my corosync.conf:
Code:
nodelist {
  node {
    name: clusterA
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.25.144
  }
  node {
    name: clusterB
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.25.145
  }
  node {
    name: clusterC
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.25.146
  }
}
Just to be sure: This is not recommended for production systems.
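For production, the redundant variant would add a second link per node. A sketch of what that could look like, assuming a hypothetical second network (10.10.20.0/24) and corosync 3 with kronosnet; the knet_link_priority values are illustrative:

```
nodelist {
  node {
    name: clusterA
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.25.144
    ring1_addr: 10.10.20.144   # hypothetical second network
  }
  # clusterB and clusterC get ring1_addr entries analogously
}

totem {
  interface {
    linknumber: 0
    knet_link_priority: 10   # preferred link (with link_mode: passive)
  }
  interface {
    linknumber: 1
    knet_link_priority: 5    # used if link 0 fails
  }
}
```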
 
Just to be sure: This is not recommended for production systems.
We have 2 redundant gbit links per server (2 stacked dedicated switches for the cluster) dedicated to corosync traffic.
Is it recommended to use another network as additional redundancy for corosync in this case too?
 
Your setup is not completely clear to me. Providing redundancy at a lower level is possible and should work. Nonetheless, corosync can handle two independent networks as well.
 
Your setup is not completely clear to me. Providing redundancy at a lower level is possible and should work. Nonetheless, corosync can handle two independent networks as well.
We have a cluster with 3 nodes, each with 4x Gbit onboard and a 2x 10Gbit add-in card (1x HP DL385 G10 + 2x HP DL325 G10):
2x Gbit links for management/VM traffic/live migration (redundant, go into 2 stacked switches)
2x Gbit links for the cluster (redundant, go into 2 stacked switches) in an isolated VLAN on the virtualization switches
2x 10Gbit for storage (redundant, go into 2 stacked switches). Storage is provided by a dual-controller EqualLogic SAN for critical VMs, plus local storage from each node exported via NFS to the whole cluster, either for backups (storage on the 385 has 21 TB worth of RAID6 HDDs) or as high-performance disks (one 325 has a 6x 500 GB SSD RAID6).
Later we will probably expand with a dual-Gbit card per node; we have leftover hardware and would then use 4x 1Gbit for management/VM traffic/live migration.

Initially we had all 4x 1Gbit carrying management/VM traffic/live migration/cluster, but we transport a few VLANs over them that reach into all of our network, and it seems events in other parts of the network (loops that were caught by BPDU guard, though the ports were set to auto-enable again after 30 seconds) affected cluster communication to the point of disintegrating the cluster (i.e. every node considered itself "alone" as far as corosync was concerned, and all of them rebooted). This happened after we upgraded to Proxmox 6; previously we had next to no issues with Corosync 2 once we sorted out multicast.
So now we have split off a dedicated network for corosync, and we have "only" hardware-level redundancy. Should we keep the management IPs as a fallback? Although if these dedicated links fail, I suppose that, barring some admin ineptitude, the rest of the switch will fare no better...
 
So now we have split off a dedicated network for corosync, and we have "only" hardware-level redundancy. Should we keep the management IPs as a fallback?
You can, but I don't really see a case where "only" hardware redundancy would be insufficient.
 
