After troubleshooting various network-related issues last week, I found the root cause of my problem. I won't go into the whole thing, but it has led me to decide to add a dedicated switch just for my main corosync and migration network. All my nodes have two NICs, so I would like to use the NIC that currently carries management and VM traffic as a backup corosync link. This would have been practically plug and play if I had done it at the start, but I didn't.

I've read the documentation here: https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_redundancy. What I want to know is this: as you can see in my /etc/network/interfaces below, my VM network is attached to vmbr0. When editing /etc/pve/corosync.conf, would I still set ring1_addr: 192.168.1.8 for this node, or is it referenced differently since that address is bound to the virtual bridge? I've put a rough sketch of what I think the node entry would look like after the interfaces dump.
Code:
auto lo
iface lo inet loopback
iface enp9s0f0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.8/24
        gateway 192.168.1.1
        bridge-ports enp9s0f0
        bridge-stp off
        bridge-fd 0

iface enx026662e2a00e inet manual

auto enp9s0f1
iface enp9s0f1 inet static
        address 192.168.2.6/24
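
Here is roughly what I think the nodelist entry and totem links would look like after the change, going off the pvecm_redundancy docs. The node name, nodeid, and config_version are just placeholders, and I'm assuming 192.168.2.6 on enp9s0f1 stays as the dedicated ring0 corosync address for this node:

Code:
# excerpt from /etc/pve/corosync.conf -- name, nodeid, and config_version
# are placeholders; ring0 assumed to be the dedicated 192.168.2.x network
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.2.6
    ring1_addr: 192.168.1.8
  }
  # ...other nodes with their own ring0_addr/ring1_addr...
}

totem {
  # bump config_version whenever the file is edited
  config_version: 4
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
  # ...rest unchanged...
}

If the bridge IP is fine to use directly, I assume the other nodes would just get their own vmbr0 addresses as ring1_addr?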