After troubleshooting various network-related issues last week, I found the root cause of my problem. I won't go into the whole thing, but it has led me to decide to add a dedicated switch just for my main corosync and migration network. That said, all my nodes have two NICs, so I would like to use the NIC that currently carries management and VM traffic as a backup corosync link. This would have been practically plug and play if I had done it from the start, but I didn't.

I've read the documentation here: https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_redundancy. What I want to know is this: as you can see in my /etc/network/interfaces, my VM network is attached to vmbr0. When editing my corosync.conf, would I still set ring1_addr: 192.168.1.8 for this node, or is it referenced differently since that address is bound to the virtual bridge?
iface lo inet loopback
iface enp9s0f0 inet manual
iface vmbr0 inet static
iface enx026662e2a00e inet manual
iface enp9s0f1 inet static
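For reference, here is a sketch of the corosync.conf node entry I think I would end up with, based on the wiki page above. The node name, nodeid, and the ring0 address are hypothetical placeholders; only 192.168.1.8 (the address on vmbr0) is from my actual setup.

```
nodelist {
  node {
    name: pve-node1            # hypothetical node name
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.8     # hypothetical address on the dedicated corosync switch
    ring1_addr: 192.168.1.8    # address assigned to vmbr0 on this node
  }
}
```

My assumption is that corosync only cares about the IP address, not whether it lives on a physical NIC or a bridge, but that is exactly what I'd like confirmed.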