Adding the bridge network to corosync

Dec 8, 2022
After troubleshooting various network-related issues last week, I found the root cause of my problem. I won't go into the whole thing, but it has led me to add a dedicated switch just for my main corosync and migration network. All of my nodes have two NICs, and I would like to use the NIC that currently carries management and VM traffic as a backup corosync network. This would have been practically plug and play if I had done it at the start, but I didn't. I've read the documentation here: https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_redundancy

What I want to know is this: as you can see in my /etc/network/interfaces, my VM network is attached to vmbr0. When editing my corosync.conf, would I still set ring1_addr: 192.168.1.8 for this node, or is it addressed differently since that IP is bound to the virtual bridge?

Code:
auto lo
iface lo inet loopback

iface enp9s0f0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.8/24
        gateway 192.168.1.1
        bridge-ports enp9s0f0
        bridge-stp off
        bridge-fd 0

iface enx026662e2a00e inet manual

auto enp9s0f1
iface enp9s0f1 inet static
        address 192.168.2.6/24
 
It would still be 192.168.1.8
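For reference, a sketch of what the node's entry in /etc/pve/corosync.conf might look like with the second link added. The node name, nodeid, and vote count here are placeholders, not taken from the actual cluster; only the two addresses come from the interfaces file above. Remember to also bump config_version in the totem section when editing the file, as the redundancy documentation describes.

```text
nodelist {
  node {
    name: pve1                 # hypothetical node name
    nodeid: 1                  # placeholder
    quorum_votes: 1
    ring0_addr: 192.168.2.6    # dedicated corosync network (enp9s0f1)
    ring1_addr: 192.168.1.8    # backup ring, the address on vmbr0
  }
}
```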
Thank you for the confirmation. I assumed it was like this based on the documentation, but I couldn't find anything in my research on whether the IP being bound to the virtual bridge complicated things.

Would it be okay if I asked you another quick question about setting up a migration network, where the documentation and the GUI seem to differ, or should I post a separate thread for that?
 
Yes, go ahead - although it might be better to create a new thread so it is easier to find for future reference.
 
I reviewed the documentation about the migration network here: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_migration_network

At the bottom it shows the setup in the datacenter.cfg file referencing the subnet as a whole, i.e.:
Code:
migration: secure,network=192.168.2.0/24
Setting it through the GUI, however, writes the config file as:
Code:
migration: network=192.168.2.6/24,type=secure

This puts the IP of this one specific node into a config that applies to all nodes, instead of just the subnet. Is it still correct this way?
 
Yes, the CIDR suffix /24 indicates in this case that the last octet is masked out as the host part - the network covers IP addresses ranging from 192.168.2.0 to 192.168.2.255, so both notations refer to the same network.
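As a quick sanity check (not Proxmox-specific, just illustration), Python's standard ipaddress module shows that the host-style value written by the GUI and the network-style value from the docs resolve to the same /24 network:

```python
import ipaddress

# Host-style value as written by the GUI, and the subnet from the docs
gui_value = ipaddress.ip_interface("192.168.2.6/24")
docs_value = ipaddress.ip_network("192.168.2.0/24")

# Masking the host part of 192.168.2.6/24 yields the same network
print(gui_value.network)                # 192.168.2.0/24
print(gui_value.network == docs_value)  # True

# The /24 network spans the full last octet
print(docs_value[0], docs_value[-1])    # 192.168.2.0 192.168.2.255
```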
 
Got it - so even having 192.168.2.6/24 is perfectly acceptable because of the /24? Sorry for essentially repeating you; I just want to be certain for my own understanding and education. Thank you!