FW vs corosync config with 2 rings

stefws

Got a non-standard corosync config here with a static nodelist and two rings, each running across its own redundant switch network:

nodelist {
  node {
    nodeid: 1
    quorum_votes: 1
    ring1_addr: n1.pve
    ring0_addr: n1
  }
  ...
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: pve-clst
  config_version: 19
  ip_version: ipv4
  secauth: off
  rrp_mode: active
  version: 2
  interface {
    bindnetaddr: 193.162.153.240
    ringnumber: 1
    broadcast: yes
    mcastport: 5405
    transport: udp
    netmtu: 9000
  }
  interface {
    bindnetaddr: 10.45.71.0
    ringnumber: 0
    broadcast: yes
    mcastport: 5407
    transport: udp
    netmtu: 9000
  }
}

root@n7:~# corosync-cfgtool -s
Printing ring status.
Local node ID 7
RING ID 0
        id      = 10.45.71.7
        status  = ring 0 active with no faults
RING ID 1
        id      = 193.162.153.250
        status  = ring 1 active with no faults

root@n7:~# pve-firewall localnet
local hostname: n7
local IP address: 10.45.71.7
network auto detect: 10.45.71.0/24
using detected local_network: 10.45.71.0/24

This poses some issues when wanting to use the firewall, as the PVEFW-HOST-IN/OUT chains only allow the detected localnet on the default ports 5404:5405, which misses both ring 1's network and ring 0's mcastport 5407:

RETURN udp -- 10.45.71.0/24 10.45.71.0/24 udp dpts:5404:5405
RETURN udp -- 10.45.71.0/24 anywhere ADDRTYPE match dst-type MULTICAST udp dpts:5404:5405

RETURN udp -- anywhere 10.45.71.0/24 udp dpts:5404:5405
RETURN udp -- anywhere anywhere ADDRTYPE match dst-type MULTICAST udp dpts:5404:5405

Could I modify these auto-generated chains to allow both of our corosync rings and the ports they actually use?
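
Or, rather than editing the generated chains (pve-firewall would presumably just rewrite them on its next compile), maybe add explicit rules in /etc/pve/firewall/cluster.fw. Something like this sketch, untested; the /28 prefix on ring 1 is my assumption:

[RULES]

IN ACCEPT -p udp -source 10.45.71.0/24 -dport 5404:5407        # ring 0, mcastport 5407 uses 5406:5407
IN ACCEPT -p udp -source 193.162.153.240/28 -dport 5404:5405   # ring 1, default ports (prefix assumed)
OUT ACCEPT -p udp -dport 5404:5407                             # outgoing corosync for both rings

Incoming multicast should still match on the -source of the peer nodes, so I left out any -dest restriction.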
 
If I wanted to swap the UDP ports for ring 0 and ring 1, so that ring 0 (the one picked up as the default localnet) gets the default multicast UDP ports 5404:5405, how could I then reconfigure the corosync networks without stopping the running VMs and without risking watchdog NMIs on the hypervisor nodes?
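
I imagine the change itself would just be swapping the mcastport values in the two interface blocks and bumping config_version, roughly (untested):

totem {
  ...
  config_version: 20   # must be bumped or the new config won't be applied
  interface {
    bindnetaddr: 10.45.71.0
    ringnumber: 0
    mcastport: 5405    # ring 0 now on the default 5404:5405
    ...
  }
  interface {
    bindnetaddr: 193.162.153.240
    ringnumber: 1
    mcastport: 5407    # ring 1 moved off to 5406:5407
    ...
  }
}

The question is how to roll that out to the running cluster safely.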

Then I could at least start pve-firewall and still have ring 0 working, I assume...
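
Verification after such a swap would presumably be along these lines:

# both rings should still report no faults
corosync-cfgtool -s

# start the firewall and check the generated corosync rules
pve-firewall start
pve-firewall status
iptables-save | grep -E '540[4-7]'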

Will stopping pve-ha-crm and watchdog-mux enable me to reconfigure the corosync NWs without getting fenced? (See the sketch after the status output below.)

root@n7# service watchdog-mux status
● watchdog-mux.service - Proxmox VE watchdog multiplexer
   Loaded: loaded (/lib/systemd/system/watchdog-mux.service; static)
   Active: active (running) since Tue 2016-05-03 21:52:05 CEST; 14h ago
 Main PID: 4471 (watchdog-mux)
   CGroup: /system.slice/watchdog-mux.service
           └─4471 /usr/sbin/watchdog-mux

May 03 21:52:05 n7 watchdog-mux[4471]: Watchdog driver 'Software Watchdog', version 0

root@n7# service pve-ha-crm status
● pve-ha-crm.service - PVE Cluster Ressource Manager Daemon
   Loaded: loaded (/lib/systemd/system/pve-ha-crm.service; enabled)
   Active: active (running) since Tue 2016-05-03 21:52:38 CEST; 14h ago
  Process: 5346 ExecStart=/usr/sbin/pve-ha-crm start (code=exited, status=0/SUCCESS)
 Main PID: 5348 (pve-ha-crm)
   CGroup: /system.slice/pve-ha-crm.service
           └─5348 pve-ha-crm

May 03 21:52:38 n7 pve-ha-crm[5348]: starting server
May 03 21:52:38 n7 pve-ha-crm[5348]: status change startup => wait_for_quorum
May 03 22:06:38 n7 pve-ha-crm[5348]: status change wait_for_quorum => slave
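
What I have in mind is roughly this, on every node; just a sketch, not verified. I believe pve-ha-lrm (not shown above) is the actual watchdog client, so it should go down before watchdog-mux:

# stop the HA stack first so nothing is holding the watchdog
systemctl stop pve-ha-lrm
systemctl stop pve-ha-crm

# now watchdog-mux should be stoppable without triggering a fence
systemctl stop watchdog-mux

# ... edit /etc/pve/corosync.conf (bump config_version), restart corosync ...

systemctl start watchdog-mux pve-ha-lrm pve-ha-crm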
 
Might just be able to follow this example and create a custom security group named proxmox, add our 2x corosync rings to it, and finish with a custom default drop/reject policy. Will go test this...
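
I.e. roughly this in /etc/pve/firewall/cluster.fw; again just a sketch of what I intend to test, with the /28 on ring 1 assumed:

[OPTIONS]

enable: 1
policy_in: DROP     # custom default deny

[group proxmox]

IN ACCEPT -source 10.45.71.0/24       # corosync ring 0
IN ACCEPT -source 193.162.153.240/28  # corosync ring 1 (prefix assumed)

[RULES]

GROUP proxmox

I'm assuming a security group can be referenced at datacenter level like that; otherwise it would have to go into each node's host.fw.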