[SOLVED] Allowing cluster communications?

Jun 8, 2016
Proxmox firewall rules are automatically generated to cover cluster communications by discovering and defining the 'local_network' alias:
Code:
[root@kvm1a ~]# pve-firewall localnet
local hostname: kvm1a
local IP address: 192.168.241.2
network auto detect: 192.168.241.0/24
using detected local_network: 192.168.241.0/24

This automatically generates the necessary inbound and outbound rules for the hosts:
Code:
Chain PVEFW-HOST-IN (1 references)
 pkts bytes target     prot opt in     out     source               destination
 376K 1607M ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0
   55  2800 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate INVALID
3785K   19G ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
 831K  152M PVEFW-smurfs  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate INVALID,NEW
    0     0 RETURN     2    --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     udp  --  *      *       192.168.241.0/24     192.168.241.0/24     udp dpts:5404:5405
    0     0 RETURN     udp  --  *      *       192.168.241.0/24     0.0.0.0/0            ADDRTYPE match dst-type MULTICAST udp dpts:5404:5405
    0     0 PVEFW-Drop  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:0VygpXTrPIKPsfJaIQjO1Rkx73Y */

Chain PVEFW-HOST-OUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
 381K 1612M ACCEPT     all  --  *      lo      0.0.0.0/0            0.0.0.0/0
    4   240 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate INVALID
3559K   16G ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 RETURN     2    --  *      *       0.0.0.0/0            0.0.0.0/0
   18  1080 RETURN     tcp  --  *      *       0.0.0.0/0            192.168.241.0/24     tcp dpt:8006
    0     0 RETURN     tcp  --  *      *       0.0.0.0/0            192.168.241.0/24     tcp dpt:22
    0     0 RETURN     tcp  --  *      *       0.0.0.0/0            192.168.241.0/24     tcp dpts:5900:5999
    0     0 RETURN     tcp  --  *      *       0.0.0.0/0            192.168.241.0/24     tcp dpt:3128
    0     0 RETURN     udp  --  *      *       0.0.0.0/0            192.168.241.0/24     udp dpts:5404:5405
50444   24M RETURN     udp  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type MULTICAST udp dpts:5404:5405
 802K  118M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:jP0LGwnIAYXdHiudtSs6tGLMv8Y */
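For reference, the chains above can be inspected directly on a node; the following is simply how I list them (standard iptables commands):
Code:
iptables -L PVEFW-HOST-IN -nv
iptables -L PVEFW-HOST-OUT -nv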


We had an instance where a failure on the Ceph replication network caused all VMs to freeze because their disk I/O stopped working. The VMs were not automatically recovered on other, working nodes, so we now run cluster communications on the Ceph replication network instead of the network that carries the VMs' bridged traffic:
Code:
[root@kvm1a ~]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.241.2 kvm1a.<hidden_domain> kvm1a pvelocalhost

# corosync network hosts
10.254.1.2 corosync-kvm1a.<hidden_domain> corosync-kvm1a
10.254.1.3 corosync-kvm1b.<hidden_domain> corosync-kvm1b
10.254.1.4 corosync-kvm1c.<hidden_domain> corosync-kvm1c

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

[root@kvm1a ~]# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: kvm1a
    nodeid: 1
    quorum_votes: 1
    ring0_addr: corosync-kvm1a
  }

  node {
    name: kvm1b
    nodeid: 2
    quorum_votes: 1
    ring0_addr: corosync-kvm1b
  }

  node {
    name: kvm1c
    nodeid: 3
    quorum_votes: 1
    ring0_addr: corosync-kvm1c
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: cluster1
  config_version: 5
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.254.1.2
    ringnumber: 0
  }

}
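Once corosync is bound to the new addresses, ring and quorum state can be sanity-checked with the standard tools, for example:
Code:
corosync-cfgtool -s   # ring 0 status on the local node
pvecm status          # quorum and cluster membership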



I can't simply follow the documentation (https://pve.proxmox.com/pve-docs/chapter-pve-firewall.html#pve_firewall_vm_container_configuration) and define local_network manually, as I need the predefined rules for server management access on the existing network. I therefore want to define the rules that allow inbound cluster replication traffic myself, but I'm unable to define the multicast traffic rule.
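For context, the documented override would be roughly the following in /etc/pve/firewall/cluster.fw (a sketch using my corosync network; the catch is that the automatically generated management rules would then follow it onto that network):
Code:
[ALIASES]
local_network 10.254.1.0/24 # override the auto-detected local_network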

What I've added thus far (sketched in config form below this list) is:
  • An alias object called 'ceph' which defines the CIDR 10.254.1.0/24
  • An accept rule using the Ceph macro
  • An accept rule for the unicast cluster communication traffic (UDP 5404:5405)
  • A temporary rule allowing all UDP traffic from the network used for cluster communications, as I cannot define multicast traffic rules
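In firewall config syntax that is roughly the following (a sketch; the last rule is the temporary catch-all I'd like to replace with a proper multicast rule):
Code:
[ALIASES]
ceph 10.254.1.0/24 # Ceph

[RULES]
IN Ceph(ACCEPT) -source ceph # Ceph - Replication Traffic
IN ACCEPT -source ceph -p udp -dport 5404:5405 # Corosync - Cluster Communications
IN ACCEPT -source ceph -p udp # Temporary: allow all UDP from the corosync network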

[Screenshot: node_firewall_rules.jpg]


Question:
How do I define the following rule manually?

0 0 RETURN udp -- * * 192.168.241.0/24 0.0.0.0/0 ADDRTYPE match dst-type MULTICAST udp dpts:5404:5405
 
Got this working by defining an alias called 'multicast' with a CIDR of 224.0.0.0/4.


Then defined the following rule:
[Screenshot: ceph_corosync.jpg]


Applicable sections from /etc/pve/firewall/cluster.fw:
Code:
[ALIASES]
multicast 224.0.0.0/4 # Multicast
ceph 10.254.1.0/24 # Ceph

[RULES]
IN Ceph(ACCEPT) -source ceph # Ceph - Replication Traffic
IN ACCEPT -source ceph -dest ceph -p udp -dport 5404:5405 # Corosync - Cluster Communications
IN ACCEPT -source ceph -dest multicast -p udp -dport 5404:5405 # Corosync - Multicast
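To confirm the generated ruleset picks this up, something like the following works as a quick check (the grep pattern is just what I look for):
Code:
pve-firewall compile | grep -E '5404|MULTICAST'
iptables-save | grep -E '5404|MULTICAST'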
 