[SOLVED] PVE 8.4.1 - 2-host cluster + SDN VXLAN: communication fails across hosts when firewall enabled

Hi,

I have a 2-host cluster running PVE 8.4.1.

I have 2 physical interfaces on each host:
- eno1: WAN with public IP
- eno2: private network

On each host, I've created the corresponding bridges:
- vmbr0: using eno1 as ports/slaves
- rpn0: using eno2 as ports/slaves

The cluster is linked via the rpn0 IP addresses.
All working well so far.
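
For reference, the bridge part of /etc/network/interfaces looks roughly like this on host 1 (the addresses below are just placeholders, not my real IPs):

Code:
auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

# WAN bridge on the public interface
auto vmbr0
iface vmbr0 inet static
        address 203.0.113.10/24
        gateway 203.0.113.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# private network bridge, used for the cluster link
auto rpn0
iface rpn0 inet static
        address 10.0.0.1/24
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0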

Now, to enable my VMs and containers to communicate across the 2 hosts, I created at the datacenter level:
- an SDN zone of type VXLAN called myzone1
- with a VNet called vmbr1

Works like a charm: every VM and container with an interface on vmbr1 can talk to the others, regardless of which Proxmox host they run on.
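
On disk, the SDN part ends up in /etc/pve/sdn/ and looks roughly like this (peer addresses and VXLAN tag are placeholders, consistent with the rpn0 example above; file names shown as comments):

Code:
# /etc/pve/sdn/zones.cfg
vxlan: myzone1
        peers 10.0.0.1,10.0.0.2
        mtu 1450

# /etc/pve/sdn/vnets.cfg
vnet: vmbr1
        zone myzone1
        tag 100000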
In summary, you get the following setup:
[attachment: diagram-proxmox-tmp.png]

vm1, lxc1 can communicate with vm2 and lxc2.

Time for the firewall activation at the datacenter level, to ensure all traffic is covered by explicit rules.
Rules for corosync and all the basic PVE services are enabled, as per the documentation.
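
Concretely, /etc/pve/firewall/cluster.fw contains something along these lines (10.0.0.0/24 stands for the private rpn0 network here):

Code:
[OPTIONS]
enable: 1

[RULES]
IN SSH(ACCEPT) -source 10.0.0.0/24 # SSH between nodes
IN ACCEPT -source 10.0.0.0/24 -p tcp -dport 8006 # web GUI / API
IN ACCEPT -source 10.0.0.0/24 -p udp -dport 5405:5412 # corosync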

All works well, EXCEPT that VMs and containers cannot communicate if they're on different Proxmox hosts.
It only works between VMs and containers on the same Proxmox host.
vm1 and lxc1 can communicate.
vm2 and lxc2 can communicate.
vm1 cannot communicate with vm2 and lxc2 (ping, for example), and vice versa.
lxc1 cannot communicate with vm2 and lxc2 (ping, for example), and vice versa.
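
One way I can narrow this down is to watch the encapsulated VXLAN traffic (UDP port 4789) on the private interface of both hosts while pinging from vm1 to vm2, with the firewall on and then off:

Code:
# on host 1 (where vm1 runs)
tcpdump -ni eno2 udp port 4789

# on host 2 (where vm2 runs)
tcpdump -ni eno2 udp port 4789

If the encapsulated packets stop arriving on the destination host once the firewall is enabled, that would point at the host firewall rather than the VM-level rules.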

Even if I add an "allow ping from anywhere" rule at the top of the datacenter firewall, it doesn't help.
This may relate to FORWARD rules that can't be handled at the GUI level, but I'm unclear about what needs to be added.
If I disable the firewall, everything works fine again.
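
My guess is that the missing piece is the host-to-host VXLAN traffic itself (UDP port 4789 between the nodes), which would need something like the following in /etc/pve/firewall/cluster.fw (again assuming 10.0.0.0/24 is the private network), but I'd like confirmation before relying on it:

Code:
[RULES]
IN ACCEPT -source 10.0.0.0/24 -p udp -dport 4789 # VXLAN between the nodes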

Let me know your thoughts.
Thank you.
 