Proxmox firewall logic makes zero sense?!

Gruenschnabel

New Member
Sep 25, 2025
I seriously don’t understand what Proxmox is doing here, and I could use a reality check.

Here’s my exact setup (rough config sketch after the list):

1. Datacenter Firewall ON
Policies: IN = ACCEPT, OUT = ACCEPT, FORWARD = ACCEPT
One rule:

  • IN / ACCEPT / vmbr0.70 / tcp / myPC → 8006 (WebGUI rule, left over from when I had IN = REJECT)
2. Node Firewall ON
There are no default policy options I can set here.
One rule:

  • IN / ACCEPT / vmbr0.70 / tcp / myPC → 8006 (WebGUI rule, left over from when I had IN = REJECT on the Datacenter FW)
3. VM Firewall ON
Policies: IN = ACCEPT, OUT = ACCEPT
No rules at all
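Rough sketch of what I think that corresponds to in the config files (I set everything through the GUI, so the IPs and exact option keys below are placeholders / my reading of the docs):

Code:
# /etc/pve/firewall/cluster.fw  (Datacenter)
[OPTIONS]
enable: 1
policy_in: ACCEPT
policy_out: ACCEPT
policy_forward: ACCEPT # key name for the FORWARD policy as I understand the docs

[RULES]
IN ACCEPT -i vmbr0.70 -source 192.168.70.10 -p tcp -dport 8006 # WebGUI from myPC

# /etc/pve/local/host.fw  (Node)
[OPTIONS]
enable: 1

[RULES]
IN ACCEPT -i vmbr0.70 -source 192.168.70.10 -p tcp -dport 8006 # WebGUI from myPC

# /etc/pve/firewall/<vmid>.fw  (VM)
[OPTIONS]
enable: 1
policy_in: ACCEPT
policy_out: ACCEPT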

Result:

  • pfSense can ping the VM
  • The VM cannot ping pfSense
  • Outbound ICMP from the VM gets silently dropped somewhere inside Proxmox
Now the confusing part:

If I disable Datacenter FW + Node FW (leaving only the VM FW enabled with both policies set to ACCEPT and no rules)…
Ping works instantly.

WTF? Am I totally dumb, or is the Proxmox FW just trash?

What ChatGPT told me:
Even if the VM firewall is set to ACCEPT, once Datacenter-FW is enabled, it loads global chains that still affect every NIC path:

VM → VM-FW → Bridge → Node-FW → Datacenter-Forward → NIC → pfSense
If ANY chain decides to drop something, the packet dies — even with ACCEPT everywhere.

Is that really the intended behavior?

What’s the real best-practice here?
If I want some VMs/LXCs to have full network access and others to be blocked/restricted:

  • Should all of this be handled entirely on pfSense (VLANs, rules, isolation)?
  • Or should the Proxmox VM firewall be used for per-VM allow/deny rules?
  • Or both?
Thanks in advance.
 
If I disable Datacenter FW + Node FW (leaving only the VM FW enabled with both policies set to ACCEPT and no rules)…
Ping works instantly.

Disabling the firewall at the Datacenter level disables everything, including the VM firewall; it's a global switch. So if you disable it globally, it makes sense that traffic passes when there are issues with the firewall rules.
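For reference, that global switch is the enable option in the datacenter config, roughly:

Code:
# /etc/pve/firewall/cluster.fw
[OPTIONS]
enable: 1  # 0 here turns off the whole firewall stack, including guest firewalls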
Are you using the nftables-based firewall (host setting nftables) or iptables?


Even if the VM firewall is set to ACCEPT, once Datacenter-FW is enabled, it loads global chains that still affect every NIC path:

This is mostly related to conntrack: enabling the firewall at the Datacenter level loads the conntrack module, which automatically tracks all flows on the host, and invalid flows are then dropped in the forward chain. Other than that, enabling the Datacenter firewall by itself has no effect, since either the Node or a Guest firewall additionally needs to be enabled for rules to be generated.
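To illustrate, with the iptables backend the generated ruleset contains conntrack handling roughly along these lines (excerpt from memory, the exact chain layout depends on the version):

Code:
# iptables-save excerpt with the firewall enabled (iptables backend)
-A FORWARD -j PVEFW-FORWARD
-A PVEFW-FORWARD -m conntrack --ctstate INVALID -j DROP
-A PVEFW-FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT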


VM → VM-FW → Bridge → Node-FW → Datacenter-Forward → NIC → pfSense

VM traffic does not pass the node (IN) firewall, unless the VM is talking directly to the host. The host firewall has no effect on traffic leaving the host via a bridged network interface. The forward firewall only has an effect if you're using nftables and the host itself routes the traffic; otherwise, if it's just bridged, VM traffic isn't visible in the node's forward chain.



Where is the pfSense located? Is it external, or a VM on the (same?) host?
What is your network configuration (on the host / inside the VM)? What does the firewall configuration look like exactly?

------------------------

Could you post the output of the following commands? You can censor global IPs, but please do so in a fashion that I'm able to see which IPs are the same and which are different.

Code:
grep -r '' /etc/pve/firewall/*.fw   # datacenter + guest firewall configs
cat /etc/pve/local/host.fw          # node firewall config
ip a                                # interfaces and addresses
ip r                                # routing table
iptables-save                       # currently loaded iptables ruleset
 
Thanks a lot @shanreich! That's a lot of really good information I didn't get from the Proxmox documentation so far. I just have some more questions:


1. Can you clarify which nftables chains are loaded when the Datacenter firewall is enabled but contains no rules? Which default rules or jumps are installed, especially in the forward chain?

2. Is it expected that enabling the Datacenter firewall loads conntrack which then marks bridged VM→LAN ICMP flows as INVALID and drops them, even when all policies are ACCEPT?

3. What is the recommended configuration to have Datacenter-FW = ON, Node-FW = OFF, VM-FW = ON, without forward-chain interference on bridged traffic? My goal is to filter each VM / LXC individually, because they have different needs. WebGUI / SSH access is handled by pfSense running on separate hardware.

4. When using Datacenter-FW with nftables, are explicit forward-accept rules required on vmbrX for bridged VM outbound traffic?

5. Is iptables mode more stable with bridged VM traffic, given conntrack behavior in nftables?

6. Is the intended workflow for per-VM isolation to only enable VM firewall and leave Node-FW disabled, with Datacenter-FW enabled only as master switch?
 
I should've been more exact in my previous reply: there is one difference in the nftables version compared to the iptables version, since it tries to fix the conntrack behavior of the iptables firewall for guest traffic. The behavior mentioned above w.r.t. conntrack applies to the iptables firewall!

The main difference is:
With iptables, enabling the datacenter firewall creates a FORWARD rule that applies the conntrack rule to *all* traffic, even bridged.
With nftables it only affects guest traffic if the firewall is enabled explicitly for the guest.

Generally speaking, you can always check the generated firewall ruleset via:
Code:
nft list ruleset

Or, for iptables:
Code:
iptables-save
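If the full output is too noisy, you can limit it to the Proxmox-generated parts, for example (the nftables table names may differ between versions, check nft list tables if in doubt):

Code:
# iptables backend: generated chains are prefixed with PVEFW
iptables-save | grep -i pvefw

# nftables backend: list only the Proxmox-managed tables
nft list table inet proxmox-firewall
nft list table bridge proxmox-firewall-guests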

1. Can you clarify which nftables chains are loaded when the Datacenter firewall is enabled but contains no rules? Which default rules or jumps are installed, especially in the forward chain?

nftables doesn't create any rules if neither the host nor any guest firewalls are enabled.

If the host firewall is additionally enabled, without any rules, it will create the default ruleset [1] in the inet table, plus rules for conntrack in the input and forward chains for the host.

If any guest firewall is enabled, it will generate rules in the bridge table that affect only that guest's traffic, but no rules that affect guest traffic generally. As already mentioned, conntrack in nftables is only applied to traffic from guests where the firewall is enabled!

2. Is it expected that enabling the Datacenter firewall loads conntrack which then marks bridged VM→LAN ICMP flows as INVALID and drops them, even when all policies are ACCEPT?

For the iptables firewall, yes. With nftables this behavior was changed intentionally (conntrack only applies to guests that have the firewall enabled, not to guests where it isn't enabled) and shouldn't happen.

If the VM traffic is routed via the host and the host has the firewall enabled, then all traffic routed via the host is affected by the forward chain of course.


3. What is the recommended configuration to have Datacenter-FW = ON, Node-FW = OFF, VM-FW = ON, without forward-chain interference on bridged traffic? My goal is to filter each VM / LXC, because they have different needs. WebGUI SSH Access is ruled by pfsense running on another hardware.

This should work with nftables, so if you encounter any issues there, I'd take a look at them; for that I'd need more information though. Setting the Datacenter firewall to ON, Node to OFF, VM to ON should then only affect the VMs with the firewall enabled (this includes conntrack on the host). Please see below for debugging your specific issue.
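As a rough sketch, assuming VMID 101, that combination corresponds to:

Code:
# /etc/pve/firewall/cluster.fw  (Datacenter: master switch on)
[OPTIONS]
enable: 1

# /etc/pve/local/host.fw  (Node firewall off, or simply no file)
[OPTIONS]
enable: 0

# /etc/pve/firewall/101.fw  (per-VM firewall and rules)
[OPTIONS]
enable: 1
policy_in: ACCEPT
policy_out: ACCEPT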


4. When using Datacenter-FW with nftables, are explicit forward-accept rules required on vmbrX for bridged VM outbound traffic?
If the traffic is bridged (not routed!), then no; the default policy is accept.


5. Is iptables mode more stable with bridged VM traffic, given conntrack behavior in nftables?
As already mentioned, with nftables conntrack should only affect guests with the firewall enabled, while with iptables it affects all guests automatically. So the behavior is effectively the same, since you have the VM firewall enabled for all guests.

6. Is the intended workflow for per-VM isolation to only enable VM firewall and leave Node-FW disabled, with Datacenter-FW enabled only as master switch?
Yes.



If you want to debug why a specific packet gets dropped, you can use nft monitor trace, as described in the documentation [2]. That should give you an indication of where your traffic gets dropped and why.
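For example (the family/table/chain in the second command are placeholders, take the real names from nft list ruleset, and the IP is just an example VM address):

Code:
# terminal 1: watch trace events
nft monitor trace

# terminal 2: mark the packets you care about for tracing
nft insert rule <family> <table> <chain> ip saddr 192.0.2.101 meta nftrace set 1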

[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pve_firewall_default_rules
[2] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pve_firewall_nft_helpful_commands
 
Thanks a lot @shanreich for the detailed explanation – that really helped clear things up. I went back through my setup and found two concrete issues:
  1. Firewall backend:
    I’m still on the legacy iptables backend. As you described, enabling the Datacenter firewall in iptables mode creates a FORWARD rule with conntrack that affects all bridged guests. That explains why things started to behave strangely as soon as DC-FW was enabled, even with ACCEPT policies.
  2. ipfilter on the VM:
    On the test VM (ID 101) I had ipfilter: 1 enabled but no proper ipfilter-net0 IPSet defined with the VM’s IP address. I fixed that now.

So the broken ping was a combination of:
  • iptables-based DC firewall (global conntrack/forward), and
  • ipfilter enabled without a populated ipfilter-net0 IPSet.
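For anyone hitting the same issue, the VM-side fix looks roughly like this in /etc/pve/firewall/101.fw (the IP is a placeholder for the VM's actual address on net0):

Code:
[OPTIONS]
enable: 1
ipfilter: 1

[IPSET ipfilter-net0]
192.168.70.101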

My plan now:
  • Switch the firewall backend to nftables (sketch of the host option after this list). Is that already recommended or still tech-preview?
  • Keep Datacenter-FW = ON, Node-FW = OFF, because my pfSense will filter everything needed for the node.
  • Use VM-FW = ON per VM for per-guest policies.
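As far as I understand it, switching the backend is just the nftables host option (the exact key is my reading of the docs):

Code:
# /etc/pve/local/host.fw  (or node > Firewall > Options in the GUI)
[OPTIONS]
nftables: 1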

This matches exactly what you described as the intended workflow, so that helps a lot.
Thanks again for the clarification and the links to the docs.