pve-firewall with nftables enabled: pending changes

Monero101
Mar 7, 2025
I'm having an issue with pve-firewall reporting "pending changes" as soon as I enable nftables at the host level:

Code:
pve-firewall status
Status: enabled/running (pending changes)

  • Restarting pve-firewall does not help
  • Deleting all VNet firewall rules does not help

Linux x3 6.8.12-4-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-4 (2024-11-06T15:04Z) x86_64 GNU/Linux

Code:
nft --version
nftables v1.0.6 (Lester Gooch #5)

Code:
systemctl status pve-firewall proxmox-firewall
● pve-firewall.service - Proxmox VE firewall
     Loaded: loaded (/lib/systemd/system/pve-firewall.service; enabled; preset: enabled)
     Active: active (running) since Fri 2025-03-07 14:59:04 CET; 8min ago
    Process: 1433741 ExecStartPre=/usr/bin/update-alternatives --set ebtables /usr/sbin/ebtables-legacy (code=exited, status=0/SUCCE>
    Process: 1433743 ExecStartPre=/usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy (code=exited, status=0/SUCCE>
    Process: 1433744 ExecStartPre=/usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy (code=exited, status=0/SUC>
    Process: 1433745 ExecStart=/usr/sbin/pve-firewall start (code=exited, status=0/SUCCESS)
   Main PID: 1433748 (pve-firewall)
      Tasks: 1 (limit: 76816)
     Memory: 98.5M
        CPU: 10.872s
     CGroup: /system.slice/pve-firewall.service
             └─1433748 pve-firewall

Mar 07 14:59:03 chant3 systemd[1]: Starting pve-firewall.service - Proxmox VE firewall...
Mar 07 14:59:04 chant3 pve-firewall[1433748]: starting server
Mar 07 14:59:04 chant3 systemd[1]: Started pve-firewall.service - Proxmox VE firewall.

● proxmox-firewall.service - Proxmox nftables firewall
     Loaded: loaded (/lib/systemd/system/proxmox-firewall.service; enabled; preset: enabled)
     Active: active (running) since Fri 2025-03-07 14:59:06 CET; 8min ago
   Main PID: 1433808 (proxmox-firewal)
      Tasks: 1 (limit: 76816)
     Memory: 944.0K
        CPU: 3.315s
     CGroup: /system.slice/proxmox-firewall.service
             └─1433808 /usr/libexec/proxmox/proxmox-firewall

Mar 07 14:59:06 chant3 systemd[1]: Started proxmox-firewall.service - Proxmox nftables firewall.
 
Do you use the SDN feature? If so, you may have to apply the changes under Datacenter -> SDN -> "Apply".
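If I'm not mistaken, the same apply step can also be triggered from the CLI; a sketch, assuming the standard pvesh path for the SDN apply call:

Code:
# apply pending SDN changes and reload (equivalent to Datacenter -> SDN -> "Apply")
pvesh set /cluster/sdn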
 
Here it is the same, but the firewall rules are created and it's working (checked with nft list ruleset).
Also, I'm on PVE 8.2 currently; I will try to upgrade the cluster.
 
Same here on 3 nodes with 8.3.5
Even after a fresh reboot
Code:
pve-firewall status
Status: enabled/running (pending changes)
 
Having the same issue.

pve-manager/8.4.1/2a5fa54a8503f96d (running kernel: 6.8.12-11-pve)

Datacenter level: port 8006 on vmbr0 doesn't get blocked.


UPDATE: if nftables is disabled, everything works.
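For reference, the nftables toggle I mean is per node: Firewall -> Options in the GUI, or, if I recall the config layout correctly, the host firewall file; a sketch:

Code:
# /etc/pve/nodes/<nodename>/host.fw (sketch; set to 0 to fall back to the iptables-based firewall)
[OPTIONS]
nftables: 1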

Same here on 3 nodes with 8.3.5
Even after a fresh reboot
Code:
pve-firewall status
Status: enabled/running (pending changes)
 
If you're running pve-firewall status, it generates the iptables rules required for the current configuration and compares them with the current state of the iptables (ip6tables, ...) output. Because they don't match, this status output gets printed. pve-firewall is the daemon for the old Perl-based firewall and as such cannot really be used with the new firewall, which runs as the proxmox-firewall daemon. So this doesn't mean that the firewall isn't working.

I started working on implementing the same subcommands for proxmox-firewall [1] - it'd make sense to add some special casing for pve-firewall here as well and either proxy them to the new daemon or at least print a different status. I'll look into implementing this.

[1] https://lore.proxmox.com/all/20250414154455.274151-1-s.hanreich@proxmox.com/
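In the meantime, a quick way to confirm that the nftables firewall is actually enforcing your configuration is to check the new daemon and the live ruleset directly; if I remember the subcommands right, pve-firewall compile additionally shows what the legacy daemon would generate (for comparison only):

Code:
# the new daemon and the ruleset it currently enforces
systemctl status proxmox-firewall
nft list ruleset

# what the legacy Perl firewall would generate as iptables rules (comparison only)
pve-firewall compile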
 
UPDATE: if nft is disabled everything works.

Sorry, didn't see your edit when writing my post - can you post the output of systemctl status proxmox-firewall? Also, what is the output of ip a? I'll try to reproduce the issue; shouldn't be too hard.
 
The issue might also be related to old VNets / bridges. I cleaned them up and that fixed the issue; they seemed to be referenced somewhere.
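A rough way to spot them is to compare what is defined on the node against what guests and SDN still reference; a sketch, assuming the default SDN and guest config locations:

Code:
# bridges currently defined on the node
ip -br link show type bridge
# SDN VNet definitions and the interfaces generated by SDN (if SDN is in use)
cat /etc/pve/sdn/vnets.cfg
cat /etc/network/interfaces.d/sdn
# bridges still referenced by guest configs
grep -H "bridge=" /etc/pve/qemu-server/*.conf /etc/pve/lxc/*.conf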
 
The issue might also be related to old VNets / bridges. I cleaned them up and that fixed the issue; they seemed to be referenced somewhere.
How can I detect them?
 
Sorry, didn't see your edit when writing my post - can you post the output of systemctl status proxmox-firewall? Also, what is the output of ip a? I'll try to reproduce the issue; shouldn't be too hard.

Here you go.

Code:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp87s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq master vmbr0 state DOWN group default qlen 1000
    link/ether brd ff:ff:ff:ff:ff:ff
3: enp90s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether brd ff:ff:ff:ff:ff:ff
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.21/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::5a47:5d18/64 scope link
       valid_lft forever preferred_lft forever
7: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3843:6f70/64 scope link
       valid_lft forever preferred_lft forever
8: vmbr1.10@vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether brd ff:ff:ff:ff:ff:ff
    inet 192.168.6.25/30 scope global vmbr1.10
       valid_lft forever preferred_lft forever
    inet6 fe80::a0be:86fa/64 scope link
       valid_lft forever preferred_lft forever
9: tap600i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether brd ff:ff:ff:ff:ff:ff
Code:
systemctl status proxmox-firewall
● proxmox-firewall.service - Proxmox nftables firewall
     Loaded: loaded (/lib/systemd/system/proxmox-firewall.service; enabled; preset: enabled)
     Active: active (running) since Tue 2025-05-27 11:13:50 CEST; 1h 35min ago
   Main PID: 1112 (proxmox-firewal)
      Tasks: 1 (limit: 38114)
     Memory: 3.6M
        CPU: 6.817s
     CGroup: /system.slice/proxmox-firewall.service
             └─1112 /usr/libexec/proxmox/proxmox-firewall

May 27 11:13:50 systemd[1]: Started proxmox-firewall.service - Proxmox nftables firewall.


I would like to add: if you enable nftables and look at the ruleset (nft list ruleset), you will see that ports 8006 and 22 have a rule that jumps to another chain, and that chain accepts them; this sits sequentially above the custom rules made by the user. Not sure if that's the issue (if nftables works like every other firewall, i.e. follows the rules in sequence, that might be the issue).




Code:
map bridge-map {
    type ifname : verdict
}

chain do-reject {
    meta pkttype broadcast drop
    ip saddr 224.0.0.0/4 drop
    meta l4proto tcp reject with tcp reset
    meta l4proto { icmp, ipv6-icmp } reject
    reject with icmp host-prohibited
    reject with icmpv6 admin-prohibited
    drop
}

chain accept-management {
    ip saddr @v4-dc/management ip saddr != @v4-dc/management-nomatch accept
    ip6 saddr @v6-dc/management ip6 saddr != @v6-dc/management-nomatch accept
}

chain block-synflood {
    tcp flags != syn / fin,syn,rst,ack return
    jump ratelimit-synflood
    drop
}

chain default-in {
    iifname "lo" accept
    jump allow-icmp
    ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
    meta l4proto igmp accept
    tcp dport { 22, 3128, 5900-5999, 8006 } jump accept-management
    udp dport 5405-5412 accept
    udp dport { 135, 137-139, 445 } goto do-reject
    udp sport 137 udp dport 1024-65535 goto do-reject
    tcp dport { 135, 139, 445 } goto do-reject
    udp dport 1900 drop
    udp sport 53 drop
}
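If it helps, the evaluation order can be checked directly by listing the relevant chains with their rule handles (the -a flag); per the full ruleset further down, the input hook jumps into default-in before cluster-in:

Code:
nft -a list chain inet proxmox-firewall input
nft -a list chain inet proxmox-firewall default-in
nft -a list chain inet proxmox-firewall cluster-in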
 
It seems like there weren't any issues with your ruleset. I think the reason your drop rule didn't work is that the firewall generates a default ruleset (for anti-lockout), which accepts traffic on ports 22 and 8006 [1]. This is based on the management IPSet, so if you want to overrule the default management IP (which is the IP that the hostname of the PVE node resolves to, i.e. the IP in the hosts file), you can create a custom management IPSet [2] that contains your vmbr1.10 IP.

[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pve_firewall_default_rules
[2] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_enabling_the_firewall
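For example, a minimal sketch of such an override in /etc/pve/firewall/cluster.fw, based on [2], using the vmbr1.10 address from the ip a output above:

Code:
# /etc/pve/firewall/cluster.fw (sketch): defining your own management IPSet replaces the
# auto-detected one, so only this address is accepted by the default rules for ports 22/8006
[IPSET management]
192.168.6.25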
 
It seems like there weren't any issues with your ruleset. I think the reason your drop rule didn't work is that the firewall generates a default ruleset (for anti-lockout), which accepts traffic on ports 22 and 8006 [1]. This is based on the management IPSet, so if you want to overrule the default management IP (which is the IP that the hostname of the PVE node resolves to, i.e. the IP in the hosts file), you can create a custom management IPSet [2] that contains your vmbr1.10 IP.

[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pve_firewall_default_rules
[2] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_enabling_the_firewall
Yes, I just edited my last post. However, the anti-lockout rules only seem to take effect in nftables, not in iptables.
 
Yes, I just edited my last post. However, the anti-lockout rules only seem to take effect in nftables, not in iptables.
What is the output of pve-firewall localnet?

Can you also post the nft list ruleset output?
 
First command shouldn't matter; second command with nft enabled, please.

Local:

Code:
pve-firewall localnet
local hostname: test
local IP address: 192.168.1.21
network auto detect: 192.168.1.0/24
using detected local_network: 192.168.1.0/24

Ruleset:

Code:
nft list ruleset
table inet proxmox-firewall {
    set v4-dc/management {
        type ipv4_addr
        flags interval
        auto-merge
        elements = { 192.168.1.0/24 }
    }

    set v4-dc/management-nomatch {
        type ipv4_addr
        flags interval
        auto-merge
    }

    set v6-dc/management {
        type ipv6_addr
        flags interval
        auto-merge
    }

    set v6-dc/management-nomatch {
        type ipv6_addr
        flags interval
        auto-merge
    }

    set v4-synflood-limit {
        type ipv4_addr
        flags dynamic,timeout
        timeout 1m
    }

    set v6-synflood-limit {
        type ipv6_addr
        flags dynamic,timeout
        timeout 1m
    }

    map bridge-map {
        type ifname : verdict
    }

    chain do-reject {
        meta pkttype broadcast drop
        ip saddr 224.0.0.0/4 drop
        meta l4proto tcp reject with tcp reset
        meta l4proto { icmp, ipv6-icmp } reject
        reject with icmp host-prohibited
        reject with icmpv6 admin-prohibited
        drop
    }

    chain accept-management {
        ip saddr @v4-dc/management ip saddr != @v4-dc/management-nomatch accept
        ip6 saddr @v6-dc/management ip6 saddr != @v6-dc/management-nomatch accept
    }

    chain block-synflood {
        tcp flags != syn / fin,syn,rst,ack return
        jump ratelimit-synflood
        drop
    }

    chain log-drop-invalid-tcp {
        jump log-invalid-tcp
        drop
    }

    chain block-invalid-tcp {
        tcp flags fin,psh,urg / fin,syn,rst,psh,ack,urg goto log-drop-invalid-tcp
        tcp flags ! fin,syn,rst,psh,ack,urg goto log-drop-invalid-tcp
        tcp flags syn,rst / syn,rst goto log-drop-invalid-tcp
        tcp flags fin,syn / fin,syn goto log-drop-invalid-tcp
        tcp sport 0 tcp flags syn / fin,syn,rst,ack goto log-drop-invalid-tcp
    }

    chain allow-ndp-in {
        icmpv6 type { nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, nd-redirect } accept
    }

    chain block-ndp-in {
        icmpv6 type { nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, nd-redirect } drop
    }

    chain allow-ndp-out {
        icmpv6 type { nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert } accept
    }

    chain block-ndp-out {
        icmpv6 type { nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert } drop
    }

    chain block-smurfs {
        ip saddr 0.0.0.0 return
        meta pkttype broadcast goto log-drop-smurfs
        ip saddr 224.0.0.0/4 goto log-drop-smurfs
    }

    chain allow-icmp {
        icmp type { destination-unreachable, source-quench, time-exceeded } accept
        icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem } accept
    }

    chain log-drop-smurfs {
        jump log-smurfs
        drop
    }

    chain default-in {
        iifname "lo" accept
        jump allow-icmp
        ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
        meta l4proto igmp accept
        tcp dport { 22, 3128, 5900-5999, 8006 } jump accept-management
        udp dport 5405-5412 accept
        udp dport { 135, 137-139, 445 } goto do-reject
        udp sport 137 udp dport 1024-65535 goto do-reject
        tcp dport { 135, 139, 445 } goto do-reject
        udp dport 1900 drop
        udp sport 53 drop
    }

    chain default-out {
        oifname "lo" accept
        jump allow-icmp
        ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
    }

    chain before-bridge {
        meta protocol arp accept
        meta protocol != arp ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
    }

    chain host-bridge-input {
        type filter hook input priority filter - 1; policy accept;
        iifname vmap @bridge-map
    }

    chain host-bridge-output {
        type filter hook output priority filter + 1; policy accept;
        oifname vmap @bridge-map
    }

    chain input {
        type filter hook input priority filter; policy accept;
        jump default-in
        jump ct-in
        jump option-in
        jump host-in
        jump cluster-in
    }

    chain output {
        type filter hook output priority filter; policy accept;
        jump default-out
        jump option-out
        jump host-out
        jump cluster-out
    }

    chain forward {
        type filter hook forward priority filter; policy accept;
        jump host-forward
        jump cluster-forward
    }

    chain ratelimit-synflood {
    }

    chain log-invalid-tcp {
    }

    chain log-smurfs {
    }

    chain option-in {
        jump allow-ndp-in
        jump block-smurfs
    }

    chain option-out {
        jump allow-ndp-out
    }

    chain cluster-in {
        iifname "vmbr1.10" tcp dport 8006 accept
        tcp dport 22223 accept
        iifname "vmbr0" tcp dport 8006 drop
        iifname "vmbr0" drop
        drop
    }

    chain cluster-out {
        accept
    }

    chain host-in {
    }

    chain host-out {
    }

    chain cluster-forward {
        accept
    }

    chain host-forward {
    }

    chain ct-in {
    }

    chain invalid-conntrack {
        drop
    }
}