VNet firewall configuration not working

miguel75

New Member
Sep 30, 2025
Hello, I am having trouble configuring the VNet firewall.

I have set up a PoC with two hosts. There is one VM on each host, each of which has a service interface and another interface for the NAS.

Two VLANs are configured, one for the service and another for the NAS.

The intention is that the machines, regardless of which host they are running on, cannot see each other over the NAS interface. In fact, they should only be able to reach the IP of the NFS server; they should not even be able to ping each other over that VLAN.

According to what I have been reading in the documentation and on this forum, this is achieved by configuring the VNet firewall.

I have included a rule in it that drops all traffic from that VLAN to the same VLAN, and I have enabled the firewall at the cluster and host levels... without adding any more rules. But it doesn't work: the VMs can still ping each other and other machines on that network outside the Proxmox environment.
For clarity, I have not yet included the rule that grants access to the NFS server.
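
For reference, the rule I added looks roughly like this (quoting from memory, so the exact option and IPset names may differ):

Code:
# /etc/pve/sdn/firewall/VLAN1035.fw
[OPTIONS]
enable: 1

[RULES]
# drop all traffic from this VNet to itself
FORWARD DROP -source +sdn/VLAN1035-all -dest +sdn/VLAN1035-all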

I am also not sure whether, every time I modify the VNet firewall rules, I have to run a command or press a button for them to be applied.

nftables is enabled at the host level.

Any suggestions?


 
Do you have Subnets configured for the VNet? The IPsets take the configured IP ranges from the subnets.

Can you post the output of:
Code:
grep -r '' /etc/pve/sdn/*.cfg
systemctl status proxmox-firewall
 
Yes, I have configured subnets for VLAN1035 (the NAS VLAN).

Code:
root@vlcvi:/etc/pve/sdn/firewall# grep -r '' /etc/pve/sdn/*.cfg
systemctl status proxmox-firewall
/etc/pve/sdn/subnets.cfg:subnet: NAS-172.16.128.0-17
/etc/pve/sdn/subnets.cfg: vnet VLAN1035
/etc/pve/sdn/subnets.cfg:
/etc/pve/sdn/vnets.cfg:vnet: VLAN1035
/etc/pve/sdn/vnets.cfg: zone NAS
/etc/pve/sdn/vnets.cfg: alias nas_172_16_128_0_17
/etc/pve/sdn/vnets.cfg: isolate-ports 1
/etc/pve/sdn/vnets.cfg: tag 1035
/etc/pve/sdn/vnets.cfg:
/etc/pve/sdn/vnets.cfg:vnet: VLAN15
/etc/pve/sdn/vnets.cfg: zone APP
/etc/pve/sdn/vnets.cfg: alias APP_10_130_77_0_24
/etc/pve/sdn/vnets.cfg: tag 15
/etc/pve/sdn/vnets.cfg:
/etc/pve/sdn/zones.cfg:vlan: MGMT
/etc/pve/sdn/zones.cfg: bridge vmbr0
/etc/pve/sdn/zones.cfg: ipam pve
/etc/pve/sdn/zones.cfg: mtu 1500
/etc/pve/sdn/zones.cfg:
/etc/pve/sdn/zones.cfg:vlan: NAS
/etc/pve/sdn/zones.cfg: bridge vmbr0
/etc/pve/sdn/zones.cfg: ipam pve
/etc/pve/sdn/zones.cfg: mtu 1500
/etc/pve/sdn/zones.cfg:
/etc/pve/sdn/zones.cfg:vlan: APP
/etc/pve/sdn/zones.cfg: bridge vmbr0
/etc/pve/sdn/zones.cfg: ipam pve
/etc/pve/sdn/zones.cfg: mtu 1500
/etc/pve/sdn/zones.cfg:
● proxmox-firewall.service - Proxmox nftables firewall
     Loaded: loaded (/usr/lib/systemd/system/proxmox-firewall.service; enabled; preset: enabled)
     Active: active (running) since Tue 2025-09-30 10:15:26 CEST; 4h 20min ago
 Invocation: c1582ae58017486cbc497e0483f77efe
   Main PID: 4718 (proxmox-firewal)
      Tasks: 1 (limit: 629145)
     Memory: 3.9M (peak: 10.1M)
        CPU: 1min 2.541s
     CGroup: /system.slice/proxmox-firewall.service
             └─4718 /usr/libexec/proxmox/proxmox-firewall start

Sep 30 10:15:26 vlcvi-p-pve-11 systemd[1]: Started proxmox-firewall.service - Proxmox nftables firewall.
 
You seem to have configured three VLAN zones that all use the same bridge. You should only configure one zone per bridge and then create all the VNets in that zone.
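
Roughly like this, mirroring your current config (the zone name LAN is just an example; keep any additional options such as isolate-ports):

Code:
# /etc/pve/sdn/zones.cfg
vlan: LAN
        bridge vmbr0
        ipam pve
        mtu 1500

# /etc/pve/sdn/vnets.cfg
vnet: VLAN1035
        zone LAN
        tag 1035

vnet: VLAN15
        zone LAN
        tag 15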

Can you save the following nftables commands in a file, make it executable (chmod +x filename), run it, and then capture one ping of a VM with nft monitor trace? (See [1] for more information.)

Code:
#!/usr/sbin/nft -f
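# declaring the table before deleting it makes this script idempotent:
# the delete below then succeeds even if the table does not exist yet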
table bridge tracebridge
delete table bridge tracebridge

table bridge tracebridge {
    chain trace {
        meta l4proto icmp meta nftrace set 1
    }

    chain prerouting {
        type filter hook prerouting priority -350; policy accept;
        jump trace
    }

    chain postrouting {
        type filter hook postrouting priority -350; policy accept;
        jump trace
    }
}
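
For example, assuming you saved it as tracebridge.nft:

Code:
chmod +x tracebridge.nft
./tracebridge.nft                   # load the trace table
nft monitor trace | tee trace.log   # now ping once from the VM, then stop with Ctrl+C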

After you're done debugging, you can delete the created table via:

Code:
nft delete table bridge tracebridge

What does your generated ruleset look like? (You can strip information like public IPs, but please try to leave the IPs of the VNets intact.)

Code:
nft list ruleset
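
If the full ruleset is very long, you can also list just the relevant tables:

Code:
nft list table inet proxmox-firewall
nft list table bridge proxmox-firewall-guests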

[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pve_firewall_nft_helpful_commands
 
Regarding having multiple zones on the same bridge: this is because I only have one interface (actually a bond), and the different VLANs in our infrastructure have all been trunked to that interface.

I am attaching the screenshot you requested along with the ruleset output.
I was pinging from the linux1 server (NAS IP 172.16.192.231) to the linux2 server (NAS IP 172.16.192.232).


Code:
root@vlcvi-p-pve-11:~# nft list ruleset | more
table bridge tracebridge {
        chain trace {
                meta l4proto icmp meta nftrace set 1
        }

        chain prerouting {
                type filter hook prerouting priority -350; policy accept;
                jump trace
        }

        chain postrouting {
                type filter hook postrouting priority -350; policy accept;
                jump trace
        }
}
table inet proxmox-firewall {
        set v4-dc/management {
                type ipv4_addr
                flags interval
                auto-merge
        }

        set v4-dc/management-nomatch {
                type ipv4_addr
                flags interval
                auto-merge
        }

        set v6-dc/management {
                type ipv6_addr
                flags interval
                auto-merge
        }

        set v6-dc/management-nomatch {
                type ipv6_addr
                flags interval
                auto-merge
        }

        set v4-synflood-limit {
                type ipv4_addr
                flags dynamic,timeout
                timeout 1m
        }

        set v6-synflood-limit {
                type ipv6_addr
                flags dynamic,timeout
                timeout 1m
        }

        map bridge-map {
                type ifname : verdict
        }

        chain do-reject {
                meta pkttype broadcast drop
                ip saddr 224.0.0.0/4 drop
                meta l4proto tcp reject with tcp reset
                meta l4proto { icmp, ipv6-icmp } reject
                reject with icmp host-prohibited
                reject with icmpv6 admin-prohibited
                drop
        }

        chain accept-management {
                ip saddr @v4-dc/management ip saddr != @v4-dc/management-nomatch accept
                ip6 saddr @v6-dc/management ip6 saddr != @v6-dc/management-nomatch accept
        }

        chain block-synflood {
                tcp flags & (fin | syn | rst | ack) != syn return
                jump ratelimit-synflood
                drop
        }

        chain log-drop-invalid-tcp {
                jump log-invalid-tcp
                drop
        }

        chain block-invalid-tcp {
                tcp flags & (fin | syn | rst | psh | ack | urg) == fin | psh | urg goto log-drop-invalid-tcp
                tcp flags ! fin,syn,rst,psh,ack,urg goto log-drop-invalid-tcp
                tcp flags & (syn | rst) == syn | rst goto log-drop-invalid-tcp
                tcp flags & (fin | syn) == fin | syn goto log-drop-invalid-tcp
                tcp sport 0 tcp flags & (fin | syn | rst | ack) == syn goto log-drop-invalid-tcp
        }

        chain allow-ndp-in {
                icmpv6 type { nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, nd-redirect } accept
        }

        chain block-ndp-in {
                icmpv6 type { nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, nd-redirect } drop
        }

        chain allow-ndp-out {
                icmpv6 type { nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert } accept
        }

        chain block-ndp-out {
                icmpv6 type { nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert } drop
        }

        chain block-smurfs {
                ip saddr 0.0.0.0 return
                meta pkttype broadcast goto log-drop-smurfs
                ip saddr 224.0.0.0/4 goto log-drop-smurfs
        }

        chain allow-icmp {
                icmp type { destination-unreachable, source-quench, time-exceeded } accept
                icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem } accept
        }

        chain log-drop-smurfs {
                jump log-smurfs
                drop
        }

        chain default-in {
                iifname "lo" accept
                jump allow-icmp
                ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
                meta l4proto igmp accept
                tcp dport { 22, 3128, 5900-5999, 8006 } jump accept-management
                udp dport 5405-5412 accept
                udp dport { 135, 137-139, 445 } goto do-reject
                udp sport 137 udp dport 1024-65535 goto do-reject
                tcp dport { 135, 139, 445 } goto do-reject
                udp dport 1900 drop
                udp sport 53 drop
        }

        chain default-out {
                oifname "lo" accept
                jump allow-icmp
                ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
        }

        chain before-bridge {
                meta protocol arp accept
                meta protocol != arp ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
        }

        chain host-bridge-input {
                type filter hook input priority filter - 1; policy accept;
                iifname vmap @bridge-map
        }

        chain host-bridge-output {
                type filter hook output priority filter + 1; policy accept;
                oifname vmap @bridge-map
        }

        chain input {
                type filter hook input priority filter; policy accept;
                jump default-in
                jump ct-in
                jump option-in
                jump host-in
                jump cluster-in
        }

        chain output {
                type filter hook output priority filter; policy accept;
                jump default-out
                jump option-out
                jump host-out
                jump cluster-out
        }

        chain forward {
                type filter hook forward priority filter; policy accept;
                jump host-forward
                jump cluster-forward
        }

        chain ratelimit-synflood {
        }

        chain log-invalid-tcp {
        }

        chain log-smurfs {
        }

        chain option-in {
        }

        chain option-out {
        }

        chain cluster-in {
        }

        chain cluster-out {
        }

        chain host-in {
        }

        chain host-out {
        }

        chain cluster-forward {
        }

        chain host-forward {
                meta protocol != arp ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
                meta protocol != arp ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
        }

        chain ct-in {
        }

        chain invalid-conntrack {
        }
}
table bridge proxmox-firewall-guests {
        map vm-map-in {
                typeof oifname : verdict
        }

        map vm-map-out {
                typeof iifname : verdict
        }

        map bridge-map {
                type ifname . ifname : verdict
        }

        chain allow-dhcp-in {
                udp sport . udp dport { 547 . 546, 67 . 68 } accept
        }

        chain allow-dhcp-out {
                udp sport . udp dport { 546 . 547, 68 . 67 } accept
        }

        chain block-dhcp-in {
                udp sport . udp dport { 547 . 546, 67 . 68 } drop
        }

        chain block-dhcp-out {
                udp sport . udp dport { 546 . 547, 68 . 67 } drop
        }

        chain allow-ndp-in {
                icmpv6 type { nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, nd-redirect } accept
        }

        chain block-ndp-in {
                icmpv6 type { nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, nd-redirect } drop
        }

        chain allow-ndp-out {
                icmpv6 type { nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert } accept
        }

        chain block-ndp-out {
                icmpv6 type { nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert } drop
        }

        chain allow-ra-out {
                icmpv6 type { nd-router-advert, nd-redirect } accept
        }

        chain block-ra-out {
                icmpv6 type { nd-router-advert, nd-redirect } drop
        }

        chain allow-icmp {
                icmp type { destination-unreachable, source-quench, time-exceeded } accept
                icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem } accept
        }

        chain do-reject {
                meta pkttype broadcast drop
                ip saddr 224.0.0.0/4 drop
                meta l4proto tcp reject with tcp reset
                meta l4proto { icmp, ipv6-icmp } reject
                reject with icmp host-prohibited
                reject with icmpv6 admin-prohibited
                drop
        }

        chain pre-vm-out {
                meta protocol != arp ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
        }

        chain vm-out {
                type filter hook prerouting priority 0; policy accept;
                jump allow-icmp
                iifname vmap @vm-map-out
        }

        chain pre-vm-in {
                meta protocol != arp ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
                meta protocol arp accept
        }

        chain vm-in {
                type filter hook postrouting priority 0; policy accept;
                jump allow-icmp
                oifname vmap @vm-map-in
        }

        chain before-bridge {
                meta protocol arp accept
                meta protocol != arp ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
        }

        chain forward {
                type filter hook forward priority 0; policy accept;
                meta ibrname . meta obrname vmap @bridge-map
        }

        chain invalid-conntrack {
        }
}
 


Can you check the status output of the firewall again? Did you make any changes to the firewall configuration in the meantime?

Code:
systemctl status proxmox-firewall

What does the configuration of your VMs look like?

Code:
qm config <vmid>

What does your /etc/network/interfaces look like?

Code:
cat /etc/network/interfaces
cat /etc/network/interfaces.d/sdn
 
I made some changes, but left everything as it was at the beginning.


Code:
root@vlcvi-p-pve-11:~#  systemctl status proxmox-firewall
● proxmox-firewall.service - Proxmox nftables firewall
     Loaded: loaded (/usr/lib/systemd/system/proxmox-firewall.service; enabled; preset: enabled)
     Active: active (running) since Tue 2025-09-30 16:05:15 CEST; 3s ago
 Invocation: 56f17b347ca2404f9f17508bd70413bf
   Main PID: 162766 (proxmox-firewal)
      Tasks: 1 (limit: 629145)
     Memory: 1.6M (peak: 5.8M)
        CPU: 29ms
     CGroup: /system.slice/proxmox-firewall.service
             └─162766 /usr/libexec/proxmox/proxmox-firewall start

Sep 30 16:05:15 vlcvi-p-pve-11 systemd[1]: Started proxmox-firewall.service - Proxmox nftables firewall.


Code:
root@vlcvi-p-pve-11:~# qm config 101
agent: 1,type=virtio
bios: ovmf
boot: order=scsi0
cores: 2
cpu: x86-64-v2-AES
efidisk0: STORAGE_UNITY:vm-101-disk-0,size=4M
ipconfig0: ip=10.130.77.13/24,gw=10.130.77.1
ipconfig1: ip=172.16.192.231/17
memory: 4096
meta: creation-qemu=9.2.0,ctime=1744030636
name: linux1
nameserver: 10.130.5.105
net0: virtio=BC:24:11:DB:1A:8C,bridge=vmbr0,tag=15
net1: virtio=BC:24:11:F8:C8:3F,bridge=vmbr0,firewall=1,tag=1035
ostype: l26
scsi0: STORAGE_UNITY:vm-101-disk-1,size=80G
scsi1: STORAGE_UNITY:vm-101-cloudinit,media=cdrom,size=4M
scsihw: virtio-scsi-pci
searchdomain: poc.org
smbios1: uuid=e6038bc4-d22c-4d47-96f7-3ea02c980d89
sockets: 1
vmgenid: 315c7e10-477a-40f4-bf0e-9fa81c4bc917


Code:
root@vlcvi-p-pve-11:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto eno5
iface eno5 inet manual

auto ens5f0
iface ens5f0 inet manual

auto ens5f1
iface ens5f1 inet static
        address 172.20.0.80/17
        mtu 9000

auto eno6
iface eno6 inet static
        address 172.20.128.80/17
        mtu 9000

auto bond0
iface bond0 inet manual
        bond-slaves eno5 ens5f0
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr0.744
iface vmbr0.744 inet static
        address 10.130.38.41/24
        gateway 10.130.38.1

source /etc/network/interfaces.d/*


Code:
root@vlcvi-p-pve-11:~# cat /etc/network/interfaces.d/sdn
#version:16

auto VLAN1035
iface VLAN1035
        bridge_ports vmbr0.1035
        bridge_stp off
        bridge_fd 0
        mtu 1500
        alias nas_172_16_128_0_17

auto VLAN15
iface VLAN15
        bridge_ports vmbr0.15
        bridge_stp off
        bridge_fd 0
        mtu 1500
        alias APP_10_130_77_0_24
 
If you want to utilize the VNet, you need to set the network device of the VMs to VLAN1035 instead of vmbr0. Please also remove the tag, since the VLAN tag is handled automatically when the network device is set to VLAN1035. It should look like this:

Code:
net1: virtio=BC:24:11:F8:C8:3F,bridge=VLAN1035,firewall=1
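
For example via the CLI (VMID 101, taken from your qm config output):

Code:
qm set 101 --net1 virtio=BC:24:11:F8:C8:3F,bridge=VLAN1035,firewall=1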

Also, the generated ruleset looks odd, since it doesn't seem to include the VNet firewall chains, yet the daemon doesn't show any error.
Can you delete the tracebridge table that was created for debugging purposes?

Code:
nft delete table bridge tracebridge

Then post the generated ruleset again:

Code:
nft list ruleset
 
then you need to set the network devices of the VMs to VLAN1035 instead of vmbr0.

That was the mistake!!!! Now it is working like a charm! :-D

I'm going to set up the rules now so that only the NFS IP address can be accessed.
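
For anyone finding this thread later, the VNet ruleset I am aiming for looks roughly like this (172.16.128.10 is just a placeholder for our NFS server, and I am assuming NFSv4 on TCP 2049):

Code:
# /etc/pve/sdn/firewall/VLAN1035.fw
[RULES]
# allow every guest on the VNet to reach the NFS server
FORWARD ACCEPT -dest 172.16.128.10 -p tcp -dport 2049
# everything else on this VLAN stays blocked
FORWARD DROP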

Thank you very much!!!!! :-D