IPv6 Firewalling in PVE

4920441

Hi,

I just set up another PVE host with IPv6 routing and (simple, port-based) firewalling on the PVE host itself.

IPv4 works as expected, IPv6 not so much....

Despite having the same setup for IPv6 as for IPv4, the IPv6 firewalling does not work: everything goes through the firewall and not even a log entry appears in /var/log/pve-firewall.log, despite EVERY rule being set to info-log. (I do see many link-local addresses in the log, but DROP only works with the local public IPv6 network and NOT with the routed IPv6 networks, which go into vmbr999 in my case.)

I used to handle it this way:

allow trusted IPs
deny the rest

but currently this scheme only works with the host's /64 network, not with the routed /48 network.

Even if I explicitly set up a deny/reject/drop rule at the highest priority with EXACTLY matching source and destination IPv6 addresses, the traffic goes through the firewall/Proxmox without hitting the drop rule or producing an entry in the log file...

Any hints as to what I might be doing wrong?

Since the Proxmox PVE's 'own' local network works as expected regarding the firewall rules, while the other, routed network is ignored firewall-wise despite being set up correctly (as far as I can tell), it seems to have something to do with IPv6 forwarding. When enabling

/proc/sys/net/ipv6/conf/all/forwarding

are the PVE firewall rules ignored?
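
(For context, a minimal sketch of how enabling forwarding persistently looks - the sysctl.d file name is just an example:)

Code:
# enable IPv6 forwarding for the running system
sysctl -w net.ipv6.conf.all.forwarding=1
# make it persistent across reboots (file name is an example)
echo "net.ipv6.conf.all.forwarding = 1" > /etc/sysctl.d/99-ipv6-forwarding.conf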

cheers

4920441
 
The standard pve-firewall does not support rules on the forward chain - which is probably why IPv6 traffic is not seen by the rules created there. Could you try the forward chain of the nftables firewall and see if that works out for you?
 
Hi, that makes sense. But what is meant by "forward" when the rule is configured? Only the local forward table?

Can I use nftables and iptables in parallel on Proxmox? Because some features (like masquerading with dynamic IP addresses) are still not really implemented in nftables...

Cheers

4920441
 
Hi, that makes sense. But what is meant by "forward" when the rule is configured? Only the local forward table?
The forward chain in the inet table of nftables, see [1]. Those rules can be set in our Web UI and will work if you enable nftables in the Host options. Please note that nftables is still in tech preview.
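For illustration, a chain attached to the forward hook looks roughly like this (a hand-written sketch with a placeholder prefix - not the ruleset proxmox-firewall generates):

Code:
table inet example {
    chain forward {
        type filter hook forward priority filter; policy drop;
        # let replies of established connections through
        ct state established,related accept
        # placeholder documentation prefix
        ip6 saddr 2001:db8::/48 accept
    }
}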

You can use iptables NAT in conjunction with the nftables firewall (proxmox-firewall). If you're manually configuring masquerade, that is also already possible with nftables [2]. But if iptables NAT works for you, you can stick with it - it should be interoperable.

[1] https://wiki.nftables.org/wiki-nfta...lter_hooks_into_Linux_networking_packet_flows
[2] https://wiki.nftables.org/wiki-nftables/index.php/Quick_reference-nftables_in_10_minutes#Nat
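
For reference, a manual masquerade setup in nftables can look roughly like this (a minimal sketch, assuming the uplink interface is named eth0):

Code:
table ip nat {
    chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        # rewrite source addresses of outgoing traffic to the uplink address
        oifname "eth0" masquerade
    }
}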
 
The standard pve-firewall does not support rules on the forward chain - which is probably why IPv6 traffic is not seen by the rules created there. Could you try the forward chain of the nftables firewall and see if that works out for you?
I just enabled it (I think it's enough to do this on the PVE host itself and then it's 'activated'?)
Do I have to use the /etc/nft config, or do you think it could work via the web interface with the "technology preview" nftables option?

I just tried it out: it makes no difference whether nftables is activated or not - at least not for the firewall interfaces in the web interface - the traffic is not even recognized and arrives unfiltered at the target IPv6 address.

Edit: I just generated my own nftables "include" for that. Is there a good or proper way to add the rules as an include, e.g. in my /etc/nft/custom.conf,
or is it "better" to write an old-school iptables script instead and add the rules by running the script when the interface comes up?

Or like this? (Never tried it - I needed nearly twenty years to get used to iptables and now it's obsolete ;-)

nft -f /etc/nftables.d/custom.nft
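
(For reference, one common Debian pattern is to pull the custom file in from /etc/nftables.conf, which the nftables systemd service loads - a sketch, paths are examples only:)

Code:
#!/usr/sbin/nft -f
# /etc/nftables.conf - loaded by 'systemctl enable --now nftables'
# note: do NOT 'flush ruleset' here, or the proxmox-firewall tables get wiped
include "/etc/nftables.d/custom.nft"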


Thanks for the hints!

Cheers
4920441
 
Maybe it's good to take a step back and check if I understood your issue correctly, judging from your initial post:

but currently this scheme only works with the host's /64 network, not with the routed /48 network

You are using your PVE host as a router for a /48 network - which I presume you are routing to your VMs at vmbr999? In that case you would need to create specific firewall rules in the direction 'forward' at the host level. Please note that usually you need to create two rules, one for each direction (traffic routed to VMs, traffic routed from VMs).
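
For example, something along these lines in the [RULES] section (a sketch with a placeholder prefix):

Code:
FORWARD ACCEPT -source 2001:db8::/48 -log info # traffic routed from the VMs
FORWARD ACCEPT -dest 2001:db8::/48 -log info # traffic routed to the VMs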

What does your current ruleset look like? What kind of rules do you want to create, exactly?
 
You are using your PVE host as a router for a /48 network - which I presume you are routing to your VMs at vmbr999?

Yes, correct. I am using the Proxmox host itself as a router, with very simple rules.

The /64 network, which is (only) directly on the Proxmox host itself, works with firewalling: if I allow a source IP it gets a connection; if I deny it (by not allowing it), packets get dropped.
The routed /48 network, which goes to vmbr999, is totally ignored - everything is allowed. Even if I create a rule with explicit source and destination and logging set to "debug", it does *NOT* even show up in the firewall logs, and the traffic reaches the target unfiltered.

Cheers,

4920441
 
Yes, that is because routed traffic that does not go directly to the host does not count as in/out traffic - so it is never seen by in/out rules at the host level. In that case you will need to create rules in the forward direction.

If you check out the netfilter hook graph [1], that traffic never goes to the Input Hook (= Host in) or Output Hook (= Host Out), but only through the Forward Hook (= Host Forward). So you will need to create your rules with direction forward.


[1] https://wiki.nftables.org/wiki-nfta...lter_hooks_into_Linux_networking_packet_flows
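
If you want to verify which hook the traffic actually passes through, you can trace it (a debugging sketch; it assumes the proxmox-firewall table is loaded, and the inserted rule will be lost whenever the ruleset is regenerated):

Code:
# mark packets in the forward chain for tracing
nft insert rule inet proxmox-firewall forward meta nftrace set 1
# watch the trace events
nft monitor trace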
 
I am pretty sure I created a rule in the datacenter firewall for the forward table which drops everything that was not allowed before.

The funny thing is, I enabled nft, but even after a reboot no nft ruleset is there. Is there something else to enable on the Proxmox side?

Even on the ip6tables side, I think the output of the forward table(s) does not match the configuration in the GUI - the forward chain just seems to end in the in and out chains...

I cannot see my IPv6 forward drop rule at all in ip6tables-save or with ip6tables -L.
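
(Side note: once the nftables firewall is active, the rules do not show up in ip6tables output at all anymore; they would only be visible via nft, e.g.:)

Code:
nft list table inet proxmox-firewall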




[Attached screenshots: forwardrulesproxmox.png, pve-drop.png]
 
What does systemctl status proxmox-firewall say?

Can you post the full ruleset please?

Code:
cat /etc/pve/firewall/cluster.fw
cat /etc/pve/nodes/<nodename>/host.fw

Please note in the second command you need to insert the hostname of your node!
 
Code:
systemctl status proxmox-firewall

Unit proxmox-firewall.service could not be found.

That's kinda odd, isn't it?



Code:
cat /etc/pve/firewall/cluster.fw

....
FORWARD DROP -dest 2000:000:231:0700::/56 -log info # Drop-Incoming foobarbla :/56

...

There is my forward drop.

Code:
cat /etc/pve/nodes/hostname/host.fw
[OPTIONS]

nftables: 1
tcp_flags_log_level: info
smurf_log_level: info
log_level_out: info
log_level_in: info
log_level_forward: info

[RULES]

IN NeighborDiscovery(ACCEPT) -i vmbr999 -log info
OUT NeighborDiscovery(ACCEPT) -i vmbr999 -log info
IN DHCPv6(ACCEPT) -i vmbr999 -log info
IN ACCEPT -i vmbr999 -p udp -dport 67,68 -log info
FORWARD ACCEPT -i vmbr999 -log info
FORWARD ACCEPT -i vmbr999 -source +dc/trusted-ips -log info
|OUT ACCEPT -i vmbr999 -log info
|IN ACCEPT -i vmbr999 -source +dc/trusted-ips -log info
 
Last edited:
I think you meant status pve-firewall?

Code:
systemctl status pve-firewall
● pve-firewall.service - Proxmox VE firewall
     Loaded: loaded (/usr/lib/systemd/system/pve-firewall.service; enabled; preset: enabled)
     Active: active (running) since Fri 2025-08-08 18:37:08 CEST; 1h 51min ago
 Invocation: baf71658c1594560aee5ac7fb90b038d
    Process: 1672 ExecStartPre=/usr/bin/update-alternatives --set ebtables /usr/sbin/ebtables-legacy (code=exited, status=0/SUCCESS)
    Process: 1677 ExecStartPre=/usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy (code=exited, status=0/SUCCESS)
    Process: 1679 ExecStartPre=/usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy (code=exited, status=0/SUCCESS)
    Process: 1681 ExecStart=/usr/sbin/pve-firewall start (code=exited, status=0/SUCCESS)
   Main PID: 1684 (pve-firewall)
      Tasks: 1 (limit: 154372)
     Memory: 103.5M (peak: 120.8M)
        CPU: 1min 3.153s
     CGroup: /system.slice/pve-firewall.service
             └─1684 pve-firewall

Aug 08 18:37:07 ddddddddddddddddd systemd[1]: Starting pve-firewall.service - Proxmox VE firewall...
Aug 08 18:37:08 ddddddddddddddddd pve-firewall[1684]: starting server
Aug 08 18:37:08 ddddddddddddddddd systemd[1]: Started pve-firewall.service - Proxmox VE firewall.
 
Code:
pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-2-pve)
pve-manager: 9.0.3 (running version: 9.0.3/025864202ebb6109)
proxmox-kernel-helper: 9.0.3
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
proxmox-kernel-6.14: 6.14.8-2
proxmox-kernel-6.8.12-13-pve-signed: 6.8.12-13
proxmox-kernel-6.8: 6.8.12-13
amd64-microcode: 3.20240820.1~deb12u1
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
dnsmasq: 2.91-1
ifupdown: residual config
ifupdown2: 3.3.0-1+pmx9
intel-microcode: 3.20250512.1~deb12u1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.11-1
proxmox-backup-file-restore: 4.0.11-1
proxmox-backup-restore-image: not correctly installed
proxmox-kernel-helper: 9.0.3
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: not correctly installed
pve-firewall: 6.0.3
pve-firmware: 3.16-3
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.16
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1
 
pve-firewall is the iptables-based firewall, proxmox-firewall the nftables-based one.

Does reinstalling proxmox-firewall help?

Code:
apt install proxmox-firewall
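
Afterwards the service and its nftables tables should show up (quick checks):

Code:
systemctl status proxmox-firewall
nft list tables
 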
ok... so simply ticking the "nftables tech preview" checkbox does not do the trick alone...


Although nftables rules are loaded, they look nothing like the rules in the GUI - do I have to convert the GUI rules to nft somehow?


In the datacenter firewall it says:

Forward rules only take effect when the nftables firewall is activated in the host options

Is this the checkbox on the host itself, or what is meant by this?
 
ok... so simply ticking the "nftables tech preview" checkbox does not do the trick alone...
proxmox-firewall should be automatically installed - did you upgrade your host and possibly used apt upgrade instead of apt dist-upgrade at some point?

Although nftables rules are loaded, they look nothing like the rules in the GUI - do I have to convert the GUI rules to nft somehow?
no, enabling the nftables firewall should be sufficient

Is this the checkbox on the host itself, or what is meant by this?
yes, the checkbox in the host options


Can you post the following (you can censor IPs if you must, but please make it so I can tell which IP prefixes are matching)

Code:
cat /etc/pve/firewall/cluster.fw
cat /etc/pve/nodes/<nodename>/host.fw
nft list ruleset

How are you testing the firewall rules? Pinging? Please indicate how you are testing the setup.
 
Since nft is now installed, I added my nft script to it, and it works fine so far. Everything which is not explicitly allowed gets blocked, including traffic to the routed networks.

Since the nft scripts are much more readable than the old iptables-save output, I think this add-on could survive the daily admin tasks.

I just add it to the network 'up' scripts and call it a day.


I am testing them with ssh, since icmp(6) is globally allowed anyway.
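
(E.g. roughly like this - the target address is a placeholder; the connection should time out when the DROP rule matches:)

Code:
ssh -6 -o ConnectTimeout=5 root@2001:db8:0:700::10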

I did upgrade the PVE from 8.4.x to 9.0.3 a couple of hours ago, but I did a dist-upgrade and not only an "upgrade".
Maybe it has to do with the fact that I installed it on top of a vanilla Debian installation?


Okay... since the reboot the nftables ruleset is much fuller, and my forward drop rule is now in there and - at first sight - seems to be working now.

Let me check a few things; maybe the nft install followed by the reboot "converted" all iptables rules to nft.




 
No, it does not work after the reboot... even worse: the firewalled /64 network is now also wide open, despite exactly the same rules as with iptables, all set up via the GUI.
My nft script has not run yet and is not merged after the reboot.

Code:
cat /etc/pve/firewall/cluster.fw  | grep -i forward
policy_forward: DROP
FORWARD ACCEPT -source 172.31.255.0/24 -log info # Install-Net
FORWARD ACCEPT -source 2000:000:000:0000::/56 -log info # routed /56
FORWARD ACCEPT -source 2101:000:000:0000::2 -log info
FORWARD ACCEPT -source 100.102.86.189 -log info
FORWARD DROP -dest 2000:000:000:0000::/56 -log info # Drop-Incoming
FORWARD ACCEPT -p icmp -log nolog -icmp-type any
FORWARD ACCEPT -p ipv6-icmp -log nolog

Code:
cat /etc/pve/nodes/hosthosthost/host.fw
[OPTIONS]

nftables: 1
tcp_flags_log_level: info
smurf_log_level: info
log_level_out: info
log_level_in: info
log_level_forward: info

[RULES]

IN NeighborDiscovery(ACCEPT) -i vmbr999 -log info
OUT NeighborDiscovery(ACCEPT) -i vmbr999 -log info
IN DHCPv6(ACCEPT) -i vmbr999 -log info
IN ACCEPT -i vmbr999 -p udp -dport 67,68 -log info
FORWARD ACCEPT -i vmbr999 -log info
FORWARD ACCEPT -i vmbr999 -source +dc/trusted-ips -log info
|OUT ACCEPT -i vmbr999 -log info
|IN ACCEPT -i vmbr999 -source +dc/trusted-ips -log info
Code:
nft list ruleset
table inet proxmox-firewall {
    set v4-dc/management {
        type ipv4_addr
        flags interval
        auto-merge
    }

    set v4-dc/management-nomatch {
        type ipv4_addr
        flags interval
        auto-merge
    }

    set v6-dc/management {
        type ipv6_addr
        flags interval
        auto-merge
    }

    set v6-dc/management-nomatch {
        type ipv6_addr
        flags interval
        auto-merge
    }

    set v4-synflood-limit {
        type ipv4_addr
        flags dynamic,timeout
        timeout 1m
    }

    set v6-synflood-limit {
        type ipv6_addr
        flags dynamic,timeout
        timeout 1m
    }

    map bridge-map {
        type ifname : verdict
    }

    chain do-reject {
        meta pkttype broadcast drop
        ip saddr 224.0.0.0/4 drop
        meta l4proto tcp reject with tcp reset
        meta l4proto { icmp, ipv6-icmp } reject
        reject with icmp host-prohibited
        reject with icmpv6 admin-prohibited
        drop
    }

    chain accept-management {
        ip saddr @v4-dc/management ip saddr != @v4-dc/management-nomatch accept
        ip6 saddr @v6-dc/management ip6 saddr != @v6-dc/management-nomatch accept
    }

    chain block-synflood {
        tcp flags & (fin | syn | rst | ack) != syn return
        jump ratelimit-synflood
        drop
    }

    chain log-drop-invalid-tcp {
        jump log-invalid-tcp
        drop
    }

    chain block-invalid-tcp {
        tcp flags & (fin | syn | rst | psh | ack | urg) == fin | psh | urg goto log-drop-invalid-tcp
        tcp flags ! fin,syn,rst,psh,ack,urg goto log-drop-invalid-tcp
        tcp flags & (syn | rst) == syn | rst goto log-drop-invalid-tcp
        tcp flags & (fin | syn) == fin | syn goto log-drop-invalid-tcp
        tcp sport 0 tcp flags & (fin | syn | rst | ack) == syn goto log-drop-invalid-tcp
    }

    chain allow-ndp-in {
        icmpv6 type { nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, nd-redirect } accept
    }

    chain block-ndp-in {
        icmpv6 type { nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, nd-redirect } drop
    }

    chain allow-ndp-out {
        icmpv6 type { nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert } accept
    }

    chain block-ndp-out {
        icmpv6 type { nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert } drop
    }

    chain block-smurfs {
        ip saddr 0.0.0.0 return
        meta pkttype broadcast goto log-drop-smurfs
        ip saddr 224.0.0.0/4 goto log-drop-smurfs
    }

    chain allow-icmp {
        icmp type { destination-unreachable, source-quench, time-exceeded } accept
        icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem } accept
    }

    chain log-drop-smurfs {
        jump log-smurfs
        drop
    }

    chain default-in {
        iifname "lo" accept
        jump allow-icmp
        ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
        meta l4proto igmp accept
        tcp dport { 22, 3128, 5900-5999, 8006 } jump accept-management
        udp dport 5405-5412 accept
        udp dport { 135, 137-139, 445 } goto do-reject
        udp sport 137 udp dport 1024-65535 goto do-reject
        tcp dport { 135, 139, 445 } goto do-reject
        udp dport 1900 drop
        udp sport 53 drop
    }

    chain default-out {
        oifname "lo" accept
        jump allow-icmp
        ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
    }

    chain before-bridge {
        meta protocol arp accept
        meta protocol != arp ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
    }

    chain host-bridge-input {
        type filter hook input priority filter - 1; policy accept;
        iifname vmap @bridge-map
    }

    chain host-bridge-output {
        type filter hook output priority filter + 1; policy accept;
        oifname vmap @bridge-map
    }

    chain input {
        type filter hook input priority filter; policy accept;
        jump default-in
        jump ct-in
        jump option-in
        jump host-in
        jump cluster-in
    }

    chain output {
        type filter hook output priority filter; policy accept;
        jump default-out
        jump option-out
        jump host-out
        jump cluster-out
    }

    chain forward {
        type filter hook forward priority filter; policy accept;
        jump host-forward
        jump cluster-forward
    }

    chain ratelimit-synflood {
    }

    chain log-invalid-tcp {
    }

    chain log-smurfs {
    }

    chain option-in {
    }

    chain option-out {
    }

    chain cluster-in {
    }

    chain cluster-out {
    }

    chain host-in {
    }

    chain host-out {
    }

    chain cluster-forward {
    }

    chain host-forward {
        meta protocol != arp ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
        meta protocol != arp ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
    [...]
        meta protocol != arp ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
    }

    chain ct-in {
    }

    chain invalid-conntrack {
    }
}
table bridge proxmox-firewall-guests {
    map vm-map-in {
        typeof oifname : verdict
    }

    map vm-map-out {
        typeof iifname : verdict
    }

    map bridge-map {
        type ifname . ifname : verdict
    }

    chain allow-dhcp-in {
        udp sport . udp dport { 547 . 546, 67 . 68 } accept
    }

    chain allow-dhcp-out {
        udp sport . udp dport { 546 . 547, 68 . 67 } accept
    }

    chain block-dhcp-in {
        udp sport . udp dport { 547 . 546, 67 . 68 } drop
    }

    chain block-dhcp-out {
        udp sport . udp dport { 546 . 547, 68 . 67 } drop
    }

    chain allow-ndp-in {
        icmpv6 type { nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, nd-redirect } accept
    }

    chain block-ndp-in {
        icmpv6 type { nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, nd-redirect } drop
    }

    chain allow-ndp-out {
        icmpv6 type { nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert } accept
    }

    chain block-ndp-out {
        icmpv6 type { nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert } drop
    }

    chain allow-ra-out {
        icmpv6 type { nd-router-advert, nd-redirect } accept
    }

    chain block-ra-out {
        icmpv6 type { nd-router-advert, nd-redirect } drop
    }

    chain allow-icmp {
        icmp type { destination-unreachable, source-quench, time-exceeded } accept
        icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem } accept
    }

    chain do-reject {
        meta pkttype broadcast drop
        ip saddr 224.0.0.0/4 drop
        meta l4proto tcp reject with tcp reset
        meta l4proto { icmp, ipv6-icmp } reject
        reject with icmp host-prohibited
        reject with icmpv6 admin-prohibited
        drop
    }

    chain pre-vm-out {
        meta protocol != arp ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
    }

    chain vm-out {
        type filter hook prerouting priority 0; policy accept;
        jump allow-icmp
        iifname vmap @vm-map-out
    }

    chain pre-vm-in {
        meta protocol != arp ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
        meta protocol arp accept
    }

    chain vm-in {
        type filter hook postrouting priority 0; policy accept;
        jump allow-icmp
        oifname vmap @vm-map-in
    }

    chain before-bridge {
        meta protocol arp accept
        meta protocol != arp ct state vmap { invalid : jump invalid-conntrack, established : accept, related : accept }
    }

    chain forward {
        type filter hook forward priority 0; policy accept;
        meta ibrname . meta obrname vmap @bridge-map
    }

    chain invalid-conntrack {
    }
}


And I could SWEAR nft list ruleset showed my forward/incoming drop rule a couple of minutes ago, directly after the reboot!
 
cat /etc/pve/firewall/cluster.fw | grep -i forward

Can you please post this file unfiltered? It seems like there is an invalid rule somewhere in your firewall config, which is why the nft configuration cannot be applied. Does the +dc/trusted-ips IPSet exist?