Confused regarding guest isolation on cluster

etfz

New Member
Aug 29, 2025
Hi,

My objective is simple: I want to restrict traffic between guests on a cluster within the same subnet (VNet), while allowing explicit exceptions and upstream traffic.

I see that an isolation option exists for VNets, with the following footnote:

Port isolation is local to each host. Use the VNet Firewall to further isolate traffic in the VNet across nodes. For example, DROP by default and only allow traffic from the IP subnet to the gateway and vice versa.

I'm not really sure what that means. Are guests only isolated from one another if they run on the same host? Do I need to use the VNet firewall to isolate guests on different hosts? Why isn't either one sufficient on its own?

Anyway, I tried enabling the isolate option on the VNet, applying the SDN configuration and restarting my two guests through PVE, but they can still ping each other. They are running on the same host. Do I also need to enable one or more of the plethora of other firewall options?
 
I'm not really sure what that means. Are guests only isolated from one another if they run on the same host? Do I need to use the VNet firewall to isolate guests on different hosts? Why isn't either one sufficient on its own?
Port isolation uses the isolated flag for bridge ports (for more information, see [1]). Because of that, it can only work locally. If you want isolation across hosts, or more fine-grained control over traffic between guests on the same VNet, you can use the VNet firewall.
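For reference, you can verify whether the flag is actually applied by looking at the detailed link output on the host. A minimal sketch (the sample line stands in for real `ip -d link show` output; `tap100i0` is a hypothetical guest interface name):

```shell
# Grep the "isolated" flag out of `ip -d link show` detail output.
# On a host you would pipe `ip -d link show tap100i0` instead of the
# sample line below (the interface name is just an example).
sample='bridge_slave state forwarding vlan_tunnel off isolated on locked off'
echo "$sample" | grep -o 'isolated o[nf]*'   # prints: isolated on
```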

Anyway, I tried enabling the isolate option on the VNet, applying the SDN configuration and restarting my two guests through PVE, but they can still ping each other. They are running on the same host. Do I also need to enable one or more of the plethora of other firewall options?

Could you post your SDN and network configuration for further debugging?

Code:
ip -details a
ip r

cat /etc/network/interfaces
cat /etc/network/interfaces.d/sdn

cat /etc/pve/sdn/zones.cfg
cat /etc/pve/sdn/vnets.cfg
cat /etc/pve/sdn/subnets.cfg

+ the network configuration of both guests:

Code:
qm config <VMID>

[1] https://man7.org/linux/man-pages/man8/bridge.8.html
 
Port isolation uses the isolated flag for bridge ports (for more information, see [1]). Because of that, it can only work locally. If you want isolation across hosts, or more fine-grained control over traffic between guests on the same VNet, you can use the VNet firewall.
I understand. Then, do I still need port isolation if I use the VNet firewall?
Could you post your SDN and network configuration for further debugging?
Certainly. This is from the host on which both guests are currently running.

ip -details a (see attachment)

ip r
Code:
default via 10.7.20.1 dev vmbr0 proto kernel onlink
10.7.9.0/24 dev vmbr2 proto kernel scope link src 10.7.9.202
10.7.20.0/24 dev vmbr0 proto kernel scope link src 10.7.20.12
10.7.21.0/24 dev vmbr1 proto kernel scope link src 10.7.21.12

cat /etc/network/interfaces
Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto ens2f1np1
iface ens2f1np1 inet manual

auto ens3f1np1
iface ens3f1np1 inet manual

auto ens2f0np0
iface ens2f0np0 inet manual

auto ens3f0np0
iface ens3f0np0 inet manual

iface eno1np0 inet manual

iface eno2np1 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves ens2f1np1 ens3f1np1
        bond-miimon 100
        bond-mode 802.3ad

auto bond1
iface bond1 inet manual
        bond-slaves ens2f0np0 ens3f0np0
        bond-miimon 100
        bond-mode 802.3ad
        mtu 9000
#Ceph

auto bond0.20
iface bond0.20 inet manual

auto bond1.21
iface bond1.21 inet manual
        mtu 9000

auto bond0.9
iface bond0.9 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.7.20.12/24
        gateway 10.7.20.1
        bridge-ports bond0.20
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 10.7.21.12/24
        bridge-ports bond1.21
        bridge-stp off
        bridge-fd 0
        mtu 9000

auto vmbr3
iface vmbr3 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr2
iface vmbr2 inet static
        address 10.7.9.202/24
        bridge-ports bond0.9
        bridge-stp off
        bridge-fd 0
#Temp: K1 BACKUP

source /etc/network/interfaces.d/sdn

cat /etc/network/interfaces.d/sdn
Code:
#version:24

auto BACKUP
iface BACKUP
        bridge_ports vmbr3.9
        bridge_stp off
        bridge_fd 0
        alias Backup

auto STAGING
iface STAGING
        bridge_ports vmbr3.19
        bridge_stp off
        bridge_fd 0

auto vlan23
iface vlan23
        bridge_ports vmbr3.23
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vlan25
iface vlan25
        bridge_ports vmbr3.25
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

cat /etc/pve/sdn/zones.cfg
Code:
vlan: vlanzone
        bridge vmbr3
        ipam pve

cat /etc/pve/sdn/vnets.cfg
Code:
vnet: vlan25
        zone vlanzone
        isolate-ports 1
        tag 25
        vlanaware 1

vnet: vlan23
        zone vlanzone
        tag 23
        vlanaware 1

vnet: BACKUP
        zone vlanzone
        alias Backup
        tag 9

vnet: STAGING
        zone vlanzone
        alias STAGING
        isolate-ports 1
        tag 19

vnet: cstage
        zone vlanzone
        alias Customer staging
        tag 26

cat /etc/pve/sdn/subnets.cfg
Code:

qm config 100
Code:
agent: 1
bios: ovmf
boot: order=scsi0;scsi1
cores: 2
cpu: host
efidisk0: ceph-prod:vm-100-disk-0,efitype=4m,size=1M
hotplug: network,disk,usb,cpu,memory
machine: pc-i440fx-10.1
memory: 4096
meta: creation-qemu=10.1.2,ctime=1770807980
name: PVE2
net0: virtio=00:50:56:9e:3a:32,bridge=STAGING
numa: 1
ostype: win11
sata4: cephfs:iso/virtio-win-0.1.271.iso,media=cdrom,size=709474K
scsi0: ceph-prod:vm-100-disk-1,discard=on,iothread=1,size=50G,ssd=1
scsi1: ceph-prod:vm-100-disk-2,discard=on,iothread=1,size=100M,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=421efda3-9821-d5e2-c494-fc7250f3da96
sockets: 2
vmgenid: 6f1018ff-5990-42dc-a164-dfb61959b204

qm config 114
Code:
agent: 1
bios: ovmf
boot: order=scsi0
cores: 2
cpu: x86-64-v2-AES
efidisk0: ceph-prod:vm-114-disk-0,efitype=4m,ms-cert=2023w,pre-enrolled-keys=1,size=528K
machine: pc-i440fx-10.1
memory: 4096
meta: creation-qemu=10.1.2,ctime=1770819373
name: PVE
net0: virtio=BC:24:11:68:2F:B7,bridge=STAGING,firewall=1
ostype: win11
scsi0: ceph-prod:vm-114-disk-1,size=50G
scsi1: ceph-prod:vm-114-disk-2,size=1G
scsihw: pvscsi
smbios1: uuid=5cb1332b-6baf-4fbd-bb9a-f8a9cabc20d1
sockets: 2
vmgenid: 3c9d52b4-da49-4da5-8ba9-a38715d532bf
 

Attachments

The problem seems to be that the fwpr interface still has the isolated property set to off.
Are you sure that you properly restarted the VM 114? For VM 100 it is set correctly (but it doesn't use a firewall bridge, so that could potentially be the cause of this).

If this still persists after rebooting the VM I'd have to check more closely if I can reproduce this on my end and check for potential causes.
 
Which is the fwpr interface, and how do you determine that the property is off?

I positively restarted it, but I don't remember the order in which I did everything. I disabled the firewall option on both guests (but it warned me about it not being enabled on the datacenter level, so I assumed it wouldn't do anything) and have restarted again. It currently seems to be working as desired, as long as both guests run on the same host.

Can you confirm what kinds of changes require rebooting the guests, and does it need to be a PVE-initiated reboot?

When running on different hosts they can still reach each other.

/etc/pve/sdn/firewall/STAGING.fw
Code:
[OPTIONS]

policy_forward: DROP
enable: 1

[RULES]

|FORWARD ACCEPT -source +sdn/STAGING-gateway -log nolog
 
Which is the fwpr interface, and how do you determine that the property is off?

For the firewall to work when using pve-firewall (or OVS with the new proxmox-firewall), we create firewall bridges. You can see them in the ip a output, along with the isolated property, which here is set to off:

Code:
40: fwpr114p0@fwln114i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master STAGING state UP group default qlen 1000

    link/ether 52:5b:b2:95:3e:70 brd ff:ff:ff:ff:ff:ff promiscuity 1 allmulti 1 minmtu 68 maxmtu 65535

    veth

    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8003 port_no 0x3 designated_port 32771 designated_cost 0 designated_bridge 8000.98:3:9b:8c:be:dd designated_root 8000.98:3:9b:8c:be:dd hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on bcast_flood on mcast_to_unicast off neigh_suppress off neigh_vlan_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off locked off mab off numtxqueues 32 numrxqueues 32 gso_max_size 65536 gso_max_segs 65535 tso_max_size 524280 tso_max_segs 65535 gro_max_size 65536 gso_ipv4_max_size 65536 gro_ipv4_max_size 65536

Can you confirm what kinds of changes require rebooting the guests, and does it need to be a PVE-initiated reboot?

Changing firewall implementations (nftables <-> iptables) requires a restart or migration of all guests, since it changes how the network interfaces need to be set up in order for the firewall to work. You can check via ip a which network interfaces are created.
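If I understand the above correctly, with the old pve-firewall each firewalled NIC gets extra fwbr/fwpr/fwln interfaces, so filtering the interface list for those prefixes shows which guests still use the old layout. A sketch (the printf stands in for `ip -o link show` output on a live host):

```shell
# List interface names that follow the firewall-bridge naming scheme.
# The printf stands in for `ip -o link show` output on a real host.
printf '40: fwpr114p0@fwln114i0: <BROADCAST,UP>\n2: eno1np0: <UP>\n' \
  | awk -F': ' '{print $2}' | grep -E '^(fwbr|fwpr|fwln)'
# prints: fwpr114p0@fwln114i0
```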

Can you also check the status of the firewall?

Code:
systemctl status proxmox-firewall

What does the generated ruleset look like on both hosts?

Code:
nft list ruleset

Since you are using the forward chain, I assume you are running proxmox-firewall, as the older pve-firewall does not support the forward direction. Can you make sure that it is enabled on all hosts (it needs to be configured on a per-host basis)? Once it is enabled everywhere, either reboot the hosts or restart all guests / migrate them away and back.
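If you prefer config files over the UI, the per-host nftables switch should correspond to an entry like this in the host firewall config (the path contains your actual node name):

```
# /etc/pve/nodes/<nodename>/host.fw
[OPTIONS]
nftables: 1
```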
 
The forward rule I added was just a test, but I assume I'm going to need to use the forward chain in order to make exceptions for certain guests?
Is proxmox-firewall what the nftables setting is? I have not done anything with that. I can see that the wiki notes that as not being suitable for production, which is very much what this is going to be. Do I need it even just to isolate guests from each other, and would they still be able to reach the gateway?
 
The forward rule I added was just a test, but I assume I'm going to need to use the forward chain in order to make exceptions for certain guests?

Yes, 'Isolate Ports' is all or nothing; with the firewall you have more fine-grained control (at the overhead cost of having to run a firewall).
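To illustrate (a sketch only; the IP addresses and the second rule are made up for the example), exceptions on top of a DROP forward policy could look like this in the VNet firewall config:

```
# /etc/pve/sdn/firewall/STAGING.fw -- hypothetical example
[OPTIONS]
policy_forward: DROP
enable: 1

[RULES]
# allow traffic between the subnet and its gateway
FORWARD ACCEPT -source +sdn/STAGING-gateway -log nolog
# example exception: allow one specific guest to reach another
FORWARD ACCEPT -source 10.7.19.10 -dest 10.7.19.20 -log nolog
```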

Is proxmox-firewall what the nftables setting is? I have not done anything with that. I can see that the wiki notes that as not being suitable for production, which is very much what this is going to be. Do I need it even just to isolate guests from each other, and would they still be able to reach the gateway?

It is, and currently it is only possible to utilize the VNet firewall with the nftables firewall (see [1]). It is currently marked as tech-preview, so running it in production is at your own peril. Sorry, I just assumed that was what you were running and simply didn't think of asking initially.


[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_directions_amp_zones
 
I understand. So, just to be abundantly clear, I can't isolate guests across hosts without nftables?

Can you say anything regarding what's at stake? Like, what's the worst that could happen? What are some things that could happen? Are there any known issues? Is there any roadmap regarding making it a stable feature?

I feel like the web interface should have some kind of indication of these features not having any effect.
 
I understand. So, just to be abundantly clear, I can't isolate guests across hosts without nftables?

There's the possibility of using the old firewall in conjunction with the VM-level firewall and configuring the firewall for each VM separately.

Can you say anything regarding what's at stake? Like, what's the worst that could happen? What are some things that could happen? Are there any known issues? Is there any roadmap regarding making it a stable feature?

The nftables firewall is a relatively young reimplementation of the old iptables-based firewall. It's not quite as well tested and there's always the possibility of undiscovered bugs when using proxmox-firewall. Mid-term it should replace the old iptables implementation, but I cannot give you an exact date for when this will happen.

I feel like the web interface should have some kind of indication of these features not having any effect.
When creating a rule with direction forward, the dialogue in the Web UI already shows a warning:

1771243285257.png
 
Right, but the default forward policy also requires nftables, doesn't it?

I'm going to evaluate nftables. I enabled it on all hosts, rebooted all running guests, and also migrated them back and forth, but... I don't know that it's taken effect. Guests can still reach each other, for one. Do I also need to restart the hosts?

Code:
$ sudo pve-firewall status
Status: disabled/running

Code:
$ sudo iptables-save
# Generated by iptables-save v1.8.11 on Mon Feb 16 14:15:11 2026
*raw
:PREROUTING ACCEPT [471342279:689851449416]
:OUTPUT ACCEPT [467237639:690466672194]
COMMIT
# Completed on Mon Feb 16 14:15:11 2026
# Generated by iptables-save v1.8.11 on Mon Feb 16 14:15:11 2026
*filter
:INPUT ACCEPT [464788154:687555396034]
:FORWARD ACCEPT [1390633:757769288]
:OUTPUT ACCEPT [467102926:690409687459]
COMMIT
# Completed on Mon Feb 16 14:15:11 2026


Code:
$ sudo systemctl status proxmox-firewall
● proxmox-firewall.service - Proxmox nftables firewall
     Loaded: loaded (/usr/lib/systemd/system/proxmox-firewall.service; enabled; preset: enabled)
     Active: active (running) since Wed 2026-02-04 15:19:02 CET; 1 week 4 days ago
 Invocation: 2d925ddd392e41339a6035eaafedba57
   Main PID: 2686 (proxmox-firewal)
      Tasks: 1 (limit: 629145)
     Memory: 2.3M (peak: 5.8M)
        CPU: 14min 39.418s
     CGroup: /system.slice/proxmox-firewall.service
             └─2686 /usr/libexec/proxmox/proxmox-firewall start

Feb 04 15:19:02 hv2 systemd[1]: Started proxmox-firewall.service - Proxmox nftables firewall.

Code:
$ sudo nft list ruleset
 
It seems like no rules are generated (judging from the nft list ruleset output).

Is the pve-firewall service still running?
Did you enable the firewall on datacenter level? Otherwise no rule will be generated at all.
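For reference, enabling the firewall at the datacenter level corresponds to this option in the cluster firewall config:

```
# /etc/pve/firewall/cluster.fw
[OPTIONS]
enable: 1
```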
 
I understand. So, just to be abundantly clear, I can't isolate guests across hosts without nftables?
It might not be as convenient, but you could use the already-running ebtables and add rules that allow (ARP) traffic from/to the gateway only and drop everything else. ebtables is local and rules must be deployed per node, but that could be automated with a script in if-up.d.
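An untested sketch of what such a script might contain (the gateway IP and MAC are placeholders; a real script would also need to handle things like DHCP):

```
# sketch for an /etc/network/if-up.d/ script -- not tested
GW_IP=10.7.19.1            # placeholder gateway IP
GW_MAC=aa:bb:cc:dd:ee:ff   # placeholder gateway MAC
# allow ARP to/from the gateway only
ebtables -A FORWARD --logical-in STAGING -p ARP --arp-ip-dst "$GW_IP" -j ACCEPT
ebtables -A FORWARD --logical-in STAGING -p ARP --arp-ip-src "$GW_IP" -j ACCEPT
# allow frames to/from the gateway MAC, drop everything else on this bridge
ebtables -A FORWARD --logical-in STAGING -d "$GW_MAC" -j ACCEPT
ebtables -A FORWARD --logical-in STAGING -s "$GW_MAC" -j ACCEPT
ebtables -A FORWARD --logical-in STAGING -j DROP
```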
 
It seems like no rules are generated (judging from the nft list ruleset output).

Is the pve-firewall service still running?
Did you enable the firewall on datacenter level? Otherwise no rule will be generated at all.
I should get this out of the way; I have not touched any settings pertaining to firewall functionality prior to this thread.

Yes, pve-firewall appears to be running. I did not enable datacenter level firewall. I have done so now, and can see that rules are generated. All forwarding traffic appears to be blocked, so how do I now allow traffic leaving the VNet? I suspect I need to explicitly define the IP addressing for each VNet, which would be annoying. I want to keep all that in the router. Is there no smart functionality for this?

Also, I allowed all input traffic from my own machine on the datacenter level, but using the web UI console still does not work. I had to set the default policy to accept for that to work. What rule can I add to make that work? Edit: Appears to work only when the guest is running on the PVE node on which I am currently logged in. Edit2: Actually, I'm not sure whether this is the case, either. It stopped working after a while.
 
Last edited: