Problem with the VNet Firewall

Don_Sandman

New Member
Sep 5, 2024
Hello Everyone,

I currently want to try out the SDN feature and the complementary VNet firewall.
While SDN seems to be working perfectly, the VNet firewall doesn't block anything.

The specific problem I encounter is that I have 2 SDN VNets defined: VMNet and mgmNet
I want to be able to manage the VMs in the VMNet VNet from the mgmNet, but traffic from VMNet to mgmNet must be blocked.

Well, I am able to ping my VM in the mgmNet from my VMNet LXC.

The config is the following:

Both VNets are in the same zone, both have Isolate Ports turned on, and the nftables firewall is active.
I have also restarted all my guests to apply the new (nftables-based) firewall.

While doing this, I also discovered that my old firewall rules (implemented with iptables) no longer work.

I've tried to block the traffic by using either a CIDR subnet or the predefined alias.


I also get an error that the firewall can't find my aliases; I suspect that neither the current rules nor the aliases got transferred over when switching to nftables.

TL;DR:
can't get VNet Firewall working
 

Attachments

  • Screenshot 2024-11-23 193236.png
  • Screenshot 2024-11-23 193551.png
I have now reverted to the iptables solution, and the firewall rules do apply again.

I'd really love to use the SDN feature, but since I cannot reliably create firewall rules, it is not worth it for me.
 
If you want to create firewall rules for traffic between VNets, you have to do this at the host level, in the forward direction. VNet Firewall is exclusively for traffic inside the VNet, not traffic going out of the VNet (e.g. to another VNet).
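
To illustrate, using the VNet names from this thread (VMNet, mgmNet) and their auto-generated IPSets, a pair of host-level forward rules allowing management access while blocking the reverse direction could look roughly like this in /etc/pve/local/host.fw (or the equivalent rules in the GUI). Treat this as a sketch, not a tested config:

Code:
[RULES]
FORWARD ACCEPT -source +sdn/mgmNet-all -dest +sdn/VMNet-all
FORWARD DROP -source +sdn/VMNet-all -dest +sdn/mgmNet-all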

With regards to the aliases: from Proxmox VE 7 to 8 the format of the aliases has changed, and we introduced a scope for aliases (e.g. from alias_xyz to dc/alias_xyz). Usually the issue here is that there is still some old alias lurking around in the firewall config. You can fix this by simply saving the respective alias again; a scope should then be generated for that alias and the error should vanish.
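
For example (alias name and subnet are illustrative), a re-saved datacenter alias and a rule referencing it with the new scoped format would look something like this in /etc/pve/firewall/cluster.fw:

Code:
[ALIASES]
alias_xyz 192.168.10.0/24

[RULES]
IN ACCEPT -source dc/alias_xyz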
 
By host level you mean on the node, right?
I've tried that, and also setting it under Datacenter, and pings are still going through.
I tried with both no interface and vmbr0, which it is apparently using according to
Code:
/etc/network/interfaces.d/sdn

The firewall rules do work inside a VNet though, thank you for the clarification on that.
 
I have also tried it with the Forward Policy setting set to DROP, the firewall on my node with source "+sdn/VMNet-all" set to DROP, the host firewall with destination "+sdn/mgmNet-all" set to DROP, and the firewall for the container set to REJECT with destination "+sdn/mgmNet-all".

In the meantime, I have figured out that the forward policy applies to forwarding any packets outside of the local VNet.
 
Here is my firewall setup on the host machine.
Sadly, none of those rules work; my next step would be to check if a second zone would do the trick.

But since I do not want a new zone for every network I want to separate, I'd like to keep it somewhat clean and use the fewest zones I can (logically) have.

I'm also wondering what interface to put in when creating the rules on the host, since Proxmox doesn't like firewall rules with no interface.
 

Attachments

  • Screenshot 2024-11-25 201621.png
What does the output of the following commands look like? Can you attach it to your post?

Code:
journalctl -u proxmox-firewall > firewall.log
nft list ruleset > ruleset.nft
 
Of course, I should have noticed earlier - sorry.

You cannot define an interface for the FORWARD direction; it should be sufficient to just use the IPSets and nothing else. If you want to prevent traffic from both VNets to each other, then you need to define two rules:

FORWARD DROP VMNet -> mgmNet
FORWARD DROP mgmNet -> VMNet

It also seems like you are using an alias somewhere that is not defined anymore:

Code:
Nov 25 20:50:51 pve proxmox-firewall[1085]: proxmox_firewall: error updating firewall rules: could not find alias dc/vmbr0001-interface

Is this still lingering around in the firewall rules somewhere?
 
No worries, I figured it out quite quickly.

For the alias: yes, thank you for noticing. It was actually present on an old LXC container that I still had.
After removing the rules the alias was in, it is working perfectly now and new rules do apply :D

I created 2 rules:
(forward ACCEPT "+sdn/mgmNet-all" "+sdn/VMNet-all")
(forward DROP "+sdn/VMNet-all" "+sdn/mgmNet-all")

Sadly, I cannot ping any VM that's in the VMNet from a VM in the mgmNet.
I'll also have to redo the VM I used for testing, since the one I had randomly shut down all guests and reloaded everything.
 
I too am having a similar issue, I cannot restrict access between VNETs regardless of where I put the rules. I am using nftables as well.
 
Can you post your SDN configuration, as well as the firewall rules for your VNets:

Code:
cat /etc/network/interfaces.d/sdn
cat /etc/pve/sdn/firewall/*.fw
cat /etc/pve/firewall/cluster.fw
cat /etc/pve/local/host.fw

It would also be interesting to see what kind of traffic is passing through the firewall.
 
I can do that, but it might be more productive to help me understand how the firewalls at various levels "should" be configured to accomplish what I am looking for.


I'm hoping to use the SDN capability in the future for a production multi-tenant data center.

I have configured EVPN and that seems to work to provide connectivity to all tenants (VNets) within the cluster.

However, typically we would want all tenants firewalled away from each other by default and only rarely allow connectivity between them via a specific firewall rule. Tenants normally have access to the internet, but we normally NAT them at the physical firewall (I'm considering using SNAT on Proxmox here, but am unsure about that design choice yet).

From reading through all the documentation I have found so far, I'm unsure how to configure the above. I've tried many different combinations without getting the connectivity I'm looking for.

What should the firewall at the DC be set to?
What should the firewall on the VMs be set to? (Ideally I would like to leave it off and control traffic from one central GUI rather than have to hunt down the rules on a per VM basis)
What should the firewall on the hosts be set to?
What should the firewall on the VNET Firewall be set to?
What has to be done to make firewall rule changes at any of these levels effective? I.e. reboot, wait 10 seconds, instant, etc.?

Does nftables completely replace iptables here, or are both firewalls active?

My test looks like this

I have configured two VNets, TESTC1 and TESTC2, each with one fictitious subnet and a Windows VM in each.
I have created a VNet firewall rule allowing them to talk, but just on TCP, allowing me to test with ICMP and TCP to verify I can granularly control traffic between VNets.
I read somewhere that the VNet firewall was only for controlling intra-VNet traffic and host rules were required, so I tried that too.

My results were usually either all traffic allowed through or none. So I would like to back up to a position where you would expect it to work and troubleshoot from there. Thank you for your time.
 
What should the firewall at the DC be set to?
The setting needs to be on if you want to use any part of the firewall - it is like a global on/off switch for the firewall. The Datacenter firewall rules apply to *all* hosts in the cluster.

What should the firewall on the VMs be set to? (Ideally I would like to leave it off and control traffic from one central GUI rather than have to hunt down the rules on a per VM basis)
It is for setting rules for the VM - it depends on what you're trying to achieve. If you want to create specific rules for a single VM, then you need to turn it on for that VM. You could also do it on the VNet firewall level, but that might be a bit more cumbersome.

What should the firewall on the hosts be set to?
Depends on what you're trying to achieve. If you want to firewall incoming traffic to the host, then you need to turn it on.

What should the firewall on the VNET Firewall be set to?
The VNet firewall is for all traffic inside a bridge. That means everything going from one interface on a bridge to another on the same bridge - any combination of host / guest. But the main use case is of course guest <-> guest. If you are using the host as a gateway for that VNet (Simple or EVPN zone), then forwarded traffic will not get picked up by the VNet firewall (it's only for traffic *inside* that VNet). You will need to create rules on the forward chain of the host.
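
As a concrete sketch of that separation (file path from the commands mentioned later in this thread; rule details illustrative): intra-VNet rules go into the per-VNet file, e.g. /etc/pve/sdn/firewall/VMNet.fw, and also use the forward direction:

Code:
[OPTIONS]
enable: 1
policy_forward: ACCEPT

[RULES]
FORWARD DROP -p tcp -dport 22

Traffic routed between VNets is not matched by these rules; that needs rules in the host firewall's forward chain instead.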

What has to be done to make firewall rules changes at any of these levels effective ? i.e. reboot, wait 10 seconds, instant etc?
Changes should get picked up almost immediately (~10 seconds). Sometimes if there are already established connections, then there is a conntracking entry for that connection and traffic will pass nevertheless. So you can flush the conntrack table if you want to make sure (beware that this will kill almost any stateful connection on that host, so I'd not do it on a production system).
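
If you want to rule out established connections while testing, the conntrack table can be flushed with the conntrack tool (from the conntrack package on Debian). Again, beware: this kills existing stateful connections on that host:

Code:
apt install conntrack
conntrack -F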

Does NfTables completely replace iptables in this or are both firewalls active?
It's either/or; nftables is the newer implementation and is currently in preview. It will include more features and is slated to become the default firewall in a future release, after it has gained some exposure.

I have configured two VNets, TESTC1 and TESTC2, each with one fictitious subnet and a Windows VM in each.
I have created a VNet firewall rule allowing them to talk, but just on TCP, allowing me to test with ICMP and TCP to verify I can granularly control traffic between VNets.
I read somewhere that the VNet firewall was only for controlling intra-VNet traffic and host rules were required, so I tried that too.

What kind of zone is it? EVPN, I assume? For traffic that is forwarded by the host (every time the host acts as a router) you need to create rules in the forward chain of the host. If traffic goes from one VNet to another, then this is the case. Forward rules are direction-specific, so you need to allow traffic *both* ways; it is not sufficient to allow it in one direction, since otherwise the responses cannot go through. ICMP needs to be enabled separately from TCP, since they're different protocols.
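
Concretely, with the TESTC1/TESTC2 VNets from the test above and their auto-generated IPSets, allowing both TCP and ICMP between them would need something like four host-level forward rules (a sketch, not a tested config):

Code:
FORWARD ACCEPT -source +sdn/TESTC1-all -dest +sdn/TESTC2-all -p tcp
FORWARD ACCEPT -source +sdn/TESTC2-all -dest +sdn/TESTC1-all -p tcp
FORWARD ACCEPT -source +sdn/TESTC1-all -dest +sdn/TESTC2-all -p icmp
FORWARD ACCEPT -source +sdn/TESTC2-all -dest +sdn/TESTC1-all -p icmp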
 
Thanks,

Yes using eVPN.

With the DC FW on, no host rules, the VNet firewall's forward policy set to DROP, and the FW off on the VM, my pings still go through. Is the default to allow all traffic between VNets? I attached the FW rules for this test.

I verified that if I put FORWARD/DROP rules at the host or DC level then it drops the traffic, but shouldn't there be an implicit deny between VNets?
 


No, there is also a forward policy on the host layer which defaults to ACCEPT. If you ping from one VNet to *another* VNet, then the PVE host is routing that traffic, so you need to create the forward rules in the host firewall.

traffic *inside* the *same* VNet uses the VNet firewall
traffic going from one VNet to *another* VNet uses the forward chain on the host, because the host is routing that traffic
 
Thank you. Is there a way to set the forwarding between VNets to default DROP, rather than having to create drop rules from every customer to every customer? (That's a lot of rules.)
 
You can set the default forward policy on the host level to DROP and only allow the specific flows you want, but note that this will also disrupt traffic going outside (for instance, if you have a simple zone that NATs traffic, that traffic would get dropped too). If you have a test setup and can experiment as you like, then go for it and try it, but I'd recommend not doing this on a production cluster without fully understanding the implications.
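
As a sketch (tenant IPSet names are illustrative), the default-drop approach with explicit allow rules could look like this in /etc/pve/firewall/cluster.fw:

Code:
[OPTIONS]
policy_forward: DROP

[RULES]
FORWARD ACCEPT -source +sdn/tenantA-all -dest +sdn/tenantB-all
FORWARD ACCEPT -source +sdn/tenantB-all -dest +sdn/tenantA-all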
 
OK, thank you, thank you, thank you... I can confirm that placing a FORWARD/DROP rule at the DC level and then placing FORWARD/ACCEPT rules above it accomplishes this task. Of course, I'm not sure whether other traffic that would normally be part of the FORWARD chain is being inadvertently dropped. I'm not sure how I would find out what traffic Proxmox normally puts in that forward chain.
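
One way to check would be to dump the live ruleset (as suggested earlier in the thread) and search it for the generated forward chains:

Code:
nft list ruleset > ruleset.nft
grep -n -i 'forward' ruleset.nft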