Firewall rules cascading

Whatever

Renowned Member
Nov 19, 2012
Am I correct that if I define a firewall rule explicitly at the datacenter or host level, it should automatically be applied to any VM on the host?

What I've tried:

Datacenter: Enable firewall = Yes
Datacenter policy: ACCEPT / ACCEPT
Datacenter rule: ACCEPT any ICMP traffic

Host: Enable firewall = Yes

VM: Enable firewall = Yes
VM policy: DROP / ACCEPT
VM interface: Use Firewall

As a result: no ping to the VM.

I tried creating the same rule at the host level - still no ping to the VM.

But if I create the same rule at the VM level - ping works as expected.

What's wrong?
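For reference, the GUI settings above map onto Proxmox's firewall config files roughly like this (a sketch; the paths are the standard ones, and VMID 100 is just an example):

```
# /etc/pve/firewall/cluster.fw  -- datacenter level
[OPTIONS]
enable: 1
policy_in: ACCEPT
policy_out: ACCEPT

[RULES]
IN ACCEPT -p icmp    # accept any ICMP traffic

# /etc/pve/nodes/<nodename>/host.fw  -- host level
[OPTIONS]
enable: 1

# /etc/pve/firewall/100.fw  -- VM level
[OPTIONS]
enable: 1
policy_in: DROP
policy_out: ACCEPT
```

The VM's NIC also needs the firewall flag on its netX line (e.g. `net0: virtio=...,bridge=vmbr0,firewall=1`), which is what "Use Firewall" on the interface toggles.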
 
My original post was actually wrong, forgive me. VM rules don't take precedence over DC rules. They are independent.

VMs only apply their own set of rules.
Datacenter rules apply to nodes. Node-level rules apply only to that node.

Security groups, IPsets and aliases defined at datacenter level can be used at VM level.
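For example, an alias and a security group can be defined once at datacenter level and then referenced from any VM's rules (the names `mgmt` and `webtraffic` here are made up):

```
# /etc/pve/firewall/cluster.fw
[ALIASES]
mgmt 192.168.1.0/24

[group webtraffic]
IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443

# /etc/pve/firewall/<vmid>.fw
[RULES]
GROUP webtraffic
IN ACCEPT -source mgmt -p tcp -dport 22
```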

--- old wrong post ---
VM rules take precedence over host rules, which take precedence over datacenter rules.
The idea is that when you go deeper down the hierarchy the settings get more specific. This way you can "generally allow traffic except for a specific VM's network interface".

Usually you want to block everything and allow specific services for specific VMs. If different VMs have different services you can simply allow them on the VM level. If the datacenter rules were to override the VM rules you'd have to allow all services of all VMs on datacenter level and block all unrequired services on each VM explicitly again.

So in your case the VM policy is to drop incoming traffic, and that's where the chain ends.
 
Wolfgang,

Thanks for the clarification. But to be honest I would expect slightly different behavior:
if I don't define any rules at the VM level, but a rule is explicitly defined at the host/datacenter level (and the VM inherits it), then shouldn't the base VM policy (DROP) be applied with lower priority?
 
Even more!

By default the VM policy is defined as DROP / ACCEPT.
So, by your logic, the default policy should be ACCEPT / ACCEPT.

Something seems wrong...
 
Wolfgang,

I've performed the following test:

Datacenter: Enable firewall = Yes
Datacenter policy: DROP / ACCEPT
Datacenter rule: ACCEPT any ICMP traffic

Host: Enable firewall = Yes

VM: Enable firewall = Yes
VM policy: DROP / ACCEPT
VM interface: Use Firewall

As a result:
ping to hosts - OK
no ping to the VM

Conclusion: at the VM level, the VM's base INPUT/OUTPUT policy is always applied (when there are no rules at the VM level), regardless of the host/datacenter levels.
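Which matches Wolfgang's corrected post: to get ping working in this setup, the ICMP rule has to be repeated in the VM's own rule set (a sketch, VMID 100 as an example):

```
# /etc/pve/firewall/100.fw
[OPTIONS]
enable: 1
policy_in: DROP
policy_out: ACCEPT

[RULES]
IN ACCEPT -p icmp    # without this VM-level rule, the DROP policy applies
```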
 
I was mistaken in my post above and updated it. Sorry for that.
 
Well, Wolfgang, what should I do in this case:
I have hundreds of VMs distributed across my cluster and would like to assign the same firewall ruleset to all of them. Assigning IPsets to each VM is too painful, and I'm not even talking about managing them...

Any plans to somehow inherit rules at the VM level?
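One workaround along these lines: define the common ruleset once as a security group at datacenter level and reference it with a single GROUP line per VM (the group name `baseline` is made up):

```
# /etc/pve/firewall/cluster.fw
[group baseline]
IN ACCEPT -p icmp
IN ACCEPT -p tcp -dport 22

# /etc/pve/firewall/<vmid>.fw
[RULES]
GROUP baseline
```

Changing the group then takes effect for every VM that references it, although the one-line reference still has to be added to each VM.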
 
Wiki says:
For each zone, you can define firewall rules for incoming and/or outgoing traffic. Note that the zones "cascade": e.g. a rule set at the host level will affect all the vms on that host.

and it's quite logical... even if I hadn't read this in the wiki, I would expect such behavior
 

Wolfgang, your corrected post indicated that datacenter rules apply to nodes, but not automatically to the VMs on those nodes. I have been configuring a test environment and this does not seem to be the case. Can you please clarify and confirm whether rules set at a higher level automatically cascade down, and how they can be overridden:

At DC level:
- Firewall on.
- DROP / ACCEPT
- Create a security group and add one rule to the group that allows access to port 8006 on an internal link IP
- Expectation: A common security rule set is created that can easily be applied to multiple nodes without having to manually create it per node.

At node level:
- Firewall on.
- DROP / ACCEPT
- Enable the security group created at DC level
- Expectation: All ports on all IPs configured for the node will be blocked except for the one port on the internal NIC/IP that is associated with the enabled security group. Whatever rules were set for this node will apply only to the node's own network layer, not to the network layer of the VMs running on this node.

At VM level:
- Firewall off.
- No rules configured
- Expectation: All open ports (that have been opened by the applications installed on the VM) on all IPs associated with the VM will be accessible from outside. So I would be able to access port 22 on the VM or ping it, and the associated service would respond.

Having configured everything as per the above in both 3.4 and 4.1, the results differ from my expectations. At DC and node level everything works as expected, but at VM level it does not. Even though the VM applications are opening their ports (verified with netstat inside the VM), these services are not accessible and cannot be reached from outside the VM. It appears that whatever rules have been set for the node also apply to the VMs, even though the VM firewalls are all off at the Proxmox configuration level.
Is this correct? I don't think it should be like this. The node firewall config should be totally independent of the VMs', and traffic on port 22 should reach the VM even though the node has been configured not to allow it through for itself.

What would happen if you have competing rules set at a node and VM level? If I drop port 22 at node but specifically accept it at VM, will it come through?
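A sketch of that competing-rules scenario, assuming the node and VM rule sets really are independent as Wolfgang's corrected post says:

```
# /etc/pve/nodes/<nodename>/host.fw
[RULES]
IN DROP -p tcp -dport 22     # blocks SSH to the node itself

# /etc/pve/firewall/<vmid>.fw
[RULES]
IN ACCEPT -p tcp -dport 22   # should still allow SSH to the VM
```

If node rules also filtered bridged VM traffic, the VM-level ACCEPT would never see the packet.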

Unless I understand your wiki page for the firewall completely wrong, I am correct in my assumptions and it should work according to my expectations detailed above, but it's not...

Werner
 
