My original post was actually wrong, forgive me. VM rules don't take precedence over DC rules; they are independent.
VMs only apply their own set of rules.
Datacenter rules apply to the nodes. Node-level rules apply only to that node.
Security groups, IPsets and aliases defined at datacenter level can be used at VM level.
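For example, here is a minimal sketch (hypothetical alias name, group name and VMID) of a group and alias defined at datacenter level and then referenced in a VM's firewall config:

    # /etc/pve/firewall/cluster.fw (datacenter level)
    [ALIASES]
    mgmt_net 10.10.10.0/24

    [group webserver]
    # allow HTTP from the management network (alias defined above)
    IN ACCEPT -p tcp -source mgmt_net -dport 80

    # /etc/pve/firewall/100.fw (VM 100)
    [RULES]
    # pull in the datacenter-defined group for this VM
    GROUP webserver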
--- old wrong post ---
VM rules take precedence over host rules, which take precedence over datacenter rules.
The idea is that the deeper you go down the hierarchy, the more specific the settings get. This way you can "generally allow traffic except for a specific VM's network interface".
Usually you want to block everything and allow specific services for specific VMs. If different VMs run different services, you can simply allow them at the VM level. If the datacenter rules overrode the VM rules, you'd have to allow all services of all VMs at datacenter level and then explicitly block all unneeded services on each VM again.
So in your case the VM policy is to drop incoming traffic, and that's where the chain ends.
-------------------------------------------------------------------------------------------------
Wolfgang, in your corrected post you indicated that datacenter rules apply to nodes but not automatically to the VMs on those nodes. I have been configuring a test environment and this does not seem to be the case. Can you please clarify and confirm whether rules set at a higher level automatically cascade down, and how they can be overridden:
At DC level:
- Firewall on.
- Input policy DROP / output policy ACCEPT
- Create a security group and add one rule to the group that allows access to port 8006 on an internal link IP
- Expectation: A common security rule set is created that can easily be applied to multiple nodes without having to recreate it per node. (A config sketch follows this list.)
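Roughly what I configured, as a sketch (the internal-link IP 10.0.0.5 and the group name are placeholders for my actual values):

    # /etc/pve/firewall/cluster.fw
    [OPTIONS]
    enable: 1
    policy_in: DROP
    policy_out: ACCEPT

    [group mgmt_gui]
    # allow the web GUI only on the internal-link IP
    IN ACCEPT -p tcp -dest 10.0.0.5 -dport 8006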
At node level:
- Firewall on.
- Input policy DROP / output policy ACCEPT
- Enable the security group created at DC level
- Expectation: All ports on all IPs configured for the node will be blocked except the one port on the internal NIC/IP covered by the enabled security group. Whatever rules were set for this node will apply only to the node's own network layer, not to the network layer of the VMs running on it. (Sketch after this list.)
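Again as a sketch (placeholder node name; as far as I can tell the DROP/ACCEPT policy itself lives in cluster.fw, so only the enable flag and the group reference go here):

    # /etc/pve/nodes/<nodename>/host.fw
    [OPTIONS]
    enable: 1

    [RULES]
    # apply the security group defined at DC level
    GROUP mgmt_gui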
At VM level:
- Firewall off.
- No rules configured
- Expectation: All ports opened by the applications installed on the VM, on all IPs associated with the VM, will be accessible from outside. So I would be able to reach port 22 on the VM, or ping it, and the associated service would respond. (Sketch after this list.)
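For completeness, the VM-level config is effectively this (placeholder VMID; note that the firewall checkbox on the VM's netX device is a separate switch from this file):

    # /etc/pve/firewall/<vmid>.fw
    [OPTIONS]
    # VM firewall off, no rules configured
    enable: 0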
Having configured everything as per the above in both 3.4 and 4.1, the results differ from my expectations. At DC and node level everything works as expected, but at VM level it does not. Even though the VM applications are opening their ports (verified with netstat inside the VM), these services are not accessible and cannot be reached from outside the VM. It appears that whatever rules have been set for the node also apply to the VMs, even though the VM firewalls are all off at the Proxmox configuration level.
Is this correct? I don't think it should work like this. The node firewall config should be totally independent of the VMs', and traffic on port 22 should reach the VM even though the node has been configured not to allow it through for itself.
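In case it helps to reproduce, this is how I have been inspecting what the firewall actually generates on the node (if I read the docs right, these come with the pve-firewall service):

    pve-firewall status          # is the firewall daemon running and enabled?
    pve-firewall compile         # print the generated ruleset without applying it
    iptables-save | grep PVEFW   # the PVEFW-* chains actually loaded into the kernel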
What happens when competing rules are set at node and VM level? If I drop port 22 at the node but specifically accept it at the VM, will it get through? (See the sketch below.)
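Concretely, the competing-rule scenario I mean would look something like this (hypothetical paths and VMID):

    # node: /etc/pve/nodes/<nodename>/host.fw
    [RULES]
    IN DROP -p tcp -dport 22

    # VM: /etc/pve/firewall/<vmid>.fw
    [RULES]
    IN ACCEPT -p tcp -dport 22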
Unless I have completely misunderstood your wiki page for the firewall, my assumptions are correct and it should work according to the expectations detailed above, but it doesn't...
Werner