I'm experiencing a problem with the blacklist IPSet - I can't seem to make it apply to VMs.
As I understand it from the documentation, in general, Firewall rules configured at the Datacentre level will apply to all Nodes, but won't apply to VMs.
One exception is that an IPSet called blacklist will apply to all Nodes and to all VMs.
Is this right? If it is, then I'm doing something wrong, because while I can get a blacklist IPSet to apply perfectly well to Nodes, it doesn't seem to apply to VMs running on the nodes.
I'm wondering if I've missed a step, or an item in the documentation, that might explain it?
The test setup is as follows:
Proxmox 5.1-41
At the Datacentre level:
Firewall enabled
policy_in changed to ACCEPT (just for testing purposes, to prevent lockouts)
An IPSet called blacklist (all lower case) is manually created, then 192.168.1.16 added to it.
Still at the Datacentre level, I added a firewall rule: Direction IN, Action DROP, with +blacklist selected from the Source dropdown.
The above rule is then moved below a "GROUP management" rule item, which allows 192.168.1.10 (my admin PC, in case I lock myself out despite my other precautions).
For testing, I created a VM on 192.168.1.60 on Node 192.168.1.100.
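Before running the tests, I sanity-checked things from the node's shell like this (the grep pattern is just my guess at how pve-firewall names the generated kernel set, so adjust if needed):

pve-firewall status                       # should report the firewall as enabled/running
pve-firewall compile                      # prints the ruleset generated from the config files
ipset list | grep -i -A 5 blacklist       # confirm 192.168.1.16 made it into the kernel set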
From 192.168.1.16, I can ping and SSH to the VM at 192.168.1.60, but can't see the node at 192.168.1.100 at all.
On removing 192.168.1.16 from the blacklist IPSet, I can then ping and SSH to 192.168.1.100 again.
So obviously the IP is in the blacklist correctly, and it is applying to the Node. It just isn't applying to the VM.
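One diagnostic I tried, based on my understanding that an active VM firewall inserts an fwbr... bridge between the VM's tap device and vmbr0 (VMID 100 below is a placeholder for my test VM's ID):

ip link show tap100i0    # the "master" field should show fwbr100i0 if the firewall bridge is in the path, or vmbr0 if not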
I'm currently using Bridged networking. Does this make a difference?
Initially I had the Firewall at the VM level disabled. I thought maybe that was the problem, but enabling it made no difference.
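The one setting I'm less sure about is the per-NIC firewall flag, as opposed to the VM's Firewall option. If I've read the GUI correctly, the NIC line in the VM config should carry firewall=1, something like this (VMID 100 and the MAC are placeholders):

qm config 100 | grep ^net
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,firewall=1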
The Firewall is Enabled at the Node level.
Obviously I'm testing from and to IPs that are all within the local network, rather than from outside. I can see how that might have an impact, except that the blacklist did work to block access to the Node.
Any suggestions/pointers/clarifications would be very much appreciated! I'm sure I'm just doing something stupid, but I don't know what it is.
Here's the actual cluster.fw file contents:
[OPTIONS]
enable: 1
policy_in: ACCEPT
[IPSET blacklist] # Applies to all
192.168.1.16 # test blacklist source
[RULES]
GROUP management
IN DROP -source +blacklist
[group management] # Management
IN ACCEPT -source 192.168.1.10 # Allow one machine to get in in case of trouble!
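And for completeness, my understanding is that the VM-level firewall settings live in /etc/pve/firewall/<vmid>.fw, so after enabling the VM firewall that file should contain at least the following (quoted from memory, so treat the exact syntax as an assumption):

[OPTIONS]
enable: 1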