PVE firewall with NAT not working

5k7

New Member
Sep 28, 2022
Hello,

I set up a fresh Proxmox installation on a dedicated server at Hetzner. I have only one IPv4 address and an IPv6 subnet. Everything is working fine, meaning:
- 3 interfaces: eno1, vmbr0 (routed) and vmbr1 (with NAT).
- I can reach the VM directly through IPv6.
- The VM can reach the IPv4 network through NAT.

/etc/network/interfaces

Code:
### LOOPBACK ###
auto lo
iface lo inet loopback
iface lo inet6 loopback

### IPv4 ###
# Main IPv4 from Host
auto eno1
iface eno1 inet static
  address <MAIN IP>
  netmask 255.255.255.255 
  gateway <GATEWAY_IP>
  pointopoint <GATEWAY_IP>

### IPv6 ###
# Main IPv6
iface eno1 inet6 static
  address <ipv6 addr from subnet>::2
  netmask 128
  gateway <gateway>
  up sysctl -p


### VM-Routed IPv4
auto vmbr0
iface vmbr0 inet static
  address <MainIP>
  netmask 255.255.255.255
  bridge_ports none
  bridge_stp off
  bridge_fd 0

#VM-Routed IPv6
iface vmbr0 inet6 static
  address <ipv6>::3
  netmask 64
  up ip -6 route add <ipv6>::/64 dev vmbr0

### Private NAT used by Proxmox
auto vmbr1
iface vmbr1 inet static
  address  10.10.10.1
  netmask  255.255.255.0
  bridge_ports none
  bridge_stp off
  bridge_fd 0
  bridge_maxwait 0
  post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
  post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE

VM configuration:

Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet6 static
        address <ipv6>::4/64
        gateway <ipv6>::3

auto eth1
iface eth1 inet dhcp

For now, at least at the point I'm at, everything is working as expected. The problem is that when I turn on the firewall, I can no longer use IPv4 from the VM. I have the firewall enabled only at the datacenter level, with entries for SSH and port 8006 for the GUI. IPv6 seems to be working fine.

Should I add some special entry for vmbr1 (the NAT bridge) to keep connections initiated from the VM working?
 
See 3.3.6 in the docs: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_network_configuration
Code:
In some masquerade setups with firewall enabled, conntrack zones might be needed for outgoing connections. Otherwise the firewall could block outgoing connections since they will prefer the POSTROUTING of the VM bridge (and not MASQUERADE).
And the solution mentioned in the docs:
Code:
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1

See if that fixes the issue for you.
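For reference, a minimal sketch of where those rules could go, assuming you simply append them to the vmbr1 stanza of /etc/network/interfaces from your first post (same addresses as already used there):
Code:
### Private NAT used by Proxmox
auto vmbr1
iface vmbr1 inet static
  address  10.10.10.1
  netmask  255.255.255.0
  bridge_ports none
  bridge_stp off
  bridge_fd 0
  bridge_maxwait 0
  post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
  post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
  # conntrack zone for traffic coming in via the firewall bridges (fwbr*),
  # so outgoing NAT connections are not blocked when the PVE firewall is enabled
  post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
  post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1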
 
Indeed that fixed the problem. Now everything is working like a charm. Thank you very much.
 
Does anyone know how to fix this issue, but for hairpin NAT? I've searched everywhere but couldn't find anything :(
 
Does anyone know how to fix this issue, but for hairpin NAT? I've searched everywhere but couldn't find anything :(
You mean a normal SNAT/DNAT?
Code:
# enable IPv4 forwarding on the host
echo 1 > /proc/sys/net/ipv4/ip_forward
# conntrack zone rule so the PVE firewall (fwbr bridges) does not block the NATed traffic
iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1


#iptables -t nat -A POSTROUTING -s '10.11.250.0/24' -o eth0 -j MASQUERADE
# outgoing: source-NAT the VM's private address to the additional public-side IP
iptables -w -t nat -A POSTROUTING -o eth0 -s 10.11.250.20 -j SNAT --to-source 10.10.250.20

# incoming: forward ports 80 and 22 on the additional IP to the VM
iptables -w -t nat -A PREROUTING -i eth0 -p tcp -m tcp -d 10.10.250.20 --dport 80 -j DNAT --to-destination 10.11.250.20
iptables -w -t nat -A PREROUTING -i eth0 -p tcp -m tcp -d 10.10.250.20 --dport 22 -j DNAT --to-destination 10.11.250.20

This works, of course, with
Code:
up ip addr add 10.10.250.20 dev eth0
in the interfaces file.

In this example we only forward two ports to the VM; of course you can forward all ports at once if you so desire. At the same time we allow all ports outgoing.
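If you did want to forward everything in one rule, a hedged sketch with the same example addresses (everything sent to the additional IP gets handed to the VM):
Code:
# forward all ports and protocols destined for the additional IP to the VM
iptables -w -t nat -A PREROUTING -i eth0 -d 10.10.250.20 -j DNAT --to-destination 10.11.250.20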
 
Unfortunately, a simple SNAT/DNAT only works if the vNIC firewall is disabled, independently of the LXC container's firewall status.
 
Unfortunately, a simple SNAT/DNAT only works if the vNIC firewall is disabled, independently of the LXC container's firewall status.
I can't speak for LXC containers, but it does work with the VM firewall if you put PREROUTING into the fwbr chain (line 2 of the code above).
 
I can't speak for LXC containers, but it does work with the VM firewall if you put PREROUTING into the fwbr chain (line 2 of the code above).
Unfortunately it doesn't work for me if the vNIC has the firewall enabled (but DISABLED in the firewall panel).

Code:
auto lo
iface lo inet loopback

iface enp41s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address publicIP/26
    gateway publicGW
    bridge-ports enp41s0
    bridge-stp off
    bridge-fd 0
    post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp -m multiport --dports 443,80 -j DNAT --to 192.168.20.200
    post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp -m multiport --dports 443,80 -j DNAT --to 192.168.20.200
    post-up iptables -t nat -A PREROUTING -i vmbr0 -p udp -m multiport --dports 443,80 -j DNAT --to 192.168.20.200
    post-down iptables -t nat -D PREROUTING -i vmbr0 -p udp -m multiport --dports 443,80 -j DNAT --to 192.168.20.200
    post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp -m multiport --dports 25,465,587,143,993,110,995,4190 -j DNAT --to 192.168.20.3
    post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp -m multiport --dports 25,465,587,143,993,110,995,4190 -j DNAT --to 192.168.20.3

    #https://forum.proxmox.com/threads/pve-firewall-with-nat-not-working.115896/
    post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
    post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1




auto vmbr1
iface vmbr1 inet static
    address 192.168.20.0/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

    post-up ip route add 192.168.1.0/24 via 192.168.20.7
    post-down ip route del 192.168.1.0/24 via 192.168.20.7
    post-up ip route add 192.168.6.0/24 via 192.168.20.7
    post-down ip route del 192.168.6.0/24 via 192.168.20.7
    post-up ip route add 192.168.8.0/24 via 192.168.20.7
    post-down ip route del 192.168.8.0/24 via 192.168.20.7



    post-up iptables -t nat -A POSTROUTING -s '192.168.20.0/24' -o vmbr0 -j MASQUERADE && iptables -t nat -A PREROUTING -d publicIP -p tcp --dport 443 -j DNAT --to 192.168.20.200:443
    post-down iptables -t nat -D POSTROUTING -s '192.168.20.0/24' -o vmbr0 -j MASQUERADE && iptables -t nat -D PREROUTING -d publicIP -p tcp --dport 443 -j DNAT --to 192.168.20.200:443
 
If the firewall is enabled on the vNIC but disabled in the firewall panel, that means the firewall is OFF.

If you are still having blocking issues, it isn't the firewall at all, and I know what it is, as I just made a post about it.
When you activate the vNIC firewall checkmark, regardless of whether the firewall is on or off, the MTU will be forcefully set to either the MTU value entered OR 1500 if nothing is entered.
If you set the MTU to 1, the bridge MTU will be used.

That's relevant because if your MTU is too high, issues on some services will look like firewall blocks.
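If you want the vNIC to simply inherit the bridge MTU, a hedged example of setting that on a VM's network device (the VMID 100 and bridge name are placeholders; this redefines net0, so include your existing options):
Code:
# mtu=1 tells PVE to use the bridge MTU for this VirtIO NIC
qm set 100 --net0 virtio,bridge=vmbr1,firewall=1,mtu=1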

Also, I would prefer SNAT over MASQUERADE: it's faster and will allow you to reliably know your outgoing IP, which MASQUERADE will not.
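As a sketch of that swap, using the NAT bridge from the first post (replace <MAIN IP> with the host's public address on eno1):
Code:
# instead of -j MASQUERADE, pin the outgoing source address explicitly
post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j SNAT --to-source <MAIN IP>
post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j SNAT --to-source <MAIN IP>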
 
The MTU has not been modified in either the VMs or the containers. I do believe there is still something wrong with how the routing is configured, as the need for
Code:
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
is proof that this problem exists.
 
Oh, hold on, this is not the proper NAT setup.

You're supposed to give the public IP to your network adapter,
then make vmbr0 a private network,

and then you can NAT to it.
 
The MTU has not been modified in either the VMs or the containers. I do believe there is still something wrong with how the routing is configured, as the need for
Code:
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
is proof that this problem exists.
No, it's not proof of a problem.

The additional command is necessary to add PREROUTING to the fwbr chain, which normally is not needed there, so it isn't included by default.
If you do NAT, you need it in, so PREROUTING can get filtered by fwbr, which in turn allows you to set rules via the GUI.

You don't need to add it to fwbr, but then you need to set rules outside the GUI.


I am not sure NAT will work if your main adapter is a bridge. It would be cleaner and simpler to make your primary adapter the interface holding the public IP, then NAT to the vmbr private network as intended. In any case, it's not a good idea to put your private network onto the public adapter, but I would guess your hosting provider will block the leakage anyway.
 
OK, I don't know why I did this, but I did.
Here is a working config:

Code:
auto lo
iface lo inet loopback



auto enp41s0
iface enp41s0 inet static
    address publicIP/26
    gateway publicGW

        post-up    iptables -w -t nat -A PREROUTING -i enp41s0 -p tcp -m tcp -m multiport -d publicIP --dports 443,80 -j DNAT --to-destination 192.168.20.200
        post-up    iptables -w -t nat -A PREROUTING -i enp41s0 -p udp -m udp -m multiport -d publicIP --dports 443,80 -j DNAT --to-destination 192.168.20.200
        post-up    iptables -w -t nat -A PREROUTING -i enp41s0 -p tcp -m tcp -m multiport -d publicIP --dports 25,465,587,143,993,110,995,4190 -j DNAT --to-destination 192.168.20.3
        post-up    iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1

        post-down  iptables -w -t nat -D PREROUTING -i enp41s0 -p tcp -m tcp -m multiport -d publicIP --dports 443,80 -j DNAT --to-destination 192.168.20.200
        post-down  iptables -w -t nat -D PREROUTING -i enp41s0 -p udp -m udp -m multiport -d publicIP --dports 443,80 -j DNAT --to-destination 192.168.20.200
        post-down  iptables -w -t nat -D PREROUTING -i enp41s0 -p tcp -m tcp -m multiport -d publicIP --dports 25,465,587,143,993,110,995,4190 -j DNAT --to-destination 192.168.20.3
        post-down  iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1






auto vmbr0
iface vmbr0 inet static
    address 192.168.20.0/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

    
    post-up       ip route add 192.168.0.0/16 via 192.168.20.7
    post-down     ip route del 192.168.0.0/16 via 192.168.20.7
    
    # with this, even your routed subnets at .1, .6 and .8 should find their way out, and it's not MASQUERADE, which is a good thing
    post-up     iptables -w -t nat -A POSTROUTING -o enp41s0 -s 192.168.0.0/16 -j SNAT --to-source publicIP
    post-down     iptables -w -t nat -D POSTROUTING -o enp41s0 -s 192.168.0.0/16 -j SNAT --to-source publicIP
    
    
    
    # this one route simplifies your setup for now; you only need to split it into 3 routes if you have some 192.168.x.x elsewhere to route to
    
    #post-up     ip route add 192.168.1.0/24 via 192.168.20.7
    #post-up     ip route add 192.168.6.0/24 via 192.168.20.7
    #post-up     ip route add 192.168.8.0/24 via 192.168.20.7
  

    #post-down     ip route del 192.168.1.0/24 via 192.168.20.7
    #post-down     ip route del 192.168.6.0/24 via 192.168.20.7
    #post-down     ip route del 192.168.8.0/24 via 192.168.20.7

    # here you used MASQUERADE (we don't like that), and you NAT a second time
    #post-up iptables -t nat -A POSTROUTING -s '192.168.20.0/24' -o vmbr0 -j MASQUERADE && iptables -t nat -A PREROUTING -d publicIP -p tcp --dport 443 -j DNAT --to 192.168.20.200:443
    #post-down iptables -t nat -D POSTROUTING -s '192.168.20.0/24' -o vmbr0 -j MASQUERADE && iptables -t nat -D PREROUTING -d publicIP -p tcp --dport 443 -j DNAT --to 192.168.20.200:443
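After reloading the network config (e.g. ifreload -a with ifupdown2), a quick way to check that the NAT rules were actually installed, purely as a usage sketch:
Code:
# list the nat table rules as installed
iptables -t nat -S
# and the raw-table conntrack zone rule
iptables -t raw -S PREROUTING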
 
It is actually how it's described in the "Masquerading (NAT)" section of this guide: https://community.hetzner.com/tutorials/install-and-configure-proxmox_ve/
No, it is not.
Reread it again; you confused "routed" with "NAT" there.

The NAT section, even at Hetzner (famous for some esoteric stuff), is basically the same as what I wrote for you.

They write that the main interface gets the IP, then the bridge gets the private subnet, then you NAT, as it should be.
If you need more than one IP, you simply add IPs to the main interface and NAT some more.

I made you a complete setup; replace publicIP with your real IP and this will work.

Take it or leave it.
 
This still does not work btw
 
The config I gave you does work, at least for VMs. Again, I can't speak for LXC at all, I don't use LXC containers, but it definitely works for VMs.

Of course, the network config needs to be changed inside the VM for vmbr0.
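As a rough sketch of what that could look like inside the VM (the addresses are assumptions: the VM keeps 192.168.20.200 and the host bridge is reachable at a usable host address such as 192.168.20.1):
Code:
auto lo
iface lo inet loopback

# private NIC attached to vmbr0 (the NATed bridge on the Proxmox host)
auto eth0
iface eth0 inet static
        address 192.168.20.200/24
        # gateway = the address you give vmbr0 on the Proxmox host
        gateway 192.168.20.1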