Hey,
I have a dedicated server with 1 public IPv4 running Proxmox VE, set up using the Masquerading technique described in the documentation (https://pve.proxmox.com/wiki/Networ...ith_tt_span_class_monospaced_iptables_span_tt) with port forwarding. All guests have static IP addresses.
I use a guest LXC container as reverse proxy to handle domains and SSL/TLS.
As long as I keep certain firewalls disabled, everything works really well.
My problem is that once I enable the firewall on the network device of a guest VM (firewall=1), that guest VM cannot connect to any service that is being forwarded or reverse-proxied (e.g. itself, using the external address). I added rules that should allow it, so I suspect that something in my iptables configuration is wrong.

The usual flow of network traffic is roughly this:
- Port 443 on <domain.tld> / <external-ip> is being forwarded to 192.168.1.100:443 (CT 100)
- nginx in CT 100 handles domains and TLS termination, reverse-proxies to 192.168.1.102:8080 (VM 102)
- webservice in VM 102 is listening on port 8080, plain HTTP only
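In case it helps, this is roughly how the path can be probed hop by hop from inside VM 102 (the --resolve variant pins <domain.tld> to the proxy's internal address, so it skips the external DNAT step entirely):

Code:
# direct to the local backend (works)
curl http://192.168.1.102:8080/
# via the reverse proxy, but over the internal bridge (works)
curl --resolve <domain.tld>:443:192.168.1.100 https://<domain.tld>/
# via the external IP, i.e. through the DNAT rules (times out with firewall=1)
curl https://<domain.tld>/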
High-level firewall settings in the working setup:
- Datacenter: firewall is enabled, allows incoming connections on port 443 (and ICMP)
- Proxmox VE node: firewall is enabled, allows incoming connections on port 443 (and ICMP)
- Reverseproxy Container (100): network device firewall is enabled, firewall in options is enabled, allows incoming connections on port 443 (and ICMP)
- Webservice VM (102): network device firewall is disabled, firewall in options is enabled, allows incoming connections on port 8080 (and ICMP)
The webservice on VM 102 can also connect to "itself" by using <domain.tld>. However, once I enable the network device firewall for it, this does not work anymore. It can ping itself, it can ping the reverse proxy CT, and it can ping the Proxmox VE node via both the internal (192.168.1.1) and the external IPv4 address. But e.g. curl https://<domain.tld> times out, so I assume the packets are discarded or mis-routed.

(Stripped) /etc/network/interfaces:
Code:
source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
iface lo inet6 loopback
auto eno1
iface eno1 inet static
address <external-ip>/27
gateway <gateway>
up route add -net <network> netmask 255.255.255.224 gw <gateway> dev eno1
iface eno1 inet6 static
address <external-ipv6>/128
gateway fe80::1
auto vmbr0
iface vmbr0 inet static
address 192.168.1.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
iface vmbr0 inet6 static
address <external-ipv6>/64
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp
post-up echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
post-up iptables -t nat -A POSTROUTING -s '192.168.1.0/24' -o eno1 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '192.168.1.0/24' -o eno1 -j MASQUERADE
post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
post-up iptables -A PREROUTING -t nat -d <external-ip> -p tcp --dport 80 -j DNAT --to 192.168.1.100
post-down iptables -D PREROUTING -t nat -d <external-ip> -p tcp --dport 80 -j DNAT --to 192.168.1.100
post-up iptables -A PREROUTING -t nat -d <external-ip> -p tcp --dport 443 -j DNAT --to 192.168.1.100
post-down iptables -D PREROUTING -t nat -d <external-ip> -p tcp --dport 443 -j DNAT --to 192.168.1.100
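For debugging on the node, something like the following should show where the packets disappear (conntrack comes from the conntrack-tools package; the fwbr interface name depends on the guest ID and NIC, so fwbr102i0 here is an assumption):

Code:
# watch the DNAT rule hit counters while running the curl from VM 102
iptables -t nat -L PREROUTING -v -n
# live view of new connection tracking entries for port 443
conntrack -E -p tcp --dport 443
# capture on the bridge / firewall bridge to see how far packets get
tcpdump -ni vmbr0 'tcp port 443 or tcp port 8080'
tcpdump -ni fwbr102i0 'tcp port 443 or tcp port 8080'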
Routing table on the Proxmox VE node:
Code:
% ip route
default via <gateway> dev eno1 proto kernel onlink
<network>/27 via <gateway> dev eno1
<network>/27 dev eno1 proto kernel scope link src <external-ip>
192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.1
/etc/pve/firewall/cluster.fw
Code:
[OPTIONS]
enable: 1
[IPSET internal]
192.168.1.0/24 # internal VM bridge IPv4
[RULES]
IN HTTPS(ACCEPT) -log nolog
IN SSH(ACCEPT) -log nolog
IN Ping(ACCEPT) -log nolog
IN ACCEPT -p tcp -dport 8006 -log nolog # Proxmox
[group vm-default]
IN SSH(ACCEPT) -log nolog
IN Ping(ACCEPT) -log nolog
/etc/pve/firewall/100.fw
Code:
[OPTIONS]
enable: 1
log_level_in: nolog
[RULES]
GROUP vm-default
IN HTTPS(ACCEPT) -log nolog
IN HTTP(ACCEPT) -log nolog
/etc/pve/firewall/102.fw
Code:
[OPTIONS]
enable: 1
[RULES]
GROUP vm-default
IN ACCEPT -source +internal -p tcp -dport 8080 -log nolog
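To double-check what these files actually turn into, the generated ruleset can be printed on the node (if I understand the tooling correctly):

Code:
# compile and print the ruleset pve-firewall derives from the .fw files
pve-firewall compile
# current firewall status
pve-firewall status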
I'm running pve-manager/7.1-10/6ddebafe (running kernel: 5.13.19-5-pve).

At this point I'm really reaching the limit of my understanding of what exactly is happening. Maybe someone can spot a flaw in my configuration?