[SOLVED] configuring nat in proxmox 6.2-6

lethargos

Well-Known Member
Jun 10, 2017
Hello,

I'm trying to configure SNAT for my virtual machines, but it isn't working and I don't really understand why.
This is how I've configured the interfaces file:
Code:
auto vmbr1
iface vmbr1 inet static
    address 10.10.111.1
    netmask 255.255.255.0
    bridge-ports none
    bridge-stp off
    bridge-fd 0

    post-up iptables -t nat -A POSTROUTING -s '10.10.111.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.111.0/24' -o vmbr0 -j MASQUERADE
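(To pick up changes to this file I reload the network config; with ifupdown2 installed that's roughly the following, otherwise I just reboot the node:)
Code:
# reload /etc/network/interfaces without a reboot (needs ifupdown2)
ifreload -a
# or bring just the NAT bridge down and up again
ifdown vmbr1 && ifup vmbr1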
On my Windows machine I've got:

10.10.111.10/24
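The rest of the VM's network settings follow the same pattern (the gateway is vmbr1's address on the host; the DNS server here is just an example):
Code:
IP:  10.10.111.10/24
GW:  10.10.111.1       # vmbr1 on the Proxmox host
DNS: 192.168.111.1     # example resolver, same as the host uses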

The iptables NAT table looks correct to me:
Code:
root@pve1:~# iptables -t nat -vnL --line-numbers
Chain PREROUTING (policy ACCEPT 22394 packets, 4195K bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 1138 packets, 70431 bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 1081 packets, 67494 bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 1081 packets, 67494 bytes)
num   pkts bytes target     prot opt in     out     source               destination
1      230 15281 MASQUERADE  all  --  *      vmbr0   10.10.111.0/24       0.0.0.0/0

And vmbr0 is the main interface facing the internet:
Code:
root@pve1:~# ip a show vmbr0
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 70:71:bc:83:38:14 brd ff:ff:ff:ff:ff:ff
    inet 192.168.111.16/24 brd 192.168.111.255 scope global dynamic vmbr0

What I find really weird is that the return traffic never reaches the virtual machine. When I run a tcpdump on vmbr0 while pinging an external server from the VM, I can see the reply packets arrive, but the VM never receives them. I would have expected the SNAT rule to be stateful, of course, so something is blocking the traffic and I'm not sure what.

While running ping from the VM:
Code:
root@pve1:~# tcpdump -i vmbr0 icmp -nn -vvv
tcpdump: listening on vmbr0, link-type EN10MB (Ethernet), capture size 262144 bytes
12:03:55.332153 IP (tos 0x0, ttl 127, id 13584, offset 0, flags [none], proto ICMP (1), length 60)
    192.168.111.16 > public_ip: ICMP echo request, id 1, seq 38, length 40
12:03:55.335972 IP (tos 0x0, ttl 58, id 55309, offset 0, flags [none], proto ICMP (1), length 60)
    public_ip > 192.168.111.16: ICMP echo reply, id 1, seq 38, length 40
12:04:00.029916 IP (tos 0x0, ttl 127, id 13585, offset 0, flags [none], proto ICMP (1), length 60)
    192.168.111.16 > public_ip: ICMP echo request, id 1, seq 39, length 40
12:04:00.033527 IP (tos 0x0, ttl 58, id 55541, offset 0, flags [none], proto ICMP (1), length 60)
    public_ip > 192.168.111.16: ICMP echo reply, id 1, seq 39, length 40
12:04:05.029182 IP (tos 0x0, ttl 127, id 13586, offset 0, flags [none], proto ICMP (1), length 60)


Any ideas how I can further debug this?
Thanks!

EDIT:
I forgot to mention that IPv4 forwarding is also enabled:

Code:
root@pve1:~# sysctl -a | grep ip_forward
net.ipv4.ip_forward = 1
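There are a couple of other sysctls I still want to rule out (just things that could interfere with forwarding; I'm not sure they actually apply here):
Code:
# reverse-path filtering can silently drop NATed/asymmetric traffic when set to strict
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
# only present if the br_netfilter module is loaded
sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null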
 
Hello lethargos, I get exactly the same issue with 6.2-6. I followed all the tutorials and the Proxmox VE Administration Guide but it's still not working. The servers are up to date. I have reinstalled the server three times but the problem persists. I really don't know what to do.

Thanks.
 
Hello... As a workaround (perhaps), I'm using a CT with a public and a private IP as a gateway, and it works fine so far. This could help.
 
I've thought of that myself too, but I'm also very interested in DNAT, which I haven't tested yet. I want to expose services running in the virtual machines.
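Something along these lines is what I have in mind for DNAT, analogous to the MASQUERADE rules above (untested; the port and VM address are only examples):
Code:
post-up   iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8080 -j DNAT --to-destination 10.10.111.10:80
post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 8080 -j DNAT --to-destination 10.10.111.10:80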
 
So is this how it is? Proxmox simply won't support SNAT anymore in this version? We just have to deal with this? I find it a little bit astonishing, to say the least.
 
Did you also add the rules mentioned in the Admin Guide?

Note: In some masquerade setups with firewall enabled, conntrack zones might be needed for outgoing connections. Otherwise the firewall could block outgoing connections since they will prefer the POSTROUTING of the VM bridge (and not MASQUERADE).

Adding these lines in the /etc/network/interfaces can fix this problem:
Code:
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
 
Yes, I've already tried that, and it's not working. That's supposed to be needed only if the Proxmox firewall is activated, as far as I understand, but that's not the case here, and I've already shown you. Again, my network configuration:

FILTER:
Code:
root@pve1:~# iptables -vnL
Chain INPUT (policy ACCEPT 10M packets, 3394M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 482 packets, 31753 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 39M packets, 2181G bytes)
 pkts bytes target     prot opt in     out     source               destination

RAW:
Code:
root@pve1:~# iptables -t raw -vnL
Chain PREROUTING (policy ACCEPT 14441 packets, 1751K bytes)
 pkts bytes target     prot opt in     out     source               destination
  307 29071 CT         all  --  fwbr+  *       0.0.0.0/0            0.0.0.0/0            CT zone 1

Chain OUTPUT (policy ACCEPT 13096 packets, 2351K bytes)
 pkts bytes target     prot opt in     out     source               destination

NAT:
Code:
root@pve1:~# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 206K packets, 32M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 5244 packets, 328K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 7759 packets, 471K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 7759 packets, 471K bytes)
 pkts bytes target     prot opt in     out     source               destination
  107  6912 MASQUERADE  all  --  *      vmbr0   10.10.111.0/24       0.0.0.0/0

network interfaces:
Code:
root@pve1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 70:71:bc:83:38:14 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 70:71:bc:83:38:14 brd ff:ff:ff:ff:ff:ff
    inet 192.168.111.16/24 brd 192.168.111.255 scope global dynamic vmbr0
       valid_lft 251577sec preferred_lft 251577sec
    inet6 2a02:a58:8229:8900:7271:bcff:fe83:3814/64 scope global dynamic mngtmpaddr
       valid_lft 89698sec preferred_lft 86098sec
    inet6 fe80::7271:bcff:fe83:3814/64 scope link
       valid_lft forever preferred_lft forever
4: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a2:03:8b:73:f5:dc brd ff:ff:ff:ff:ff:ff
    inet 10.10.111.1/24 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::8c02:ccff:fe99:ec9c/64 scope link
       valid_lft forever preferred_lft forever

Bridge config for the VM that I'm testing with now:

Bash:
root@pve1:~# brctl show
vmbr1        8000.a2038b73f5dc    no        fwpr102p0

On the Ubuntu VM (I'm testing with Ubuntu now, but I don't think that should matter) I've got the following network settings:
Code:
IP: 10.10.111.111/24
GW: 10.10.111.1
DNS: 192.168.111.1 # same as on the host

As I've already said, the VM can ping its gateway and the ICMP requests also reach external servers, but the replies only get as far as vmbr0 (the internet-facing interface) and never reach vmbr1.

The Proxmox host is behind a router which in turn uses NAT, if that makes any difference; it shouldn't, to my mind.
Moreover, the VMs work perfectly if they're part of the internet-facing bridge, i.e. vmbr0, where they get DHCP from the router.
 
Yes, I've already tried that, and it's not working. That's supposed to be needed only if the Proxmox firewall is activated, as far as I understand, but that's not the case here, and I've already shown you.

You didn't mention that fact; you only posted the NAT table, which PVE doesn't use in any case.

Does it work if you ping a VM on vmbr0 from one on vmbr1? What does the ARP cache say about your ping target and source? Can you double-check that forwarding is enabled?
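E.g. something along these lines (addresses are only examples):
Code:
# from a VM on vmbr1 (10.10.111.x), ping a VM that sits on vmbr0
ping 192.168.111.22
# on the host: neighbour/ARP cache and forwarding state
ip neigh show
sysctl net.ipv4.ip_forward net.ipv4.conf.all.forwarding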
 
Hello,

No, pinging between VMs that belong to different bridges does not work. From a VM behind NAT I can only ping the gateway (10.10.111.1).
On the host:
Code:
root@pve1:~# arp -a
? (10.10.111.111) at 0e:f3:46:5f:bf:b1 [ether] on vmbr1
? (192.168.111.22) at 4a:9e:c2:96:5f:eb [ether] on vmbr0
? (192.168.111.2) at b8:e8:56:09:01:74 [ether] on vmbr0
? (192.168.111.21) at 4e:bb:2f:83:fb:94 [ether] on vmbr0
? (192.168.111.1) at f4:79:60:42:7e:3c [ether] on vmbr0

On the VM (in vmbr1):
Code:
? (10.10.111.1) at 5a:a9:c7:ad:9f:5e [ether] on ens18
So nothing surprising here, I'd say.

Code:
root@pve1:~# ip link show vmbr1
4: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 5a:a9:c7:ad:9f:5e brd ff:ff:ff:ff:ff:ff

Yes, you're right about the firewall/iptables information, of course.
 
Hey all,

I thought I'd give my 2 cents here, as I had been struggling with this for a while, and below is what worked for me:
- Assume that the Proxmox host is 192.168.0.100 and a container is, say, 192.168.0.101 on vmbr0

I added the following lines on the host in /etc/network/interfaces and rebooted the PVE node:
Code:
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '192.168.0.0/24' -o vmbr0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '192.168.0.0./24' -o vmbr0 -j MASQUERADE

I hope this helps someone
 
Well, I can confirm that these really are your 2 cents, given that the configuration is identical to mine and you're just repeating what I had already posted, only with a different subnet (and the second rule is wrong anyway, because you've got a stray dot after the IP); in my setup ip_forward is also permanently enabled through sysctl.
So if you think repeating what's already been posted actually helps, I can guarantee you it doesn't. It might help if the thread gets bumped and draws more attention, though, so in that sense only you might have helped.
 
After 5 months I'm still none the wiser. If anyone has any relevant ideas, I'd be happy to read them. I really don't understand why the packets are not being routed.
With tcpdump running on vmbr0 (the internet-facing interface), I can see the ICMP packets going in both directions. So the problem is the host's internal forwarding - the packets are not being forwarded back to vmbr1.
Code:
root@pve1:~# ip route
default via 192.168.111.1 dev vmbr0
10.10.111.0/24 dev vmbr1 proto kernel scope link src 10.10.111.1
192.168.111.0/24 dev vmbr0 proto kernel scope link src 192.168.111.16
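For reference, these are roughly the checks I've been running while pinging an external address from the VM (the external IP is arbitrary; conntrack comes from the conntrack package):
Code:
# on the host: the echo request leaves, but the reply never shows up here
tcpdump -ni vmbr1 icmp
# here both the request and the reply are visible
tcpdump -ni vmbr0 icmp
# does conntrack hold a NAT entry for the VM's ping?
conntrack -L -p icmp | grep 10.10.111.111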
 
The problem was that DHCP was activated on vmbr0 (the internet-facing bridge). For some reason, Linux (I'm guessing it's not a Proxmox thing; I haven't tested it on a "plain" Linux yet) has a problem combining NAT with a DHCP-configured bridge.
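In case it helps anyone else: the fix amounted to giving vmbr0 a static address instead of letting it use DHCP, roughly like this in /etc/network/interfaces (addresses taken from earlier in the thread, so adjust to your own network):
Code:
auto vmbr0
iface vmbr0 inet static
    address 192.168.111.16
    netmask 255.255.255.0
    gateway 192.168.111.1
    bridge-ports enp5s0
    bridge-stp off
    bridge-fd 0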
 
