Trouble accessing web server in a VM from the internet with a single NIC setup.

MrAdequate

New Member
Jul 4, 2024
I have a bit of a puzzle here.

I'm trying to make a web server running in a VM internet accessible.
Doesn't have to be pretty. I don't even need SSL. Just need to see the "Hello World" page on port 80.
Once I get that far I'm pretty sure I can build upon it myself.

The server is in a colo, to which I do not have physical access.
I am working with a single physical network card, which is 10G. We'll call it eno123.
I have a single publicly routable IPv4 address for this server. We'll call this 150.0.0.150.
Maybe I can get more in the future, but not many, and I need to get this working with one for now.
The interface with this address must be tagged vlan 6 (named eno123.6)

I have a private subnet, which we'll call 10.0.0.0/24
This is tagged vlan 10, running on the same physical network card (named eno123.10)

The Proxmox node has a bridge, which we'll call vmbr0, which has bridge port eno123.10 and is assigned IP 10.0.0.14.
If I change the bridge port, I can no longer access the node and have to resort to iDRAC to recover.

vmbr0 is NOT vlan-aware. Yes I've tried turning it on. Doesn't seem to make a difference.

There is a VM called http-server (10.0.0.5/24) which has apache2 running on port 80.

ufw within the VM is set to allow all
VM Firewall rules are set to ACCEPT HTTP on all. Firewall is on (and checked for the virtual network card as well)
Node Firewall rules are set to ACCEPT HTTP on all. Firewall is on.
Datacenter Firewall rules are set to ACCEPT HTTP on all. Firewall is on.

If I boot up another VM that has a desktop environment (which we'll call ubuntu-desktop), I can see the "Hello World" page on 10.0.0.5:80, so VMs can talk to each other no problem.

I can set up basic masquerading to give VMs internet access for downloading updates and whatnot (iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -o eno123.6 -j MASQUERADE)

I have an account on a server running on bare metal in the same rack with which I access the Proxmox UI via SSH tunnel.
This account is limited, and can basically just be used for tunneling.
The tunnel requires vlan 10, which I cannot change.
Using this old bare metal server I can ping VMs running in the Proxmox server. I am certain it is running on vlan 10, so this should imply that the VMs are also on vlan 10.
I can also tunnel to a VM through this old bare metal server and open the "Hello World" page in a local browser on 127.0.0.1.
This would imply that any device on the subnet can access http-server (assuming they're tagged vlan 10).
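For reference, the tunnel described above is roughly this (hostnames and the local port are placeholders, not the real values):

```shell
# Hypothetical sketch of the SSH tunnel through the bare-metal jump host.
# Forwards local port 8080 to the VM's port 80 via the limited account:
ssh -N -L 8080:10.0.0.5:80 limited-user@jump-host.example
# Then open http://127.0.0.1:8080 in a local browser.
```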

I need, at some point, to use NAT to direct internet traffic to the VM. Yes, in the future this is best done with a reverse proxy, but any future reverse proxy will *also* be a VM running on this Proxmox node, so it'll have the same issue.

This is what I've tried:

iptables -t nat -A PREROUTING -i eno123.6 -p tcp -d 150.0.0.150 --dport 80 -j DNAT --to-destination 10.0.0.5:80
iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.5 --dport 80 -j SNAT --to-source 150.0.0.150

Didn't work.
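One way to narrow down *where* it fails (a sketch, assuming the two rules above are loaded) is to watch the NAT rule counters while making a request from outside; a packet counter stuck at zero means traffic never matches that rule:

```shell
# Show per-rule packet/byte counters for the NAT chains:
iptables -t nat -L PREROUTING -v -n --line-numbers
iptables -t nat -L POSTROUTING -v -n --line-numbers

# Also confirm the host is actually forwarding:
sysctl net.ipv4.ip_forward
```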

I've obviously tried a lot of variations. Spent about two days messing about with different combinations of various things turned on/off. No joy, or else I wouldn't be here.

I'm wondering if there's some limitation in Proxmox that makes this particular setup non-viable. Maybe something to do with routing from vlan to vlan with a single physical interface. This one physical connection is having to do a lot of stuff so I can see how that might be the case.

Will I need to get another physical network card in this thing? Or is there some trick I'm not seeing here? Is there some additional secret firewall on Proxmox VE 8.2 that I'm unaware of?

Things I haven't yet tried:
-NATing from 150.0.0.150 to 10.0.0.14, and *then* from 10.0.0.14 to 10.0.0.5 (and back) - seemed kinda silly so I thought I'd ask first.
-Begging the colo for help.
-Just buying another physical network card & cable and hoping there's an empty port in the switch.
 
Here's /etc/network/interfaces

Code:
auto lo
iface lo inet loopback

auto eno123
iface eno123 inet manual

auto eno123.10
iface eno123.10 inet manual

auto eno123.6
iface eno123.6 inet static
        address 150.0.0.150/28
        gateway 150.0.0.145

auto vmbr0
iface vmbr0 inet static
        address 10.0.0.14/24
        bridge-ports eno123.10
        bridge-stp off
        bridge-fd 0

post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -D POSTROUTING -s '10.0.0.0/24' -o eno123.6 -j MASQUERADE
post-up iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -o eno123.6 -j MASQUERADE

Nothing fancy in there. The reason the -D is in post-up rather than post-down is that post-down never seems to execute; this way any existing masquerade rule gets pruned and then re-added, so I don't end up with a pile of duplicate rules.
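An alternative sketch for the same idea: iptables has a `-C` (check) flag, so the rule can be added only when it's absent, avoiding the delete-then-add dance entirely:

```shell
# Append the MASQUERADE rule only if it isn't already present:
iptables -t nat -C POSTROUTING -s 10.0.0.0/24 -o eno123.6 -j MASQUERADE 2>/dev/null \
  || iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eno123.6 -j MASQUERADE
```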
 
I have added something like this on the Proxmox host in the cloud at the colo.
154.X.X.X/32 is the IP of the rented colo server, and 10.10.x.x is the internal network used for single VMs — in my case, for the precise IP of the Nginx-Proxy-Manager. This allows me to have several web servers. It's easy to set up inside Nginx-Proxy-Manager, which then forwards requests based on the domain name or port to the parallel VMs/CTs on the same 10.10.x.x network.

*nat
-A PREROUTING -d 154.X.X.X/32 -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -m multiport --dports 80,81,443,8080 -j DNAT --to-destination 10.10.X.X
-A POSTROUTING -s 10.10.X.0/24 -o vmbr0 -j MASQUERADE

*raw
-A PREROUTING -i fwbr+ -j CT --zone 1

*filter
-A INPUT -s 10.10.X.0/24 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -i vmbr0 -p tcp -m tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
-A INPUT -i vmbr0 -p tcp -m tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
-A INPUT -i vmbr0 -p tcp -m tcp --dport 80 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A INPUT -i vmbr0 -p tcp -m tcp --dport 443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -o vmbr0 -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -o vmbr0 -p tcp -m tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -o vmbr0 -p tcp -m tcp --sport 80 -m conntrack --ctstate ESTABLISHED -j ACCEPT
-A OUTPUT -o vmbr0 -p tcp -m tcp --sport 443 -m conntrack --ctstate ESTABLISHED -j ACCEPT
-A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A OUTPUT -o vmbr0 -p udp -m udp --dport 53 -j ACCEPT
-A OUTPUT -j PVEFW-OUTPUT

COMMIT
 
And in /etc/network/interfaces:
All VMs/CTs use vmbr99 to communicate.

auto vmbr0
iface vmbr0 inet static
address 154.x.x.x/21
gateway 154.X.x.1
bridge-ports ens18
bridge-stp off
bridge-fd 0

auto vmbr99
iface vmbr99 inet static
address 10.10.X.1/24
bridge-ports none
bridge-stp off
bridge-fd 0

post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '10.10.X.0/24' -o vmbr0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '10.10.X.0/24' -o vmbr0 -j MASQUERADE

post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
 
If you get stuck, add "hetzner" to your search string. It was an early cloud provider many started deploying Proxmox on. Several posts on this forum and others, found when adding "hetzner", are precisely about setting up iptables and networking correctly on cloud Proxmox.
 
Implemented the rules above as an if-up.d script. Verified that the rules are in the table. No change.

The main difference between this setup and mine is the use of vlans over a single physical interface. I'm wondering if there's some limitation in Proxmox that makes this not possible.
 
Started to use tcpdump to see what was going on behind the scenes. Traffic *is* being routed inwards to the VM.

Clearly nothing seems to be going back *out*.

There is some sort of checksum error on what are supposed to be the outgoing packets, and this checksum error is happening on both eno123.6 and ens18 (which is what http-server calls its virtual network card).

So the traffic is getting to http-server and then apparently getting messed up in some way I don't understand yet.
iptables is dutifully routing the garbage packets back out into the internet, where I assume they are getting discarded at some point.

Not sure if this will be meaningful to anyone, but here's a sample exchange of packets captured on eno123.6:

Code:
11:39:03.925994 IP (tos 0x48, ttl 112, id 1502, offset 0, flags [DF], proto TCP (6), length 52)
    <my home IP>.52981 > 150.0.0.150.80: Flags [S], cksum 0xf717 (correct), seq 383072524, win 64240, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
11:39:03.926253 IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto TCP (6), length 52)
    150.0.0.150.80 > <my home IP>.52981: Flags [S.], cksum 0x5e07 (incorrect -> 0x0ff9), seq 200465180, ack 383072525, win 64240, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
 
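One thing worth ruling out (an assumption on my part, not something confirmed in the capture): tcpdump marking *outgoing* checksums as "incorrect" is often just TX checksum offload, where the NIC fills in the checksum *after* tcpdump sees the frame. Offload can be toggled off temporarily to test whether the checksums are genuinely bad on the wire:

```shell
# Inspect current offload settings on the physical NIC:
ethtool -k eno123 | grep checksum

# Temporarily disable TX checksum offload as a test (reverts on reboot):
ethtool -K eno123 tx off
```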
Have you removed and disabled all VLANs, to get it going?
My options for the Datacenter and for the Proxmox host PVE are attached.
 

Attachments

  • Datacenter.JPG (36.5 KB)
  • proxmoxhostPVE.JPG (60.3 KB)
I cannot disable the vlans.

vlan 6 is mandated by the colo, and vlan 10 is what the aforementioned old bare metal server is using, so without that I cannot tunnel into the node and have to resort to iDRAC to fix it.

Your theory is that these packets are within frames that are being improperly vlan tagged? Any idea as to how I could actually see the vlan tags on the frames?
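To answer my own sub-question after some reading (so treat this as a sketch, not verified on this box): capturing on the *parent* interface rather than the VLAN sub-interface should show the 802.1Q tags, since the sub-interfaces strip them before handing frames up:

```shell
# Capture tagged frames on the physical NIC and print link-level headers;
# each line should then include "vlan 6" or "vlan 10":
tcpdump -i eno123 -e -nn vlan
```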

My datacenter & node firewall options are identical to your screenshots.
 
OK, some updates.

I managed to finagle another switch port out of the colo, so now I have a separate physical interface for internet traffic. We'll call that eno124.6. So that's one less potential cause for problems.

tcpdump inspection of each interface shows that packets are indeed coming in from the internet over eno124.6 and being routed through vmbr0 into the http-server VM.

Running tcpdump within http-server shows that the packets are arriving from the internet and are being responded to. The packets are indeed going back out through eno124.6.

They are not arriving back at the requester, though. Another oddity is that all packets appear to be zero length.

I'm guessing that something about the iptables rules is buggering up the packets on the way out (or possibly on the way in), and then the switch is dropping them.
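Another test I may try (again an assumption — ens18 is the virtio NIC name inside the VM): offloads on the *VM's* virtual NIC can also make captured packets look mangled or zero-length, so disabling them inside the guest is a cheap experiment:

```shell
# Inside the http-server VM — disable checksum and segmentation offloads
# on the virtio interface as a test (reverts on reboot):
ethtool -K ens18 tx off gso off tso off
ethtool -k ens18 | grep -E 'checksum|segmentation'
```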
 
