[TUTORIAL] Using NGINX as Reverse Proxy Externally

hillefied
Mar 8, 2022
I've spent a considerable amount of time getting this to work exactly like I want, with the exception of obtaining a CA-signed certificate to get rid of the security-risk warning page in my browser.

In my Proxmox, I have multiple servers, but for the sake of simplifying this tutorial, I'll discuss the following: a DDNS server, a PRTG monitoring server, a simple web server, and an NGINX server. I'll conclude with the proxy passthroughs I actually do.

PRTG by Paessler is a great way to monitor SOHO, Medium, Large, and Multi-site Enterprise infrastructure. It's free to use for up to 100 sensors. Check them out at https://www.paessler.com/prtg (not sponsored).

My residential internet service has a dynamically changing public IP address. Using a DDNS server is pretty straightforward: I set up an LXC Debian 10 container, update apt, and install ddclient. Your domain host will have configuration settings and credentials for ddclient. It is a means of DNS-resolving an FQDN for my hosted domain (.com) to the dynamic IP that my home ISP gives me. If you have a static IP, this isn't necessary. Their website is a great resource for configuration: https://ddclient.net/.
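For reference, a minimal /etc/ddclient.conf could look like the sketch below. The protocol, server, and credential values are placeholders; your domain host's documentation provides the real ones.

```
# /etc/ddclient.conf: hypothetical example for a generic provider.
# Substitute the protocol and credentials your domain host documents.
daemon=300                          # re-check the public IP every 5 minutes
use=web, web=checkip.dyndns.org     # discover the current public IP via the web
protocol=dyndns2                    # protocol depends on your DNS provider
server=members.dyndns.org           # provider's update endpoint
login=your-username
password='your-password'
www.mydomain.com                    # hostname to keep updated
```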

Next up is PRTG. PRTG is only an example server that I'll be using NGINX to proxy to; you could use two different web servers to test, but PRTG is a good example of configuring SSL (443) settings in NGINX. PRTG can only run on Windows. I have mine running on Windows Server 2016, but Windows 10 Pro works just as well. This is also a Proxmox VM. Once you've gone through the setup and configuration of PRTG based on their guides, we can move on to NGINX.

Now, for the sake of sanity, I'll be going over two things regarding NGINX: one is self-signed certs (or CA certs) and the other is the site configuration file. There are tons of guides out there to simply stand up an NGINX server and get you to the point of creating and editing the site configuration.

In my setup, I always edit /etc/nginx/sites-enabled/"site-config".conf. "site-config" will be whatever your site config file is named. Nano is my preferred editor.

root@nginx-server:~# nano /etc/nginx/sites-enabled/site-config.conf

The following is the only thing I need for my FQDN to resolve to server(s) from the internet. I do not use the upstream directive, as this works with or without it. This is possible because my NGINX server is the web host for all internet traffic coming into my IP: my router forwards ports 80 and 443 to NGINX, and NGINX then directs traffic to whatever server the FQDN is requesting. I'll explain:

First up, the directive below tells any traffic on port 80 addressed to a wildcard (*) sub-domain of my domain, or simply to my domain itself, to redirect (return 301) to www.mydomain.com.
server {
    listen 80;
    server_name *.mydomain.com mydomain.com;
    return 301 http://www.mydomain.com;
}

If I go to foobar.mydomain.com or bar1.mydomain.com, it will redirect to www.mydomain.com. This is important so that traffic always ends up at a webpage instead of being given an error, like a 404. Of course, you could also have this go to a webpage designed for 404 traffic.
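If you'd rather serve a dedicated error page than redirect everything to www, a sketch of that alternative looks like the following. The /var/www/error path and the 404.html filename are assumptions; adjust them to wherever your error page lives.

```nginx
server {
    listen 80;
    server_name *.mydomain.com mydomain.com;
    root /var/www/error;            # hypothetical directory holding the error page
    error_page 404 /404.html;       # serve a custom page instead of the stock 404
    location / {
        return 404;                 # unknown sub-domains get the custom 404 page
    }
    location = /404.html {
        internal;                   # only reachable via the error_page redirect
    }
}
```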

Next up, we repeat the previous directive, changing the listening port to 443 ssl. This way, traffic on port 443 also ends up where you want it. (Note that because this listener uses the ssl parameter, NGINX still needs ssl_certificate and ssl_certificate_key defined for the TLS handshake to succeed before the redirect can be sent.)
server {
    listen 443 ssl;
    server_name *.mydomain.com mydomain.com;
    return 301 http://www.mydomain.com;
}

Now the web server. My webserver is just a personal site and I haven't obtained a CA cert so it's just for my own personal fun. It is on port 80. The previous directives are leading traffic to this point.
server {
    listen 80;
    server_name www.mydomain.com;
    location / {
        proxy_pass http://10.10.10.10/;
    }
}

We're telling port 80 traffic that is literally going to http://www.mydomain.com to proxy through NGINX, and because the previous two directives lead all traffic to that same address, all traffic will load the location of the web server. The location directive points to the internal address of the web server, http://10.10.10.10/ (your web server's internal IP will likely be different).

Now that there's a grasp on the flow, we can expand on our wildcard and use a sub-domain to route to another server. This one is a little tricky, since we're using a self-signed certificate. (That certificate can be applied to domain computers through group policy so that domain-joined machines can reach the server's web page without the security-risk warning page, but that won't be covered here.)

First, we need to tell port 80 traffic going to monitoring.mydomain.com (our sub-domain) to forward elsewhere, specifically to port 443 (https://) via the return 301 directive.
server {
    listen 80 default_server;
    server_name monitoring.mydomain.com;
    return 301 https://monitoring.mydomain.com$request_uri;
}

Even though we have a directive up top that says wildcard sub-domains go to www, because we are listing another server with a specific sub-domain, it doesn't qualify as a wildcard anymore; an exact server_name match wins over the wildcard, so this FQDN gets its own routing.

Here is the HTTPS where we use SSL self-signed certs listed in the parent directory of /etc/nginx/.
server {
    listen 443 ssl default_server;
    server_name monitoring.mydomain.com;
    #ssl on;
    ssl_certificate monitoring.mydomain.com.cert;
    ssl_certificate_key monitoring.mydomain.com.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_pass https://10.10.20.10/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
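If you haven't generated the cert/key pair referenced above yet, openssl can create a self-signed pair in one step. The file names match the config above; the 365-day lifetime and 2048-bit key size are just reasonable defaults.

```shell
# Create a self-signed certificate and key for monitoring.mydomain.com.
# -nodes leaves the key unencrypted so NGINX can read it without a passphrase.
openssl req -x509 -nodes -newkey rsa:2048 \
    -keyout monitoring.mydomain.com.key \
    -out monitoring.mydomain.com.cert \
    -days 365 \
    -subj "/CN=monitoring.mydomain.com"

# Sanity check: print the subject of the certificate we just created.
openssl x509 -in monitoring.mydomain.com.cert -noout -subject
```

Copy both files into /etc/nginx/ (the parent directory mentioned above) and reload NGINX afterwards, e.g. with `nginx -s reload`.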


Some of the keen among you may have questions about servers that require specific ports. This is probably one of the greatest uses of an NGINX reverse proxy: it allows you to pass 80 and 443 traffic (the standard web ports) to any server on any port that you have set up internally. Proxmox is a good example, as it is on port 8006 by default.

You know the drill by now. We're going to redirect port 80 traffic to port 443 (https://).
server {
    listen 80;
    server_name proxmox.mydomain.com;
    return 301 https://proxmox.mydomain.com$request_uri;
}

Now that traffic is being redirected, we'll tell it what location to load, plus some extra configuration to ensure popup windows resolve appropriately. Note that we are directing 443 traffic to load port 8006 in the proxy_pass directive. This means that when you load https://proxmox.mydomain.com, you won't see port 8006 in your address bar; NGINX is handling it.
server {
    listen 443 ssl;
    server_name proxmox.mydomain.com;
    #ssl on;
    location / {
        proxy_pass https://10.10.90.10:8006$request_uri;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

The ssl on directive is commented out because it has been deprecated in current NGINX versions; the ssl parameter on the listen directive replaces it, so the commented line is only a reminder and can be removed entirely.

From here, you can stand up numerous servers and still be able to access them behind one single public IP address. I also manage my own internal DNS in a Windows Server Domain environment so my domain is the same as my hosted domain. In my DNS server configuration, all of the sub-domain names (ie, monitoring, proxmox, www, etc.) are all pointing to my NGINX server's internal IP address. This way, it doesn't matter if I'm trying to get to the server internally or externally, I only need one web address.
 
Thanks for the great write-up! Is there any specific reason why you decided to use a self-signed TLS certificate instead of - for instance - Let's Encrypt?
 
That's an excellent question. There are 2 reasons.

First, most of the servers I access are only for me or other trusted individuals who are firmly aware that the domain they are connecting to doesn't have a CA cert. The web server, though, I do wish had a CA cert, so I have been wanting to get one to apply; I just haven't dived into it yet.

The second reason is that I've had some issues installing some of the dependencies needed for Certbot in an LXC because of some limitations in container-to-hardware I/O. I am actively trying to get this squared away, but as I've said, it hasn't been a priority. If you know of any resources to help out, I would greatly appreciate it.

Oh, there's also another tidbit I can include that can help protect your NGINX server. You can add the "allow" and "deny" directives to limit connection source IPs to your internal servers. Since I primarily connect to my home-built servers from my work, which has a static IP, I simply add the directives like so:

server {
    listen 443 ssl;
    server_name proxmox.mydomain.com;
    #ssl on;
    location / {
        proxy_pass https://10.10.90.10:8006$request_uri;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    allow 70.80.90.100;
    allow 192.168.0.0/16;
    deny all;
}

The "allow" directive permits connections only from the source IP(s) listed, while the "deny" directive rejects everything else, very much like a firewall access rule. Be sure "deny all" is last.
 
Finally got a decent process down, and now all of my subdomains are encrypted using a wildcard cert from Let's Encrypt. The reason I was having issues is that the LXC for my web server was running Ubuntu Server 21.04, which apparently is EoL, so the certbot libraries wouldn't update. I had to move my site files off of the LXC, burn the LXC, and rebuild my Apache server on a Debian 10 LTS LXC. After moving my files back over, going through the process of enabling SSL on Apache, and updating my NGINX server to handle all reverse proxying for my subdomains, I can now use that web server certificate on NGINX, and all of my subdomains have SSL certs from a reputable CA, Let's Encrypt.

It was a bit of a complex setup since I'm using a wildcard, but it is all working perfectly.
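For anyone wiring this up themselves, reusing one wildcard cert across sub-domains in NGINX is mostly a matter of pointing every 443 server block at the same pair of files. A minimal sketch, assuming certbot's standard /etc/letsencrypt/live/mydomain.com/ directory layout and a placeholder backend IP:

```nginx
server {
    listen 443 ssl;
    server_name monitoring.mydomain.com;
    # One wildcard cert covers *.mydomain.com, so each sub-domain's
    # server block can reference the same certificate files.
    ssl_certificate     /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    location / {
        proxy_pass https://10.10.20.10/;   # placeholder internal backend
    }
}
```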

If I need to go through that process, I'll post. +1 if you need it!
 
I use docker with jc21/nginx-proxy-manager on a separate VM. Easy as pie and just as tasty.

Willing to accept other methods work just as well though.
 
FYI, I use OPNsense in a VM. You get a firewall, and it comes with nginx, spam, ddclient, and Let's Encrypt plugins. Setup is all via the GUI, so it's very straightforward to organise routing and proxy passes.
 
I have pfSense, which can perform several different functions, including those you mention, with plugins, but I use it strictly as a VPN concentrator, where I use NPS and RADIUS to authenticate connections against a user database in a domain controller. It may sound like overkill, but I like to have separate servers serving only one, maybe two, functions instead of one server trying to perform several. That way, any single failed function can be rebuilt very quickly.

Call me paranoid or Mr. Overkill .. but I think it's mostly justifiable. Below is a list of my servers and their functions.

2 separate Debian LXCs updating DDNS (Google Domains) for my domain and subdomains (wildcard) from my dynamic ISP.
2 separate Debian LXCs for Minecraft servers (Bedrock & Java).
1 Debian VM strictly for Certbot updates for Let's Encrypt.
1 Debian LXC for my Apache webserver (Self-Hosted Website).
1 Debian LXC for NGINX server (exclusively serving as a single web server which reverse-proxies to several internal servers so I can "subdomain" several web servers over port 443 "HTTPS"). As an example, Proxmox has a web server for administering VMs and VM Host configurations. I use NGINX to proxy traffic requests from https://proxmox.mydomain.com externally to my Proxmox web interface internally. All web requests for any subdomain under my domain AND my domain itself go through NGINX first over port 80/443 and NGINX does the rest.
1 Debian LXC for CSRs (certificate signing request) using OpenSSL typically for self-signed certs. This has been rendered moot with the recent Certbot server.

If any one of these fails, I can simply rebuild it. Most of the LXCs are so lightweight that I could probably run 10 more without much effect on my VM host system. If I consolidated, I'd run the risk of losing more than I bargained for.
 
How do you use your docker? What service or server does it run?
YAML:
---
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    container_name: npm
    restart: unless-stopped
    ports:
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81' # Admin Web Port
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
      - ./logs:/data/logs
    healthcheck:
      test: ["CMD", "/bin/check-health"]
      interval: 5s

That is my Docker Compose file. I connect to the manager on port 81 to add the proxy hosts and Let's Encrypt certs. Since Docker isn't recommended directly on Proxmox, I have a VM running Docker with a static IP (in my case, DHCP with a reservation). I add a static DNS entry on my local router pointing "proxmox" to whatever that IP address is.

Then, whenever I type "proxmox" into a web browser, it goes to the host running Nginx Proxy Manager, which reads the host "proxmox" and sends it to the appropriate place, in this case http://fred:8006, with a valid SSL certificate.
 
Hi there, this is exactly what I have, but for some reason it was working when I installed proxmox, but then I started getting:

502 Bad Gateway (openresty)
My Nginx Proxy Manager is running on Docker on a Raspberry Pi 3.

My IP address for Proxmox is 192.168.3.10, so I have set up:

Scheme: http, IP: 192.168.3.10, Forward port: 8006

Any tip? Thank you!
 
Hi, thanks for the howto. Here is my config; everything is working, including noVNC.
You can set up certbot for SSL:
# certbot certonly  # for the first certificate generation

server {
    listen [ip]; #listen IP address#
    server_name proxy1.domain; #your domain#
    #ssl on;
    location / {
        proxy_pass https://192.168.1.150:8006$request_uri; #local IP of the Proxmox host#
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    location ^~ /.well-known/ {
        try_files $uri /;
    }
    listen [ip]:443 ssl;
    ssl_certificate /etc/letsencrypt/live/proxy1.domain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/proxy1.domain/privkey.pem;
}
 
I am a huge advocate of PRTG and have used it for well over 20 years now (we've got something over 30,000 sensors monitored with it!) but I'm still profoundly irritated that they've not got a core server install for Linux yet. Part of it is that I think the Core is still written in Borland Delphi (or whatever it's called these days) which is a solidly Windows-oriented dev platform. As a company they've done countless re-hashes of the UI, but I do wish they'd just bite the bullet and write a new core server which is a bit more modern and a bit less Windows!

As it happens, I do need a monitoring system for home, but I'm afraid that it won't be PRTG because of the Windows nature of it - just can't be bothered to maintain it. If, however, your environment is still predominantly Windows, I can definitely recommend it.
 
Is there any solution that doesn't require manually changing specific tools and services on the Proxmox nodes? I don't want to change anything, because I've only had bad experiences with changes like this surviving updates between major versions, among other things.

Simply reverse proxying to port 8006 with SSL doesn't work without issues.

thanks
 
I was able to achieve a reverse proxy using HaltDOS CE with the following steps:

1. Installed HaltDOS CE from the Linode Marketplace. I used deb11. As an FYI, the Let's Encrypt SSL management is broken in the verify/install step. I emailed support and they replied that this has been patched, but you need to manually install the package to correct it:

curl -s -k -o hd-community-controller.deb https://binary.haltdos.com/community/waf/gui/hd-community-controller-x86_64.deb
dpkg -i hd-community-controller.deb

2. Create server health monitors in haltdos for all proxmox nodes port 8006, tcp

3. Create servers in haltdos, referencing health monitors, set as backup true

4. in waf - operational, scroll to the bottom of the page and add your domain alias for your proxmox nodes -- ie proxmox.example.com

5. in waf - rules - redirect, create a rule for https://proxmox.example.com/$2 to https://proxmox.host.here:8006/$2 with response 302

You can test by adding the IP / fqdn to your /etc/hosts, or in windows c:\windows\system32\drivers\etc\hosts, before deploying the actual DNS A record. Fire up a browser and point to your reverse proxy fqdn.

--edit-- 6. Using @linhu's recommendations above corrected the noVNC console connectivity issue. To set those in HaltDOS, I navigated to rules - headers and created two separate rules.

Rule Action = Add Header
Attribute Name = Upgrade
Attribute Header = $http_upgrade

and..

Rule Action = Add Header
Attribute Name = Connection
Attribute Header = upgrade

Enable websocket support in haltdos waf - operational.

I'm probably still missing something as I'm occasionally getting 'failed to connect' on the console screen between refreshes.

-Matt
 
Thanks for this write-up and the contributions. Seems I have a few more rabbit holes to explore. Any updates/tweaks since the latest updates of PVE?
 
