I've spent a considerable amount of time getting this to work exactly like I want with the exception of having a CA certificate to get rid of the Security Risk splash page in my browser.
In my Proxmox environment I have multiple servers, but for the sake of simplifying this tutorial I'll discuss the following: a DDNS server, a PRTG monitoring server, a simple web server, and an NGINX server. I'll conclude with what I actually proxy.
PRTG by Paessler is a great way to monitor anything from a SOHO network up to multi-site enterprise infrastructure. It's free to use for up to 100 sensors. Check them out at https://www.paessler.com/prtg (not sponsored).
My residential internet service has a dynamically changing public IP address, but using a DDNS server is pretty straightforward: I set up a Debian 10 LXC container, update apt, and install ddclient. Your domain host will have configuration settings and credentials for ddclient. It keeps the DNS record for my hosted domain (.com) in sync with the dynamic IP my home ISP gives me; if you have a static IP, this isn't necessary. Their website, https://ddclient.net/, is a great configuration resource.
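For reference, installation is just apt, and a minimal ddclient.conf sketch follows. Every value here is a placeholder; your DNS host's ddclient documentation will give the real protocol, server, and credentials (this sketch assumes the common dyndns2 protocol):

root@ddns-server:~# apt update && apt install -y ddclient
root@ddns-server:~# nano /etc/ddclient.conf

# /etc/ddclient.conf -- minimal sketch; every value below is a placeholder
daemon=300                        # re-check the public IP every 5 minutes
ssl=yes
protocol=dyndns2                  # ask your DNS host which protocol to use
use=web, web=checkip.dyndns.org   # discover the public IP from behind NAT
server=members.dyndns.org         # your DNS host's update server
login=your-username
password='your-password'
mydomain.com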
Next up is PRTG. PRTG is only an example server that I'll be using NGINX to proxy to; you could use any two different web servers to test, but PRTG is a good example for configuring SSL (443) settings in NGINX. PRTG only runs on Windows. I have mine on Windows Server 2016 (also a Proxmox VM), but Windows 10 Pro works just as well. Once you've gone through PRTG's setup and configuration guides, we can move on to NGINX.
Now, for the sake of sanity, I'll be going over two things regarding NGINX: one is self-signed certs (or CA certs), and the other is the site configuration file. There are tons of guides out there for simply standing up an NGINX server and getting to the point of creating and editing the site configuration.
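Starting with the cert: here's a minimal self-signed example using openssl. The filenames match what the monitoring config further down references; the validity period, key size, and CN are up to you:

root@nginx-server:~# openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/nginx/monitoring.mydomain.com.key \
    -out /etc/nginx/monitoring.mydomain.com.cert \
    -subj "/CN=monitoring.mydomain.com"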
In my setup, I always edit /etc/nginx/sites-enabled/"site-config".conf, where "site-config" is whatever your site config file is named. Nano is my preferred editor.
root@nginx-server:~# nano /etc/nginx/sites-enabled/site-config.conf
The following is the only thing I need for my FQDN to resolve to my server(s) from the internet. I don't use the upstream directive, as this works with or without it. This is possible because my NGINX server is the entry point for all internet traffic coming into my IP: my router forwards ports 80 and 443 to NGINX, and NGINX then directs traffic to whichever server the requested FQDN belongs to. I'll explain:
First up, the directive below tells any traffic on port 80 addressed to a wildcard (*) sub-domain of my domain, or to the bare domain itself, to redirect (return 301) to www.mydomain.com.
server {
    listen 80;
    server_name *.mydomain.com mydomain.com;
    return 301 http://www.mydomain.com;
}
If I go to foobar.mydomain.com or bar1.mydomain.com, it will redirect to www.mydomain.com. This is important so that traffic always ends up at a webpage instead of being handed an error like a 404. Of course, you could point this at a page designed for 404 traffic instead.
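You can sanity-check the redirect with curl from any machine that resolves the name; the output should look something like this:

root@nginx-server:~# curl -sI http://foobar.mydomain.com
HTTP/1.1 301 Moved Permanently
Server: nginx
Location: http://www.mydomain.com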
Next up, we repeat the previous directive, changing the listening port to 443 ssl. This way, traffic on port 443 ends up where you want it. One catch: NGINX requires a certificate on any ssl listener, even one that only redirects, so the self-signed pair generated earlier is reused here.
server {
    listen 443 ssl;
    server_name *.mydomain.com mydomain.com;
    # an ssl listener needs a certificate even if it only redirects;
    # reusing the self-signed pair generated earlier
    ssl_certificate monitoring.mydomain.com.cert;
    ssl_certificate_key monitoring.mydomain.com.key;
    return 301 http://www.mydomain.com;
}
Now the web server. My web server is just a personal site and I haven't obtained a CA cert, so it's just for my own personal fun. It listens on port 80, and the previous directives have been leading traffic to this point.
server {
    listen 80;
    server_name www.mydomain.com;

    location / {
        proxy_pass http://10.10.10.10/;
    }
}
We're telling port 80 traffic that is literally going to http://www.mydomain.com to proxy through NGINX, and because the previous two directives were sending all traffic to that same address, all traffic will load the web server. The location directive points to the internal address of the web server, http://10.10.10.10/ (your web server's internal IP will likely be different).
Now that there's a grasp on the flow, we can expand on our wildcard and use a sub-domain to route to another server. This one is a little tricky since we're using a self-signed certificate. That certificate can be pushed to domain computers through Group Policy so that domain-joined machines reach the server's page without the security-risk splash page, but that won't be covered here.
First, we need to tell port 80 traffic going to monitoring.mydomain.com (our sub-domain) to forward elsewhere, specifically to port 443 (https://), via the return 301 directive.
server {
    listen 80;
    server_name monitoring.mydomain.com;
    return 301 https://monitoring.mydomain.com$request_uri;
}
Even though we have a directive up top that sends wildcard sub-domains to www, NGINX prefers an exact server_name match over a wildcard, so requests for monitoring.mydomain.com land here instead of at the catch-all.
Here is the HTTPS block, where we use the self-signed cert and key that live in /etc/nginx/, the parent directory of sites-enabled/.
server {
    listen 443 ssl;
    server_name monitoring.mydomain.com;

    #ssl on;
    ssl_certificate monitoring.mydomain.com.cert;
    ssl_certificate_key monitoring.mydomain.com.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://10.10.20.10/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Some of the keener among you may have questions about servers that require specific ports. This is probably one of the greatest uses of an NGINX reverse proxy: it lets you pass standard web traffic on 80 and 443 to any internal server on any port. Proxmox is a good example, since its web UI listens on port 8006 by default.
You know the drill by now. We're going to redirect port 80 traffic to port 443 (https://).
server {
    listen 80;
    server_name proxmox.mydomain.com;
    return 301 https://proxmox.mydomain.com$request_uri;
}
Now that port 80 is being redirected, we'll tell it what location to load, plus some extra configuration so popup windows (Proxmox's consoles, which run over websockets) resolve appropriately. Make note that we direct 443 traffic to port 8006 in the proxy_pass directive. This means when you load https://proxmox.mydomain.com, you won't see port 8006 in your address bar; NGINX is handling it.
server {
    listen 443 ssl;
    server_name proxmox.mydomain.com;

    #ssl on;
    # an ssl listener needs a certificate; the self-signed pair is reused here
    ssl_certificate monitoring.mydomain.com.cert;
    ssl_certificate_key monitoring.mydomain.com.key;

    location / {
        proxy_pass https://10.10.90.10:8006$request_uri;
        # websocket upgrades (the console popups) require HTTP/1.1
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
You'll notice the ssl on directive is commented out. It's deprecated on newer NGINX versions (listen 443 ssl replaces it), but if your version complains, uncommenting it may help.
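Whenever you change the site config, test and reload before trying it in a browser:

root@nginx-server:~# nginx -t && systemctl reload nginx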
From here, you can stand up numerous servers and still access them all behind one single public IP address. I also manage my own internal DNS in a Windows Server domain environment, and my AD domain is the same as my hosted domain. In my DNS server configuration, all of the sub-domain names (e.g., monitoring, proxmox, www) point to my NGINX server's internal IP address. This way, it doesn't matter whether I'm trying to reach a server internally or externally; I only need one web address.
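For reference, a sketch of those internal records using the DNS Server PowerShell cmdlets on the Windows DNS server; the zone name and the NGINX internal IP (10.10.5.5 here) are assumptions, so substitute your own:

# run on the Windows DNS server; 10.10.5.5 stands in for the NGINX internal IP
Add-DnsServerResourceRecordA -ZoneName "mydomain.com" -Name "www" -IPv4Address "10.10.5.5"
Add-DnsServerResourceRecordA -ZoneName "mydomain.com" -Name "monitoring" -IPv4Address "10.10.5.5"
Add-DnsServerResourceRecordA -ZoneName "mydomain.com" -Name "proxmox" -IPv4Address "10.10.5.5"

With those records in place, internal clients resolve the same names straight to NGINX.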