[SOLVED] Running three web server guests on private network, one public IP

5. You will not be terminating the SSL sessions on HAProxy, but 'switching' them via their FQDN's and terminating SSL at the servers instead.

Not sure I follow here.

The web servers hosting the homepage and the cloud already had SSL certs from before, so I left them as is.
On the Nginx Proxy Manager I added SSL certs as well for the hosts I proxy, as I couldn't reach them otherwise without the web browsers complaining that no secure connection could be established.

Are you saying I don't need SSL certs on the proxy for the web servers, and that I only need to keep the certs I already have on the web hosts themselves?
 
@adrian_vg Remember, the example that I provided is HAProxy. Regardless, whenever you use a reverse proxy (a.k.a. load balancer) to 'front-end' your client requests into back-end servers, you can configure the proxy for SSL in numerous ways:

1. Terminate SSL at the proxy and run clear text (port 80, for example) to the servers. This gives you a single 'control point' for all SSL traffic management, certificates, cipher sets, etc., but it allows a threat actor to go direct to the servers unencrypted if they are able to penetrate the perimeter. By the way, data breach, meaning the threat actor has not only penetrated but has also been able to extract data, is at ~48% in the USA (over 70% in Canada ... ouch). Anyway, this type of configuration is typically called SSL offload. It used to be touted heavily by load-balancing vendors, but I am not a fan, as you don't get end-to-end encryption, which I think is a mistake strategically.

2. Terminate SSL at the proxy and then re-encrypt the traffic to the back-end servers, with the same certificates and cipher sets hosted on both the proxy and the servers. This configuration does enable end-to-end encryption.

3. Configure an SSL bridge (alternately called SSL pass-through). The logic of this configuration is that a user accessing the 'app' performs the client key exchange directly with the back-end server, and the proxy is simply load balancing the connections for resiliency and scale.
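To make the contrast between #1 and #2 concrete, here is a minimal sketch (not from the thread's actual config; the cert path and server addresses are placeholders). The two options differ only in the backend `server` line:

Code:
frontend ft_https
  # Options #1 and #2: the proxy terminates SSL itself
  bind 192.168.1.2:443 ssl crt /etc/haproxy/certs/site.pem
  mode http
  default_backend bk_web

backend bk_web
  mode http
  # Option #1 (SSL offload): clear text to the server
  server web01 192.168.1.10:80 check
  # Option #2 (re-encrypt): use this line instead for end-to-end encryption
  # server web01 192.168.1.10:443 ssl verify required ca-file /etc/ssl/certs/ca-certificates.crt check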

Now, the example HAProxy (HAP) configuration I provided enables the third option, where HAP simply looks at the requested URL/FQDN and then forwards the user to the appropriate server to handle the request. The SSL client key exchange bypasses HAP, and the SSL session terminates on the server. This configuration allows numerous FQDNs to sit behind the proxy and leverage a single IP address. You can absolutely enable multiple FQDNs behind option #2 with the right configuration ... called URL redirection in some companies.
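To make the 'switching' concrete: in pass-through mode the proxy only peeks at the SNI field of the unencrypted ClientHello, so it never needs the private key. This is what `req_ssl_sni` does inside HAProxy; the sketch below (not HAProxy's actual code, hostnames are placeholders) shows the same extraction in Python, using an in-memory handshake to generate a real ClientHello:

```python
import ssl

def client_hello_bytes(hostname):
    """Drive an in-memory TLS handshake just far enough to capture
    the raw ClientHello a client would send for `hostname`."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
    tls = ctx.wrap_bio(incoming, outgoing, server_hostname=hostname)
    try:
        tls.do_handshake()
    except ssl.SSLWantReadError:
        pass  # expected: no server is answering yet
    return outgoing.read()

def extract_sni(record):
    """Pull the SNI hostname out of a raw TLS ClientHello record,
    without any decryption -- the same trick a pass-through proxy uses."""
    assert record[0] == 0x16                  # content type 22: handshake
    assert record[5] == 0x01                  # handshake type 1: ClientHello
    pos = 9                                    # skip record + handshake headers
    pos += 2 + 32                              # client version + random
    pos += 1 + record[pos]                     # session id
    pos += 2 + int.from_bytes(record[pos:pos + 2], "big")  # cipher suites
    pos += 1 + record[pos]                     # compression methods
    ext_end = pos + 2 + int.from_bytes(record[pos:pos + 2], "big")
    pos += 2
    while pos < ext_end:                       # walk the extensions
        ext_type = int.from_bytes(record[pos:pos + 2], "big")
        ext_len = int.from_bytes(record[pos + 2:pos + 4], "big")
        if ext_type == 0:                      # extension 0 = server_name (SNI)
            name_len = int.from_bytes(record[pos + 7:pos + 9], "big")
            return record[pos + 9:pos + 9 + name_len].decode()
        pos += 4 + ext_len
    return None

print(extract_sni(client_hello_bytes("app01.domain.com")))  # → app01.domain.com
```

Once the proxy has that hostname, routing is just a table lookup; the encrypted bytes themselves are relayed untouched.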

So yes to your question: with option #3 you do not need to host the SSL certs on the proxy. To be clear, I am not recommending #3 over #2 ... just explaining the configuration example that I provided. If, for example, you desired to inspect the traffic with an IDS/IPS product and/or WAF, then decrypting the traffic is required so that the IDS/WAF can inspect the content prior to forwarding it on to the servers, or mitigate an attack.
 
As a quick update: perhaps some of the HAP config was slightly confusing, so here is an updated version that makes it a bit clearer:

Code:
frontend ft_ssl_vip
  bind 192.168.1.2:443
  mode tcp

  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }

  default_backend bk_ssl_default

# Using SNI to make the routing decision
backend bk_ssl_default
  mode tcp

  acl app01 req_ssl_sni -i app01.domain.com
  acl app02 req_ssl_sni -i app02.domain.com
  acl app03 req_ssl_sni -i app03.domain.com

  use-server srv01 if app01
  use-server srv02 if app02
  use-server srv03 if app03

  option ssl-hello-chk
  server srv01 192.168.1.10:443 check
  server srv02 192.168.1.11:443 check
  server srv03 192.168.1.12:443 check
 
Gotcha. Thanks for taking the time to explain!
I'll keep the servers as is for now.

I'm going to lab some more with reverse proxying and containers, as using a full VM for that single task seems so wasteful.

You wouldn't happen to know of a Docker-based reverse-proxy solution using HAProxy, by any chance?
 
Quite welcome :)

Let me address the question this way ... below is an image of my Proxmox setup. You'll note that I'm running an LXC container with Ubuntu 20.04 as the OS and simply installed HAP in the container with very few resources. As previously noted, it's super duper easy to scale the CPU or RAM with a few clicks with this method, coupled with the fact that LXC is a system container that I can 'engineer' better, IMO.

I'm personally not a fan of Docker, even though it's quite slick. The fundamental reason is that I leverage U20.04 for the baseline image and use UFW to control 'goes-in' and 'goes-out' without too much fuss. The challenge with Docker is that it completely bypasses UFW and writes rules directly to iptables. I find this a large security risk: you can think you've configured UFW to allow only certain IPs/subnets to access the Docker host, but Docker's published ports ignore those UFW rules entirely. Of course the wise person will always validate access ... I learned the hard way :)

For example, I had configured a Docker container for a 'service' and then configured UFW to allow port 443 only from my specific IP, but lo and behold, the service was published to the world, and it took me a bit to figure out what was going on.

Google "docker bypassing ufw" and you'll see what I mean. Sure, there are ways to solve that issue, but the notion that Docker manipulates iptables and bypasses UFW is a concern that I prefer to avoid. So in essence, I've honestly not searched for a HAP Docker solution.
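For anyone who does want Docker plus host firewalling: the escape hatch Docker documents is the DOCKER-USER chain, which is evaluated before Docker's own iptables rules, so drops placed there apply even to published ports. A sketch (the interface name and subnet are placeholders for your WAN interface and trusted LAN):

Code:
# Allow only the trusted subnet to reach published container ports;
# everything else arriving on eth0 is dropped before Docker's rules run.
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP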

Of course, if you're using the native firewall functionality in Proxmox, this isn't too much of an issue, as you can engineer the firewall specific to the VM/container. I leverage the native firewall and also leverage UFW ... so to some degree the security 'service chain' is more comprehensive.



[Image: example.jpg — Proxmox setup showing the HAProxy LXC container]
 
Greetings @adrian_vg :

A few thoughts for you ...

Container: ubuntu 20.04
vCPU: 1
RAM: 256 MB works, but you can scale up as traffic demands increase

NOTES
1. LXC container to keep it really light and have the flex to dynamically scale up vCPU/RAM as needed
2. Ubuntu 20.04 as it's dirt simple
3. # apt install haproxy
5. If all of your sites are SSL (assuming they would be) and you're using https://letsencrypt.org/ for free certs on each container/VM (which I do), you'll be using layer 4 (L4) load balancing with TCP.
6. You will not be terminating the SSL sessions on HAProxy, but 'switching' them via their FQDNs and terminating SSL at the servers instead.
7. This configuration allows you to configure your DNS to point numerous FQDNs (lots of them) at a single IP address.

Regarding leveraging HAProxy and a sample configuration to accomplish the objective, I have setup a simple container and have the following configuration defined for HAProxy:
Code:
root@haproxy:/etc/haproxy# more haproxy.cfg
global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
#       ca-base /etc/ssl/certs
#       crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
#        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
#        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
#        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

# Single VIP
frontend ft_ssl_vip
  bind 192.168.1.2:443
  mode tcp

  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }

  default_backend bk_ssl_default

# Using SNI to make the routing decision
backend bk_ssl_default
  mode tcp

  acl app01 req_ssl_sni -i app01.domain.com
  acl app02 req_ssl_sni -i app02.domain.com
  acl app03 req_ssl_sni -i app03.domain.com

  use-server app01 if app01
  use-server app02 if app02
  use-server app03 if app03

  option ssl-hello-chk
  server app01 192.168.1.10:443 check
  server app02 192.168.1.11:443 check
  server app03 192.168.1.12:443 check

This configuration works like a charm. Hope it proves helpful to you.

Andy


Dear Adrian, thanks a lot for posting your questions and research.
Dear Andy, thanks a lot for posting this solution.

After a day of study and research, I managed to build what I needed with Andy's explanations.

This is my setup :

- home server with Proxmox
- 1 Ubuntu 20.04 VM with Nextcloud as a snap
- 1 Ubuntu 20.04 VM with EspoCRM running on Apache and MariaDB
- 1 Ubuntu 20.04 LXC container with HAProxy

I only have 1 public IP that my ISP provides.

I use freedns.afraid.org to get two subdomains, one for my Nextcloud instance and one for my EspoCRM.

Afraid.org points my two subdomains at my public IP.

In my home router, the SSL port 465 is forwarded to my LXC container running HAProxy.

In my haproxy.cfg, I route:
- my Nextcloud subdomain to my Nextcloud VM's internal IP
- my EspoCRM subdomain to my EspoCRM VM's internal IP

For the use of SSL, I have used Let's Encrypt.
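(Editorial note, not part of the original post: with SSL pass-through the certificates live on each back-end VM, so one common approach is a standalone certbot run per host. The domain below is a placeholder, and `--standalone` requires port 80 on that VM to be reachable for the HTTP-01 challenge.)

Code:
# Run on each back-end VM; nextcloud.example.com is a placeholder domain.
# --standalone spins up a temporary web server for the HTTP-01 challenge.
certbot certonly --standalone -d nextcloud.example.com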

And now, all this works perfectly.

It has taken time to install all this, but it was worth the effort; I now have a powerful setup.

Thanks a lot for your help and kind regards,

Linux for Work
 
