[SOLVED] Hosting a website through a container

ciarandwi

New Member
Aug 22, 2024
Hi,

I'm just looking for some clarity on whether I'm thinking down the right path.

We have a server that has 5 different virtual machines; these are:
  1. NGinx
  2. Database
  3. Search
  4. Redis
  5. The Code (Server)
I've split all of these up on a NAT-mode server, so it's just 1 IP on the node (sd-...), all the containers use a local IP (192.168...) address, and all machines can communicate with one another. We have it configured like this so that we can allocate a set number of CPUs, memory, etc. to each machine; it also means we could easily run more containers for other websites, as everything is configured in its own container.
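(For reference, that per-container allocation is done on the node with `pct set`; the container IDs and values here are made up:)

```shell
# hypothetical container IDs -- adjust per container
pct set 101 --cores 2 --memory 2048   # NGINX
pct set 102 --cores 4 --memory 8192   # Database
pct set 105 --cores 2 --memory 4096   # The Code (Server)
```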

I'm now getting on to doing the DNS for the container, and in previous examples we have, the DNS points to an external IP (a 200... failover IP address) assigned to the NGINX container, which should mean the NGINX container is accessible from the internet.

Question: Is it best practice here to get a failover IP, or can I just use port forwarding, i.e. set the A record in DNS to the main IP of the server and then forward any traffic on ports 80 or 443 to my NGINX container?
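(Concretely, the port-forwarding variant I mean would be something like this on the node; the internal address is a placeholder:)

```shell
# forward HTTP/HTTPS arriving at the node to the NGINX container
# (hypothetical internal address 192.168.1.10)
iptables -t nat -A PREROUTING -p tcp --dport 80  -j DNAT --to-destination 192.168.1.10:80
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 192.168.1.10:443
```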

Thanks for any help

TL;DR: In DNS, the A record has to be an external IP. Is it best to buy a failover IP address and assign it to my NGINX container, OR to use my server's main IP address as the A record and then port forward 80 and 443 to my NGINX container?
 

Attachments

  • server-config.jpg (14.3 KB)
I don't know what exactly a failover IP is in this case, but with other failover IPs, e.g. with Hetzner, they only make sense if you have another server you can switch the failover IP to. If you don't have that, just use the default IP.

NAT is OK and is working, but I would recommend using a separate reverse proxy as the endpoint for the NAT, so that you can easily add other services behind it without having to change anything on the nginx of your current project. This can be another nginx or another proxy (e.g. Nginx Proxy Manager, Traefik, Caddy, etc.)
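As a sketch of what I mean, with Caddy (hostnames and internal IPs are placeholders):

```
# Caddyfile -- one edge proxy, many internal services
example.com {
    reverse_proxy 192.168.1.10:80
}
newproject.example.com {
    reverse_proxy 192.168.1.20:80
}
```

Adding another site later is just another block pointing at another internal IP; nothing on the existing project's nginx changes.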
 
The bottom line is this: I'm not quite sure how you are configuring the DNS records for your domain(s), but you would generally register just one A record for your domain, pointing to your server's publicly accessible IP address. You could possibly (I'm not sure about this) also add a second A record for a second publicly available IP address that points to your application(s), should the first be down for whatever reason. But you would have to research how to use this second IP as a fallback.
 
> I don't know what exactly a failover IP is in this case, yet with other failover IPs e.g. with hetzner, they only make sense if you have another server where you can switch this failover IP to. If you don't have that, just use the default IP.

This is from the server provider.

https://www.scaleway.com/en/docs/dedibox-network/ip-failover/concepts/#failover-ips

From that description, it looks like it's just used for moving from one server to another. (See Attachment)

> NAT is ok and is working, yet I would recommend to use a seperate reverse proxy as the endpoint for the nat so that you can easily add other services behind this without having to change stuff on the nginx of your current project. This can be another nginx (e.g. NPM, Traefik, Caddy, etc.)

This is basically the below, or am I wrong in thinking that? (See attachment)
 

Attachments

  • Screenshot 2024-12-13 at 11.21.55.png (18.8 KB)
> The base line is this: I'm not quite sure how you are configuring the DNS records for your domain(s) but you would generally register just one A record for your domain pointing to your server's publicly-accessible IP address. You could possibly, and I am not sure about this, also maybe add a second A record for a second publicly-available IP address that points to your application(s), should the first be down for whatever reason. But you would have to research how to use this second IP as a fallback.
I think that's the whole point of a failover IP. From the description our server provider gives, it's used either as a way of migrating containers to another datacenter/server OR as a secondary IP address, so it could be used for the above, if I'm reading it all correctly.

I just remember making a post where somebody said it's not smart to expose your server's main IP to the internet, but they never really gave an explanation as to why. Surely if a server is meant to be accessible from the internet, you want exactly that, or am I just being dumb?
 
> I think that's the whole point of a failover IP. From the description our Server provider gives, it's used as a way of migrating containers to another datacenter/server OR a way of using it as a secondary IP Address so it could be used for the above if i'm reading it all correctly
>
> I just remember making a post and somebody was talking about how it's not smart to put your server's main IP public to the internet and never really gave an explaination to why. Surely if a server is wanted to be accessed to the internet, you want that or am I just being dumb?
Well, as it is, your DNS provider maps your publicly accessible IP to a domain. If you were to ping that very same domain with, say, `ping your-domain.com`, ping would, among other information, reveal your server's IP address anyway to whoever executed the ping, so I'm not entirely sure why that person told you this.
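To illustrate: the lookup `ping` performs is available to any client, so the address behind your A record is public by design (using `localhost` here as a stand-in for a real domain):

```python
import socket

# Any client can resolve a domain to the IP its A record points at --
# this is the same lookup `ping your-domain.com` performs before pinging.
ip = socket.gethostbyname("localhost")  # stand-in for your real domain
print(ip)
```

Hiding the IP of a host that must be publicly reachable is therefore not really possible; what matters is that only the ports you intend to serve (80/443) are exposed on it.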

If you want to set up some kind of failover, you could of course always do the following:
You said in your initial post you have 5 services that need to run. You could spin up one set of five containers, one per service you plan to run. You make an A record for the public IP address of the container that holds your applications' entry point.

You then spin up a second set of 5 containers for your services and make an A record for the IP address of the server holding that set's entry point. You use this IP as your failover IP address. If you add both sets of containers to PVE's High Availability Manager and add shared storage for all 10 containers, the HA manager will flip-flop between the two sets, depending on whether one of the sets of containers fails.

Addendum: You would need to make an A record for the main IP address of each set.
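In DNS terms, the addendum would look roughly like this (hypothetical zone fragment; note that two plain A records give round-robin rather than health-checked failover — the actual flip-flop comes from PVE's HA manager):

```
; hypothetical A records, one per set's entry point
www.example.com.  300  IN  A  203.0.113.10   ; entry point of set 1
www.example.com.  300  IN  A  203.0.113.20   ; entry point of set 2
```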
 
> If you want to setup some kind of failover, you could of course always do the following:
> You said in your initial post you have 5 services that need to run. You could spin up 1 set of five containers for each service you plan to run. You make an A record for the public IP address of the container that holds your applications' entry points.
This is what we have currently so that's good to know.

From what everyone has said here, it looks like you categorically need a reverse proxy to do this. I'll get my higher-ups to order a new IP and we should be good to go.

Thank you all for your help today :)
 
One IP is enough, with port forwarding/NAT on the edge and a reverse proxy (nginx, Caddy, Traefik) with SNI.
Surely that would be 2 IPs? One IP is used for the NAT and the other is the reverse proxy address used in your SNI?

So in my case, a 150.1.2.3 address is the server's IP address. I wouldn't use that address in an A record, as that isn't a reverse proxy, so I would need to get another IP address, make that the reverse proxy address, and then use that address as the A record.
 
> Surely, that would be 2 IPs? One IP is using a NAT and the other is a reverse proxy address used in your SNI?
>
> So in my case, a 150.1.2.3 address is the Server's IP Address. I wouldn't use that address in an A Record as that isn't a reverse proxy so I would need to get another IP Address and make that a Reverse proxy address and then used that address as the A Record

Take my situation as an example: I host 3 websites, each with its own FQDN (www.example.com, stats.example.com, cloud.example.com). At the DNS level, each of those hostnames is a CNAME record to the same host/IP (services.example.com). services.example.com is my edge router, which has a routable IPv4 address. It port-forwards to a reverse proxy (Caddy) with an internal IP (192.168.0.0/24), which finally proxies each request, based on Server Name Indication (SNI), to the 3 hosts (actually containers) using internal IPs.
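In zone-file terms, the DNS side of that layout is just (addresses hypothetical):

```
; three names, one edge host
www.example.com.      IN  CNAME  services.example.com.
stats.example.com.    IN  CNAME  services.example.com.
cloud.example.com.    IN  CNAME  services.example.com.
services.example.com. IN  A      203.0.113.5
```

Only the edge host needs a public address; the proxy picks the right internal backend from the hostname the client asked for.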
 
> Take my situation as an example: I host 3 web-sites each having a FQDN (www.example.com,stats.example.com,cloud.example.com). At DNS level, each of that host names is a CNAME record to the same host / IP (services.example.com). services.example.com is my edge router, having a routable IPv4 . It does port-forwarding to a reverse proxy (using Caddy), having an internal IP (192.168.0.0/24) which finally proxies the request based on Server Name Identification (SNI) to the 3 hosts (actually containers) using internal IPs.
Ok, I think I've clocked onto what you're saying.

Sorry for all these questions; I've been making websites for nearly 8 years now, but never really had the opportunity to do a project start to finish, creating the server containers and whatnot, as somebody else used to do that... The DNS part I'm OK with, it was just the networking part.

With that being said, I think we do need to order a failover IP (secondary IP) and make it the reverse proxy address. Use that new address in the DNS and all should be working! Add a new bridge to the NGINX container with this new IP, so that the container is accessible from the internet and internally.

We use Cloudflare so it would be like this:

Cloudflare A Record -> NGINX (Reverse Proxy IP) -> NGINX goes to a localhost IP (where the code is hosted) -> Website should respond

Thank you for your help
 

Just an update, it seems like everything is operational!
 
