Correct way to configure a cluster's network?

justjosh

Member
Nov 4, 2019
Hi all,

I have a bunch of public IPs and 3 HV servers running a hyper-converged Proxmox cluster. Current network config: eth0 = public internet-facing traffic, eth1 = private storage network for Ceph traffic.

Is it a good idea to bind the management traffic to a separate VLAN on the storage network, allocate everything with private IPs, then run a single load-balanced nginx server to reverse-proxy the GUIs sitting on private IPs?

The reasoning behind this is twofold: reducing the vectors of attack (less resource-intensive compared to running a firewall on each HV) and, obviously, saving unnecessary public IPs.

The VMs will need public IPs, so they will get a virtIO NIC on vmbr0. What are the implications of not having a public IP on each HV for management (other than losing connectivity if the proxy server goes down)? The proxy server itself would have a public IP. My biggest worry is not being able to run SSL certs on the hosts with private IPs.

Thoughts appreciated.

Thanks!
 
There are a bunch of questions in your post; I'll try to go through them:

Is it a good idea to bind the management traffic to a separate VLAN on the storage network

No. It's strongly recommended to *physically* isolate the storage network if possible: heavy Ceph IO can saturate the link and delay Corosync traffic, and Corosync is very latency-sensitive, so nodes can drop out of the cluster membership and you can end up losing quorum. You *can* use the storage network as a fallback cluster link, but even then I'd be very careful.
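
If you do end up adding the storage network as a fallback link, that's just a second ring address per node in /etc/pve/corosync.conf. A minimal sketch with hypothetical node names and addresses (link 0 = dedicated cluster network, link 1 = storage network as fallback only):

Code:
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1   # dedicated cluster network (preferred)
    ring1_addr: 10.20.20.1   # storage network, fallback only
  }
  # ... pve2 and pve3 entries look the same
}

totem {
  version: 2
  cluster_name: mycluster
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
}

Remember to bump config_version in that file when editing it by hand, otherwise the change won't propagate across the cluster.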

allocate everything with private IPs, then run a single load-balanced nginx server to reverse-proxy the GUIs sitting on private IPs

Technically yes, although a single proxy is a single point of failure - you lose any fault-tolerance, of course.
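
If you go that route anyway, here's a rough sketch of what the nginx side could look like (hostnames, IPs and cert paths are placeholders, not anything PVE-specific). The Upgrade/Connection headers matter, otherwise noVNC consoles break behind the proxy:

Code:
upstream proxmox {
    server 10.10.10.1:8006;   # pve1 GUI on the private network
    server 10.10.10.2:8006;   # pve2
    server 10.10.10.3:8006;   # pve3
}

server {
    listen 443 ssl;
    server_name pve.example.com;            # placeholder
    ssl_certificate     /etc/ssl/pve.crt;   # placeholder paths
    ssl_certificate_key /etc/ssl/pve.key;

    location / {
        proxy_pass https://proxmox;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # websockets for noVNC
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;                 # long-lived console sessions
    }
}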

reducing the vectors of attack (less resource-intensive compared to running a firewall on each HV)

Reducing attack vectors and reducing resource usage are two different things. The former makes little sense IMO, since a proxy server that transparently forwards all requests to the Proxmox hosts adds little security of its own; the PVE GUI is already secured using SSL/HTTPS.

In terms of performance, a modern iptables firewall consumes very few resources. I'd try it out and see whether you actually run into performance issues before optimizing prematurely.
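
If you do want to restrict who can reach the management interfaces without a proxy, a couple of plain rules per host already go a long way. A minimal sketch, assuming a hypothetical 10.10.10.0/24 management subnet:

Code:
# allow the PVE GUI (port 8006) only from the management subnet
iptables -A INPUT -p tcp --dport 8006 -s 10.10.10.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 8006 -j DROP

# same idea for SSH
iptables -A INPUT -p tcp --dport 22 -s 10.10.10.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP

The built-in PVE firewall can express the same thing via the GUI (datacenter/host rule sets) if you'd rather not manage raw iptables.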

The VMs will need public IPs, so they will get a virtIO NIC on vmbr0.

Bridges (vmbr0, etc.) are layer 2, i.e. they bridge Ethernet frames, not IP traffic (assigning an IP address to a bridge is a Linux-specific convenience). Giving out public IPs to VMs should be easy, provided that they are routable in the network your vmbr0 is connected to.
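
In your setup the host itself wouldn't even need an address on that bridge. A rough /etc/network/interfaces sketch, with eth0 as the public uplink:

Code:
auto eth0
iface eth0 inet manual

# the bridge only carries VM traffic; the host has no public address here
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eth0
    bridge-stp off
    bridge-fd 0

The VMs attached to vmbr0 then configure their public IP and gateway inside the guest as usual.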

My biggest worry is not being able to run SSL certs on the hosts with private IPs.

The PVE GUI is always protected using HTTPS. It might make sense to use the proxy approach for securing services running in your VMs, though, but as you stated, that removes fault-tolerance even if you use HA for the services themselves.
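
Also, private management IPs don't stop you from using trusted certificates. If each host has a DNS name, you can get a Let's Encrypt cert via a DNS-01 challenge out-of-band (no inbound public reachability needed) and install it for pveproxy. A rough sketch, assuming acme.sh with a DNS API; the provider plugin, domain and paths are placeholders:

Code:
# issue via DNS-01, so the host never has to be publicly reachable
acme.sh --issue --dns dns_cf -d pve1.example.com

# install the result for the PVE GUI
pvenode cert set ~/.acme.sh/pve1.example.com/fullchain.cer \
    ~/.acme.sh/pve1.example.com/pve1.example.com.key
systemctl restart pveproxy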

Hope that helps!
 
