Hi all,
I have a bunch of public IPs and 3 HV servers running a hyper-converged Proxmox cluster. Current network config: eth0 = public, internet-facing traffic; eth1 = private storage network for Ceph traffic.
Is it a good idea to move the management traffic onto a separate VLAN on the storage network, give everything private IPs, and then run a single load-balanced nginx server to reverse proxy the GUIs sitting on those private IPs?
The reasoning behind this is twofold: reducing the attack surface (less resource-intensive than running a firewall on each HV) and, obviously, saving public IPs that aren't really needed.
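To make that concrete, I'm picturing each host's /etc/network/interfaces ending up roughly like this (the VLAN ID, subnets and addresses are just placeholders for illustration, nothing is actually configured yet):

    auto eth1
    iface eth1 inet static
        address 10.10.10.11/24        # untagged: private Ceph/storage traffic

    auto eth1.20
    iface eth1.20 inet static
        address 10.10.20.11/24        # management VLAN - GUI/SSH reachable on this address only

    auto vmbr0
    iface vmbr0 inet manual           # no host IP on the public side
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0                   # VMs needing public IPs attach to this bridge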
The VMs will need public IPs, so they will get a virtIO NIC on vmbr0. The proxy server itself would have a public IP. What are the implications of not having a public IP on each HV for management (other than losing GUI access if the proxy server goes down)? My biggest worry is not being able to run SSL certs on hosts that only have private IPs.
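On the nginx side I'm imagining something along these lines per host, terminating SSL on the proxy and passing through to the Proxmox GUI on 8006 (the hostname, upstream IP and cert paths are made-up placeholders):

    server {
        listen 443 ssl;
        server_name pve1.example.com;                        # placeholder name for host 1

        ssl_certificate     /etc/ssl/pve1.example.com.crt;   # cert would live on the proxy, not the HV
        ssl_certificate_key /etc/ssl/pve1.example.com.key;

        location / {
            proxy_pass https://10.10.20.11:8006;             # HV management address on the private VLAN
            proxy_ssl_verify off;                            # Proxmox ships a self-signed cert by default
            proxy_http_version 1.1;                          # websockets needed for the noVNC console
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }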
Thoughts appreciated.
Thanks!