Hi there, I am using Proxmox VE 5.1 in a datacenter. My current setting is that all VMs share the same public IP address.
I also have a second public IP address available, which I would like to assign / dedicate to one VM.
The configuration I have on the PVE host is the following /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address xxx.xxx.xxx.78
        netmask 255.255.255.0
        gateway xxx.xxx.xxx.65
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address xxx.xxx.xxx.73
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

auto vmbr2
iface vmbr2 inet static
        address 192.168.0.254
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '192.168.0.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '192.168.0.0/24' -o vmbr0 -j MASQUERADE
        post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport xxx -j DNAT --to 192.168.0.110:xxx
        post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport xxx -j DNAT --to 192.168.0.110:xxx
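(For completeness, this is how I usually double-check the bridge layout on the node after changes; plain iproute2 commands, nothing Proxmox-specific:)
Code:
# list the bridges and the addresses/ports they carry
ip -br link show type bridge
ip -br addr show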
The VMs' NICs are bridged to vmbr2, and their configuration looks like this (an excerpt from /etc/sysconfig/network-scripts/ifcfg-eth0, as the VMs run CentOS):
Code:
DEVICE="eth0"
IPADDR=192.168.0.110
GATEWAY=192.168.0.254
So, the way I understand it, all outbound VM traffic is routed via vmbr2 to vmbr0 and thus leaves through the eno1 NIC with the public IP xxx.xxx.xxx.78.
Also, all INBOUND traffic from outside to port xxx on the public IP xxx.xxx.xxx.78 is forwarded to 192.168.0.110:xxx; everything else "ends" on the PVE host itself.
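For reference, this is roughly how I verify that the MASQUERADE / DNAT rules from the post-up lines above are actually loaded (standard iptables commands):
Code:
# list the current NAT rules with packet counters
iptables -t nat -L -n -v
# or print them in the same form as the post-up lines
iptables -t nat -S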
The vmbr1 is currently not in use.
This has worked perfectly fine for me so far!
As it is a production server, and there is really no way to reconfigure the PVE networking without restarting the whole node, I would like to get some feedback before I make any changes. I am also still undecided about which of the scenarios described below I will go for.
Scenario #1: I will use the second available IP address (xxx.xxx.xxx.73) for multiple VMs.
My idea is to configure vmbr1 the same way as vmbr0, and vmbr3 similarly to vmbr2, like this:
Code:
auto vmbr1
iface vmbr1 inet static
        address xxx.xxx.xxx.73
        netmask 255.255.255.0
        gateway xxx.xxx.xxx.65
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

auto vmbr3
iface vmbr3 inet static
        address 192.168.0.253
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '192.168.0.0/24' -o vmbr1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '192.168.0.0/24' -o vmbr1 -j MASQUERADE
Then I would forward the specific ports to the VMs, like this:
Code:
post-up iptables -t nat -A PREROUTING -i vmbr1 -p tcp --dport yyy -j DNAT --to 192.168.0.170:yyy
post-down iptables -t nat -D PREROUTING -i vmbr1 -p tcp --dport yyy -j DNAT --to 192.168.0.170:yyy
And all the VMs that should use the public IP xxx.xxx.xxx.73 would be bridged to vmbr3.
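Inside such a VM I would expect the configuration to be analogous to the vmbr2 case, e.g. for the VM at 192.168.0.170:
Code:
DEVICE="eth0"
IPADDR=192.168.0.170
GATEWAY=192.168.0.253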
I think this is the correct approach. But there is one thing I noticed when trying to use the Proxmox VE GUI to configure the bridges: I cannot give vmbr1 the Port/Slaves eno1 and the Gateway xxx.xxx.xxx.65 (i.e. make it identical to vmbr0 except for the address). The GUI says that the port eno1 is already in use on vmbr0, and the same for the gateway.
Q: Is this just a GUI limitation, or is it really not possible to set it up this way? I have neither another NIC nor another gateway available!
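To make the question more concrete: if two bridges really cannot share eno1 and the gateway, the only alternative I can think of would be to drop the extra bridge and simply add the second public IP as an additional address on vmbr0, roughly like this (untested, just a sketch):
Code:
auto vmbr0
iface vmbr0 inet static
        address xxx.xxx.xxx.78
        netmask 255.255.255.0
        gateway xxx.xxx.xxx.65
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
        # second public IP on the same bridge (assumption, not applied yet)
        up ip addr add xxx.xxx.xxx.73/24 dev vmbr0
        down ip addr del xxx.xxx.xxx.73/24 dev vmbr0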
Scenario #2: I will use the second available IP address (xxx.xxx.xxx.73) for a single specific VM.
In this case I would not configure vmbr1 at all; I would just bridge the VM directly to vmbr0, and the network configuration inside the VM would look something like:
Code:
DEVICE="eth0"
IPADDR=xxx.xxx.xxx.73
GATEWAY=xxx.xxx.xxx.65
Plus, for inter-VM communication, I would also define another interface, such as:
Code:
DEVICE="eth1"
IPADDR=192.168.0.170
GATEWAY=192.168.0.254
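On the Proxmox side the VM would then simply get two virtual NICs, one bridged to vmbr0 and one to vmbr2; as far as I understand the qm syntax, that would be something like this (VM ID 110 and virtio NICs are just an example):
Code:
# first NIC on the public bridge, second NIC on the internal bridge
qm set 110 --net0 virtio,bridge=vmbr0
qm set 110 --net1 virtio,bridge=vmbr2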
Q: Will this work as I expect?
Note:
* NGINX is installed on the PVE host and proxies the traffic on ports 80 and 443 to the respective web servers, roughly as sketched below.
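(The NGINX part is just an ordinary reverse proxy, something along these lines; server name and backend address are placeholders:)
Code:
server {
    listen 80;
    server_name example.com;              # placeholder vhost
    location / {
        proxy_pass http://192.168.0.110;  # placeholder backend VM
    }
}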
Any idea(s) / comments / corrections are highly appreciated!
Thanks all in advance!
tom.