Hello,
I have searched this forum and Reddit a lot for a way to create multiple isolated subnets in Proxmox, and everything I've tried seems to fail.
A bit of details:
I have one dedicated server with one NIC and one public IP. I am trying to create a lab environment for myself where I'll be learning to use enterprise Linux and Windows Server, and I need a separate subnet for each setup. I would like each subnet to be isolated and unable to reach the others. Currently, I have the configuration below, put together after reading the Proxmox docs and looking around here:
Code:
root@proxmox ~ # cat /etc/network/interfaces
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback
iface lo inet6 loopback

auto enp0s31f6
iface enp0s31f6 inet static
        address <My_Public_IP>
        netmask 255.255.255.192
        gateway <Gateway>
        up route add -net <Net> netmask 255.255.255.192 gw <Gateway> dev enp0s31f6

auto vmbr0
iface vmbr0 inet static
        address 10.10.0.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o enp0s31f6 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o enp0s31f6 -j MASQUERADE
#ocserv
        post-up iptables -t nat -A PREROUTING -i enp0s31f6 -p tcp --dport 443 -j DNAT --to 10.10.1.5:443
        post-down iptables -t nat -D PREROUTING -i enp0s31f6 -p tcp --dport 443 -j DNAT --to 10.10.1.5:443

auto vmbr1
iface vmbr1 inet static
        address 10.10.1.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.1.0/24' -o enp0s31f6 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.1.0/24' -o enp0s31f6 -j MASQUERADE

auto vmbr2
iface vmbr2 inet static
        address 10.10.2.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.2.0/24' -o enp0s31f6 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.2.0/24' -o enp0s31f6 -j MASQUERADE
vmbr0 is where I have an ocserv instance running so that I can VPN into my network and access all my servers by their private IPs.
vmbr1 has some of my Linux servers, and vmbr2 is for Windows.
I am using NAT and masquerading to give my VMs internet access. The issue with this setup is that all VMs can communicate with each other across the different subnets.
Is there a way to achieve what I want, where vmbr1 and vmbr2 VMs can't communicate with each other by default but still have internet access through the host NIC? It could be a change to my config, or maybe some external appliance.
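To illustrate the intent, this is roughly what I imagine the host firewall would need to do (just a sketch based on my config above — the subnets and the enp0s31f6 uplink name are from my setup, and I haven't confirmed this is the right approach):

```shell
# Allow reply traffic for connections that are already established
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Block the two lab subnets from reaching each other
iptables -A FORWARD -s 10.10.1.0/24 -d 10.10.2.0/24 -j DROP
iptables -A FORWARD -s 10.10.2.0/24 -d 10.10.1.0/24 -j DROP

# Still let each lab bridge reach the internet through the uplink
iptables -A FORWARD -i vmbr1 -o enp0s31f6 -j ACCEPT
iptables -A FORWARD -i vmbr2 -o enp0s31f6 -j ACCEPT
```

Is that the general idea, or is there a cleaner way to do it in Proxmox itself?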
I assume whatever achieves this will also isolate vmbr0, which I use to reach all my servers, so there should also be a way for me to open just the SSH/RDP ports between vmbr0 and vmbr1/vmbr2. I'm not sure; I'm really new to Linux and virtualization.
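For the exceptions, I'm guessing it would be something like the following (again only my guess, using the subnets from my config; ports 22 and 3389 are the standard SSH and RDP ports):

```shell
# Allow SSH from the VPN bridge (vmbr0) into the Linux subnet
iptables -A FORWARD -s 10.10.0.0/24 -d 10.10.1.0/24 -p tcp --dport 22 -j ACCEPT

# Allow RDP from the VPN bridge (vmbr0) into the Windows subnet
iptables -A FORWARD -s 10.10.0.0/24 -d 10.10.2.0/24 -p tcp --dport 3389 -j ACCEPT
```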
Thanks in advance.