Simulate hyper-v virtual switch external

dankusmemus
Jul 30, 2022
I'm trying to figure out a way to simulate the NAT found in Hyper-V's virtual switches. With VMware and Hyper-V, I can easily create a separate NAT network for the VM, different from the network the host is connected to, and then RDP into that VM using only its IP address without any additional port forwarding. From my understanding, if I create a NAT network in Proxmox, I would then need to port forward every VM I want to be accessible. Is there a way to RDP into hosts on a NAT network without port forwarding?

Thanks!

Edit: Not sure if this would work, but it's something I'm considering: I could create a virtual router to sit between vmbr0 and vmbr1 (the virtual machine network I want to NAT). That way I would be able to route traffic between the two networks.
 
Sorry, didn't fully read your question. Having a private VM network on a Proxmox host is fairly common, particularly when running in a hosted environment.

The usual way is via iptables rules, which need to be manually added to the host's /etc/network/interfaces file.

Here's an example
auto lo
iface lo inet loopback

auto ens18
iface ens18 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.100.10.60/24
    gateway 10.100.10.254
    bridge-ports ens18
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address 192.168.100.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 192.168.100.0/24 -o vmbr0 -j MASQUERADE
    post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
    post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1

The host is connected to the physical 10.100.10.0/24 network, while the VMs use the 192.168.100.0/24 network and route all external traffic via vmbr0. However, this does not enable any incoming RDP connections; you would need to add port mappings to direct incoming connections to the relevant VM. Alternatively, running Tailscale on the VM and on the remote system should allow direct access (not tried it, but it should work).
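For reference, a port mapping like that is a DNAT rule in the nat table's PREROUTING chain. A minimal sketch, assuming the setup above, one Windows VM at 192.168.100.10 (hypothetical), and an arbitrary external port 3390:

```shell
# Forward TCP 3390 arriving on vmbr0 to the VM's RDP port (3389).
# 192.168.100.10 and port 3390 are example values; pick one external
# port per VM you want reachable.
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 3390 \
    -j DNAT --to-destination 192.168.100.10:3389
```

You would then RDP to the host's address (10.100.10.60:3390 in this example). To make it persistent, add it as a post-up/post-down pair under vmbr1 in /etc/network/interfaces, like the MASQUERADE rules above.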
 
Thanks for the reply! That was something I also tried: I used an iptables rule to NAT the internal network out through vmbr0. The only issue is that I would need to port forward every single host I want to be accessible in the future, which makes scaling annoying. VMware and Hyper-V use virtual switches to forward the traffic without needing port forwarding. I haven't fully tested it yet, but I plan on creating an OpenWrt virtual machine to act as a router between the internal network and vmbr0.
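For anyone sketching out that router-VM idea: the OpenWrt VM just needs one NIC on each bridge. Assuming VMID 100 (hypothetical), attaching both bridges from the Proxmox CLI would look something like:

```shell
# Give the (hypothetical) OpenWrt VM 100 a leg on each bridge:
# net0 on vmbr0 (upstream side), net1 on vmbr1 (internal VM network)
qm set 100 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1
```

Inside OpenWrt you would then disable masquerading on its upstream interface so traffic is routed rather than NATed; otherwise you are back to needing port forwards on the router VM instead of the host.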
 
I'm not sure I understand the technology you're referring to - is it VMware NSX?

There is a development feature in Proxmox, documented here
https://pve.proxmox.com/pve-docs/chapter-pvesdn.html
which might be relevant
I'm mostly referring to the ability to create a separate network (other than vmbr0) and still be able to reach that network even though it is not bridged and is in a different subnet than vmbr0. Let's say I have 192.168.1.0/24 as my vmbr0 network and 192.168.10.0/24 as my vmbr1 network, but I still want to be able to access the hosts on vmbr1 via RDP. In Hyper-V, a virtual switch is created automatically that allows RDP access to a guest even when it is on a separate network from the main bridged network. The solution I can think of in Proxmox is to create a virtual router to route traffic between vmbr1 and vmbr0, instead of requiring port forwarding for every host on the vmbr1 network.
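If plain routing (no NAT) is acceptable, you may not need a router VM at all: the Proxmox host can route between the bridges itself, as long as the rest of the network knows how to reach the VM subnet. A minimal sketch, assuming the addressing above and a host vmbr0 address of 192.168.1.60 (hypothetical):

```shell
# On the Proxmox host: allow traffic to cross between vmbr0 and vmbr1
echo 1 > /proc/sys/net/ipv4/ip_forward

# On the RDP client (or, better, on the LAN's default gateway):
# send the VM subnet via the Proxmox host's vmbr0 address
ip route add 192.168.10.0/24 via 192.168.1.60
```

With the return route in place you can RDP straight to a VM's 192.168.10.x address, Hyper-V style, with no per-VM port forwards. The catch is that someone has to hold that route; adding it on the LAN router covers every client at once, which is roughly what the router-VM approach would do too.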
 
