[TUTORIAL] PVE 6.2 Private VM (NAT) network configuration setup

tuathan

Member
May 23, 2020
I had been trying to create a private IP (NAT) setup for my VMs and managed to do it as follows, heavily relying on information in reference [1]. I have re-titled this as a Tutorial now:

1. In the Proxmox web interface, under the host's network configuration, create a second bridge, vmbr1, with an IP address only, e.g. 192.168.1.1/24.
(This assumes vmbr0 is already configured and in use by the PVE host for network access; in this example it is on the 10.140.79.x network.)

2. On the PVE host node, edit /etc/network/interfaces (e.g. with nano) to look like below:

# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
address 10.140.79.120/24
gateway 10.140.79.1
bridge-ports eth0
bridge-stp off
bridge-fd 0

auto vmbr1
iface vmbr1 inet static
address 192.168.1.1/24
bridge-ports none
bridge-stp off
bridge-fd 0

post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '192.168.1.0/24' -o vmbr0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '192.168.1.0/24' -o vmbr0 -j MASQUERADE

3. Bring up the second (NAT) bridge:

ifup vmbr1

4. On the VM guest, edit /etc/network/interfaces (e.g. with nano) to look like below:

auto lo
iface lo inet loopback

iface ens18 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.1.2
netmask 255.255.255.0
gateway 192.168.1.1
bridge-ports ens18
bridge-stp off
bridge-fd 0



For further virtual machines you can use these IPs:
  • 192.168.1.3
  • 192.168.1.4
  • ...
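For example, a second guest at 192.168.1.3 could reuse the same layout. A sketch of that guest's /etc/network/interfaces, assuming its NIC is again named ens18 (only the address line changes):

Code:
auto lo
iface lo inet loopback

iface ens18 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.1.3
netmask 255.255.255.0
gateway 192.168.1.1
bridge-ports ens18
bridge-stp off
bridge-fd 0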
5. Run this iptables command on the PVE host (it forwards host port 3033 to SSH on the VM):

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 3033 -j DNAT --to 192.168.1.2:22
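If you add further guests, you need one DNAT rule per VM. A minimal sketch that only prints the rules rather than applying them (the pattern of host port 3033+i mapping to guest 192.168.1.(2+i) is just an example, not part of the tutorial):

```shell
# Print one DNAT rule per guest: host port 3033+i forwards to 192.168.1.(2+i):22
for i in 0 1 2; do
  echo "iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport $((3033+i)) -j DNAT --to 192.168.1.$((2+i)):22"
done
```

Check the printed lines, then paste them (or pipe the output into sh) on the PVE host.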

6. SSH onto the VM (via NAT):

ssh -p 3033 root@ip_of_proxmox_host

7. Make the iptables rules permanent (optional)

Install iptables-persistent on the PVE host, then save the current rules:

sudo apt-get install iptables-persistent
sudo netfilter-persistent save


Reference: [1] https://cyberpersons.com/2016/07/27/setup-nat-proxmox/
 
For my VPS Proxmox 6.2 hosted in the cloud I did the following, which is similar to your tutorial but shorter.

  1. Enable packet forwarding in /etc/sysctl.conf on the Proxmox host.

  2. Edit /etc/network/interfaces on the Proxmox host to get 10.10.10.0/24 for your containers, routed through eth0 of the Proxmox host:

    Code:
    # network interface settings; autogenerated
    # Please do NOT modify this file directly, unless you know what
    # you're doing.
    #
    # If you want to manage parts of the network configuration manually,
    # please utilize the 'source' or 'source-directory' directives to do
    # so.
    # PVE will preserve these directives, but will NOT read its network
    # configuration from sourced files, so do not attempt to move any of
    # the PVE managed interfaces into external files!
    
    auto lo
    iface lo inet loopback
    
    auto eth0
    iface eth0 inet static
    address YOUR-PUBLIC-STATIC-IP/YOUR-PUBLIC-MASK
    gateway YOUR-STATIC-GATEWAY
    
    auto vmbr1
    iface vmbr1 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    
    post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
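
    The step 1 change ("enable packet forwarding") is a single line in /etc/sysctl.conf on the host; a sketch of what that looks like (the key may already be present but commented out):

    Code:
    # /etc/sysctl.conf on the Proxmox host
    net.ipv4.ip_forward=1

    Apply it without a reboot by running sysctl -p.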

  3. Configure a container inside 10.10.10.0/24, for example using 10.10.10.2:

    [Screenshot: container network configuration, 2020-09-08]
 
@tuathan
Can I ask you why you're using a bridge interface inside the virtual machine and not simply the interface directly (ens18)?
Secondly, I'm having a little trouble understanding how this works without the auto directive for the interface itself which faces the network. Does vmbr0 imply raising the interface (eth0 on the host and ens18 in the VM)?
 
For my VPS Proxmox 6.2 hosted in the cloud i did the following, that is similar to your tutorial but shorter....


It worked. Just make sure the interface name in your iptables rules is correct.

How can I make port forwarding work?
 
You can use standard iptables features on the ProxMox host:

Code:
iptables -t nat -A PREROUTING -d YOUR-PUBLIC-STATIC-IP/32 -p tcp -m tcp --dport YOUR-EXTERNAL-PORT -j DNAT --to-destination YOUR-CONTAINER-IP:YOUR-CONTAINER-PORT
 
You can use standard iptables features on the ProxMox host: ...
Thank you.
 
Is this a typo?

Code:
post-up echo > /proc/sys/net/ipv4/ip_forward

Shouldn't it be:

Code:
post-up echo "1" > /proc/sys/net/ipv4/ip_forward
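
A quick check in any shell shows the difference (using a temp file in place of the procfs entry):

```shell
# "echo >" writes only a newline; "echo 1 >" writes the digit 1
f=$(mktemp)
echo > "$f"
wc -c < "$f"   # prints 1: the file holds just a newline, not the digit
echo 1 > "$f"
cat "$f"       # prints 1
rm -f "$f"
```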

Or am I missing something?
 
@tuathan
Can I ask you why you're using a bridge interface inside the virtual machine and not simply the interface directly (ens18)? ...
Hi.

In my case I would answer: because it is good practice to keep your configurations and installations standardized and normalized. That way you follow a standard configuration that is easy to manage, teach, learn and pass on to others.

A bare network card will not add any functionality if, by any chance, you later require VLANs or want to use the same card for your VMs.

You could even go as far as removing the public IP from the Proxmox host, creating a secondary internal bridge and a phantom NIC, setting up a VM as a firewall with IPS and IDS, and plugging it into the secondary bridge on the internal NIC. Even though Proxmox has its own firewall, this lets you pass traffic through an additional firewall (Sophos, pfSense, etc.) that you have set up virtually, in order to add more functionality. Something like the following:

[ PROXMOX VE ] CLOUD --- [NIC1 - BRIDGE0 - nic1 of VM firewall - transparent filters and rules - nic2 of VM firewall - BRIDGE1 - FAKE NIC2] -- [internal NIC with public IP]

Then, for secure access, you could reach the firewall via VPN by setting up an internal network, an internal NIC and an internal VM for access and administration, or any other way you come up with.

Remember that on a cloud server you may want to add a firewall you like more than the simple Proxmox firewall, which is fine and works, but if you want to go beyond NAT and add IDS, IPS and more, a single NIC will not be enough.

I would ask the opposite: why would you leave a network card on its own, directly, and not put it into a bridge?

Putting the NIC into a bridge gives you a core and a base for any future good ideas that come to mind.
 
I had been trying to create a private IP (NAT) setup for my VMs and managed to do it as follows, heavily relying on information in reference [1]. ...
Thanks a lot for your post. It helped me with my setup and resolved my problem.
 
Hi.

In my case I would answer: because it is good practice to keep your configurations and installations standardized and normalized. ...
I don't buy that and I think it's overcomplicated. There's no reason to set up bridge interfaces inside the virtual machines too. You can very easily add and remove interfaces whenever you want and associate them to whatever VLAN you want, so that wouldn't compromise consistency and flexibility (such as using VLANs later on – you'd associate those interfaces with an access port (untagged) and you're good to go).
 
I don't buy that and I think it's overcomplicated. There's no reason to set up bridge interfaces inside the virtual machines too. ...
It’s true that setting up bridges might seem overcomplicated for newcomers or those unfamiliar with Linux networking, but for experienced Linux administrators, it’s a standard and powerful practice. Proxmox simplifies many of the tasks we’ve traditionally done via the command line, and bridges are a key part of that. Let me clarify a few points:

  1. Bridges on the Proxmox Host, Not Inside VMs:
    I’m not advocating for creating bridges inside every virtual machine. Instead, I’m referring to setting up bridges on the Proxmox host itself. This is where the real power of Proxmox networking lies. By configuring bridges on the host, you create a flexible and scalable networking environment for all your VMs. This approach is not only efficient but also aligns with Proxmox’s own recommendations.
  2. Why Bridges Are Essential:
    Bridges enable advanced networking features like bridging, routing, and masquerading, which are crucial for managing a virtualized environment. Without bridges, you’d lose the ability to easily manage VLANs, isolate networks, or implement complex routing setups. Proxmox’s documentation explicitly recommends using bridges (either Linux bridges or Open vSwitch) because they enhance performance and provide the flexibility needed for modern virtualized environments.
  3. Proxmox as the Master Host:
    Proxmox should always be seen as the master host, and bridges are the foundation for networking within its "cloud" of VMs. Without bridges, administering tasks like network isolation, traffic filtering, or even simple NAT setups becomes significantly harder. Bridges allow you to abstract the physical network interfaces and create virtual networks that can be easily managed and scaled.
  4. Future-Proofing and Flexibility:
    While it’s true that you can add or remove interfaces as needed, bridges provide a consistent and future-proof setup. For example, if you decide to implement VLANs, firewalls, or advanced routing later, having bridges in place makes the transition seamless. It’s much easier to adapt an existing bridge to new requirements than to retrofit a non-bridged setup.
  5. Performance and Capability:
    Bridges aren’t just about complexity—they enhance performance and capability. By using bridges, you can optimize traffic flow, implement security measures (like firewalls or IDS/IPS systems), and even integrate third-party tools (e.g., Sophos, pfSense) into your network architecture. This level of control and performance is difficult to achieve without bridges.
  6. Proxmox’s Own Guidance:
    Proxmox’s official documentation and best practices recommend using bridges for a reason. Whether you choose Linux bridges or Open vSwitch, they provide the foundation for a robust and scalable virtualized network. Ignoring this recommendation might work for simple setups, but it limits your ability to grow and adapt your environment.

Final Thoughts:
While it’s possible to avoid bridges in very simple setups, doing so sacrifices the flexibility, scalability, and advanced features that Proxmox is designed to provide. For those of us who have worked with Linux networking for years, bridges are a natural and essential part of any virtualized environment. They’re not just a "nice-to-have"—they’re a cornerstone of effective Proxmox administration.
 
Your post stinks of LLM output, it sounds both robotic and useless.

That said, if you're not advocating for creating bridges inside every virtual machine, then why respond in the first place? I've never said you shouldn't use bridges on the Proxmox host. Also, saying that you wouldn't use bridges inside every virtual machine means that there are some virtual machines which you should be adding bridges to. Which ones? You're not addressing that (because ChatGPT probably isn't covering that). This isn't about being a newcomer or not, it's about setting up your network in a proper way, that's it.
 
I had been trying to create a private IP (NAT) setup for my VMs and managed to do it as follows, heavily relying on information in reference [1]. ...
Hello! I’d like to help you achieve your goal of setting up a private IP (NAT) environment for your VMs in a more streamlined and modern way. Since you’re using Proxmox, I recommend upgrading to Proxmox VE 8 if you haven’t already. This version introduces Software-Defined Networking (SDN), which simplifies many of the tasks you’re trying to accomplish. With SDN, you can configure NAT and other networking features directly through the Proxmox GUI, making the process more intuitive and less error-prone.

Below, I’ll walk you through an example setup using your provided IP addresses. I’ll also explain each step so you can understand why it’s done this way. For this example, I’ll use Open vSwitch (OVS) bridges, which I highly recommend for their flexibility and performance. However, you can adapt this to use Linux bridges if you prefer.


Step 1: Configure the Network Interfaces

First, let’s set up the network interfaces on your Proxmox host. This configuration will define how your host communicates with the outside world and how it handles internal VM traffic.

Edit the file /etc/network/interfaces to include the following:

Bash:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr1
        dns-nameservers x.x.x.x y.y.y.y
        dns-search your.domain
# dns-* options are implemented by the resolvconf package, if installed

auto ovsip1
iface ovsip1 inet static
        address 10.140.79.120/24
        gateway 10.140.79.1
        ovs_type OVSIntPort
        ovs_bridge vmbr1

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge

auto vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports ovsip1 eth0

auto vmbr2
iface vmbr2 inet manual
        ovs_type OVSBridge

Explanation:

  • eth0: This is your physical network interface. We configure it as an OVS port and attach it to the vmbr1 bridge.
  • ovsip1: This is an internal OVS port that allows the host to communicate with the external network. It’s assigned your public IP (10.140.79.120).
  • vmbr1: This OVS bridge carries the external traffic; it acts as your WAN or external bridge.
  • vmbr0 and vmbr2: These OVS bridges handle private traffic for your VMs, for example development environments without external access. They are used for internal communication between VMs and for isolation: some VMs can sit on vmbr0 and others on vmbr2, and the VMs on vmbr0 will not talk to those on vmbr2.

Step 2: Configure SDN for NAT

Next, we’ll configure SDN to handle NAT for your private network. This is where the magic happens! Create or edit the file /etc/network/interfaces.d/sdn with the following content:

Bash:
#version:21

auto vnet1
iface vnet1
        address 192.168.1.1/24
        post-up iptables -t nat -A POSTROUTING -s '192.168.1.0/24' -o ovsip1 -j SNAT --to-source 10.140.79.120
        post-down iptables -t nat -D POSTROUTING -s '192.168.1.0/24' -o ovsip1 -j SNAT --to-source 10.140.79.120
        post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
        post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        alias vnet1
        ip-forward on

Explanation:

  • vnet1: This is the virtual network interface for your private NAT. It’s assigned the IP 192.168.1.1, which acts as the gateway for your VMs.
  • SNAT: This rule ensures that traffic from your private network (192.168.1.0/24) is masqueraded (NAT’ed) using your host’s public IP (10.140.79.120).
  • ip-forward on: Enables IP forwarding, allowing traffic to flow between your private network and the external network.

Step 3: Configure SDN Zones and Subnets

Now, let’s define the SDN zones and subnets. These configurations tell Proxmox how to manage your private network.

3.1. Zones Configuration (/etc/pve/sdn/zones.cfg):
Bash:
simple: zone1
        dhcp dnsmasq
        ipam pve

3.2. Virtual Networks Configuration (/etc/pve/sdn/vnets.cfg):
Bash:
vnet: vnet1
        zone zone1
        alias vnet1

3.3. Subnets Configuration (/etc/pve/sdn/subnets.cfg):
Bash:
subnet: zone1-192.168.1.0-24
        vnet vnet1
        dhcp-dns-server 192.168.1.2
        dhcp-range start-address=192.168.1.154,end-address=192.168.1.254
        gateway 192.168.1.1
        snat 1

Explanation:

  • zone1: This is the SDN zone for your private network. It uses dnsmasq for DHCP and pve for IP address management.
  • vnet1: This is the virtual network tied to zone1.
  • subnet: Defines the private subnet (192.168.1.0/24), including the DHCP range and gateway.

Step 4: Apply and Test

  1. Apply the Configuration:
    • Restart the networking service on your Proxmox host:
    Bash:
    systemctl restart networking
    • Verify that the bridges and interfaces are up:
    Bash:
    ip a
  2. Test the Setup:
    • Assign private IPs (192.168.1.x) to your VMs and ensure they can access the internet through NAT.
    • Use the Proxmox GUI to monitor and manage your SDN configuration.

Why This Approach?

  • Simplicity: By using SDN, you avoid manual iptables rules and complex configurations.
  • Scalability: This setup can easily be extended to include more subnets, VLANs, or advanced routing.
  • GUI Integration: Proxmox’s SDN features are fully integrated into the GUI, making management much easier.

NOTE:

Remember to install dnsmasq for DHCP leases:
Code:
# apt update
# apt install dnsmasq
# systemctl disable --now dnsmasq
Remember to set forwarding in sysctl.conf to allow the SNAT to work:
Bash:
# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
# sysctl -p

Reference [1] https://pve.proxmox.com/pve-docs/chapter-pvesdn.html
 
Your post stinks of LLM output, it sounds both robotic and useless.

That said, if you're not advocating for creating bridges inside every virtual machine, then why respond in the first place? ...
First of all, I did not use an LLM. I was answering what you said, and I quote: "... There's no reason to set up bridge interfaces inside the virtual machines too ...". I leave your full text:
I don't buy that and I think it's overcomplicated. There's no reason to set up bridge interfaces inside the virtual machines too. You can very easily add and remove interfaces whenever you want and associate them to whatever VLAN you want, so that wouldn't compromise consistency and flexibility (such as using VLANs later on – you'd associate those interfaces with an access port (untagged) and you're good to go).

If you read my answer, I didn't propose setting bridge interfaces inside the virtual machines. My proposal is for an environment such as AWS: it is your cloud, traffic arrives at your NIC, and with Proxmox there, that NIC needs a bridge to which a VM acting as a firewall (pfSense, for example) is attached; that VM then needs another internal NIC on another internal bridge, behind which all the other VMs are protected by that "pfSense" firewall.

That is what I said.

All the information allegedly taken from an LLM can be found on the Proxmox site; it makes that recommendation, and I extracted the information from course documentation I prepared a few days ago.

Greetings.
Greetings.