Hi there,
I'm currently setting up a server as a Proxmox host which is supposed to hold two VMs (for now):
- one Debian 12 instance as a remote desktop server with ThinLinc
- one Debian 12 instance as a compute node for HTCondor
The host machine has four onboard 1 GbE network ports and a dual-port 10 GbE Intel NIC on a PCIe card.
My plan is as follows (a rough config sketch is below the list):
- two of the 1 GbE ports in a bond, going into our internal switch for the management network and the virtual network between the two VMs (IP range 192.168.0.0/24)
- the other two 1 GbE ports in a bond, going into our DMZ switch for external access to the ThinLinc server (IP range 192.168.10.0/24)
- the two 10 GbE ports in a bond, going into our internal storage network on an isolated switch (IP range 10.0.10.0/24)
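For concreteness, here is a rough /etc/network/interfaces sketch of that layout. The NIC names (eno1-eno4, enp5s0f0/f1), the host IPs, and the LACP (802.3ad) bond mode are just my assumptions; the switches would need matching LACP configuration:

```
# --- internal bond: management + inter-VM traffic (192.168.0.0/24) ---
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.10/24
    gateway 192.168.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

# --- DMZ bond: external access to the ThinLinc VM (192.168.10.0/24) ---
auto bond1
iface bond1 inet manual
    bond-slaves eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad

# no host IP on purpose: the host should not be reachable on the DMZ
auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0

# --- storage bond: isolated 10 GbE network (10.0.10.0/24) ---
auto bond2
iface bond2 inet manual
    bond-slaves enp5s0f0 enp5s0f1
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr2
iface vmbr2 inet static
    address 10.0.10.10/24
    bridge-ports bond2
    bridge-stp off
    bridge-fd 0
```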
The 10 GbE bond and the internal 1 GbE bond should be accessible to both VMs and the host. The external (DMZ) 1 GbE bond must not allow any access to the host or the Condor VM, ONLY to the ThinLinc VM. Is there any way to configure this without passing the two 1 GbE ports through to that VM?
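What I have in mind so far is simply leaving vmbr1 without a host IP (as in the sketch above) and attaching only the ThinLinc VM to it, i.e. something like this in the ThinLinc VM's config (/etc/pve/qemu-server/<vmid>.conf; the MAC addresses are placeholders):

```
net0: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0
net1: virtio=AA:BB:CC:DD:EE:02,bridge=vmbr1,firewall=1
net2: virtio=AA:BB:CC:DD:EE:03,bridge=vmbr2
```

The Condor VM would only get net devices on vmbr0 and vmbr2. But I'm unsure whether an IP-less bridge is actually enough to keep the host off the DMZ segment, or whether I'd also need firewall rules on vmbr1.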
I don't want to use passthrough because PCIe passthrough pins the VM's entire RAM in host memory up front, as I learned here (https://forum.proxmox.com/threads/pcie-passthrough-breaks-display-of-memory-usage.142444/) and here (https://forum.proxmox.com/threads/very-high-memory-usage-on-vm.140907/). Both VMs together will have >90% of the host's RAM allocated, and as far as I understand there can be severe issues when >80% of host RAM is permanently pinned.
Thanks in advance for any advice!