Hi all,
I'm sure this is a noob mistake I'm making somewhere, but I need some help optimizing the network config for a couple of VMs (this will grow to more later) - well, actually 1 LXC container and 1 VM.
I have a single PVE host. The host has a 2-port NIC on the motherboard plus a 4-port NIC, configured as follows (a rough sketch of the matching /etc/network/interfaces follows the list):
- One physical port is tied to vmbr0 for PVE management.
- One physical port is tied to vmbr1 (no IP address, etc.).
- One physical port is tied to vmbr2 (no IP address, etc.).
- I also set up another bridge called vmbr10 (no IP address and no physical port attached, etc.).
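For context, here's roughly what that bridge layout translates to in the host's /etc/network/interfaces (the NIC names and the management IP below are placeholders, not my exact values):

    auto lo
    iface lo inet loopback

    # Management bridge - one physical port, carries the PVE host's IP
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.0.5/24      # placeholder management address
        gateway 192.168.0.1
        bridge-ports enp1s0f0       # placeholder NIC name
        bridge-stp off
        bridge-fd 0

    # Guest bridge tied to a physical port, no IP on the host
    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports enp1s0f1
        bridge-stp off
        bridge-fd 0

    # Second guest bridge tied to a physical port, no IP on the host
    auto vmbr2
    iface vmbr2 inet manual
        bridge-ports enp2s0f0
        bridge-stp off
        bridge-fd 0

    # Host-internal bridge - no physical port, guest-to-guest traffic only
    auto vmbr10
    iface vmbr10 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0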
I have an LXC container set up as a Debian file/media server (Turnkey Linux Mediaserver) running Samba for Windows clients. I also have a Windows 10 VM that is used for various things and frequently accesses data on the Samba shares, so I'd like to maximize file-transfer speed between the file server and the VM.
Originally, I tried to set up networking for these machines as follows:
- eth0 on the Debian container was connected to vmbr1, with configuration completed inside the container as normal. The IP address is static and assigned by an external router on the 192.168.0.0/24 network.
- Also on the Debian container, using the PVE GUI I created a 2nd interface (eth1) connected to vmbr10 and gave it the IP address 10.0.0.10/24 with no gateway.
- Inside the Debian container I changed the route metrics to make eth1 primary.
- eth0 on the Windows VM was connected to vmbr2, with configuration completed inside the VM as normal. As with eth0 on the Debian container, this one also gets a static IP from my external router on the 192.168.0.0/24 network.
- Also on the Windows VM, using the PVE GUI I created a 2nd interface (eth1) connected to vmbr10 and gave it the IP address 10.0.0.20/24 with no gateway.
- Inside the Windows VM I changed the route metrics to make eth1 primary (commands sketched after the list).
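The metric changes were along these lines (the interface alias and metric values here are examples, not necessarily what I typed verbatim):

    # Inside the Debian container: inspect the routes, then prefer eth1
    # by giving its subnet route a low metric
    ip route show
    ip route replace 10.0.0.0/24 dev eth1 metric 10

    # Inside the Windows VM (admin PowerShell): lower the metric on the
    # NIC that holds 10.0.0.20 so Windows prefers it
    Set-NetIPInterface -InterfaceAlias "Ethernet 2" -AddressFamily IPv4 -InterfaceMetric 10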
So as a test, I removed vmbr10 and the eth1 interface from each guest, and just connected eth0 on both machines to vmbr1 (still tied to 1 physical port on the host). Sure enough, this time the file-transfer speed almost doubled. What I don't like about this is that the VM and the container have to share a physical port for outside connections.
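For what it's worth, I was measuring with large SMB file copies; something like iperf3 between the two guests would probably isolate the network path better (IPs as above, with "x" being whatever the router handed out):

    # On the Debian container (server side):
    iperf3 -s

    # On the Windows VM (iperf3 Windows build), testing each path separately:
    iperf3 -c 10.0.0.10     # internal path over vmbr10
    iperf3 -c 192.168.0.x   # path out one physical port and back in the other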
Obviously I'm missing something in the original configuration. I am admittedly a networking novice at best, so I'm just looking for some help optimizing this. In the end, I'd like to have what I was aiming for in the original config:
- A dedicated physical port for each guest for connections outside the PVE host.
- Traffic between the VM and the container should stay inside the PVE host for maximum speed (again, I assumed the 2nd bridge on the 10.0.0.0/24 network would accomplish that; one way I could test this is shown below).
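One guess on my part: since both guests can also reach each other via their 192.168.0.x addresses, maybe the SMB traffic never actually used the 10.x interfaces, metrics or not. Addressing the share by its internal IP should pin SMB to the vmbr10 path regardless of metrics (the share name here is just an example):

    # From the Windows VM - map the share by the container's vmbr10 address:
    net use Z: \\10.0.0.10\media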