[SOLVED] Help Needed - Optimal Network Config for VMs

ktide1

New Member
Feb 12, 2019
Hi all,

I'm sure this is a noob mistake I'm making somewhere, but I need some help optimizing the network config for a couple of VMs (will grow to more later) - well, actually one LXC container and one VM.

I have a single PVE host. The host has a 2-port NIC on the motherboard plus a 4-port NIC with configuration as follows:
  1. One physical port is tied to vmbr0 for PVE management.
  2. One physical port is tied to vmbr1 (no IP address, etc.).
  3. One physical port is tied to vmbr2 (no IP address, etc.).
  4. I also set up another bridge called vmbr10 (no IP address, etc.).
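For reference, a bridge layout like this lives in /etc/network/interfaces on the PVE host. A sketch of what it might look like (the enpXs0 port names and the management address are placeholders - substitute your actual NIC names and addressing):

```
# Management bridge - carries the host's own IP
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.2/24
    gateway 192.168.0.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0

# Bridge for the container's external traffic - no host IP
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0

# Bridge for the VM's external traffic - no host IP
auto vmbr2
iface vmbr2 inet manual
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0

# Host-internal bridge - no physical port attached at all
auto vmbr10
iface vmbr10 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```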

I have an LXC container set up for a Debian file/media server (Turnkey Linux Mediaserver) running samba for Windows clients. I also have a Windows 10 VM running that is used for various things that access the data on the samba shares frequently, so I'd like to maximize file transfer speed between the file server and VM.

Originally, I tried to set up networking for these machines as follows:
  1. eth0 on the Debian machine connected to vmbr1 with configuration completed inside the container as normal. IP address is static and assigned by an external router on 192.168.0.0/24 network.
  2. Also on the Debian container, using PVE GUI I created a 2nd interface (eth1) connected to vmbr10 and gave it IP address 10.0.0.10/24 with no gateway.
  3. Inside the Debian container I changed route metrics to make eth1 primary.
  4. eth0 on the Windows VM was connected to vmbr2 with configuration completed inside the VM as normal. As with eth0 on the Debian machine, this one also gets a static IP from my external router on the 192.168.0.0/24 network.
  5. Also on the Windows VM, using PVE GUI I created a 2nd interface (eth1) connected to vmbr10 and gave it IP address 10.0.0.20/24 with no gateway.
  6. Inside the Windows VM I changed route metrics to make eth1 primary.
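For anyone following along, the metric changes in steps 3 and 6 look roughly like this (interface names, aliases, and metric values are illustrative, not taken from the original setup):

```
# Debian container: add a low-metric route for the host-internal
# network via eth1 (the metric only influences routes that exist)
ip route add 10.0.0.0/24 dev eth1 metric 1

# Windows VM, in PowerShell: lower the interface metric of the
# second NIC so it is preferred
# Set-NetIPInterface -InterfaceAlias "Ethernet 2" -InterfaceMetric 5
```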
So with this setup, I assumed traffic between the Debian container and the Windows VM would remain within the host at optimal transfer speed, using the 10.0.0.0/24 network, since both were connected to vmbr10 (I assumed vmbr1 and vmbr2 would only be used for connections outside the host). However, in some test file transfers on the Windows VM, speeds were slower than expected - still good, but clearly capped at the limit of the physical gigabit network. So it appeared each machine was using only the 192.168.0.0/24 network.

So as a test, I removed vmbr10 and eth1 on each virtual machine, and just connected eth0 on both machines to vmbr1 (still tied to one physical port on the host). Sure enough, this time the file transfer speed almost doubled. What I don't like about this is that the VM and container have to share a physical port for outside connections.

Obviously I'm missing something in the original configuration. I am admittedly a networking novice at best, so just looking for some help to optimize this. In the end, I'd like to have what I was aiming for in the original config:
  1. Dedicated physical port for each virtual machine for connections outside the PVE host.
  2. Traffic between VM and container should stay inside the PVE host for maximum speed (again, assumed the 2nd bridge on the 10.0.0.0/24 network would accomplish that).
This seems like it would be a fairly common use case. So what am I missing? Thanks in advance!
 

Richard

Proxmox Staff Member
Mar 6, 2015
ktide1 wrote: "So with this setup, I assumed traffic between the Debian container and the Windows VM would remain within host at optimal transfer speed, using the 10.0.0.0/24 network since both were connected to vmbr10 ..."

I think there is a misunderstanding here: the route metric has no effect on which IP address is used (and it is useless if you did not define any route via that interface). Which IP address (and, as a result, which network) is used has to be determined by the application(s) you run; e.g. for an NFS server, specify a 10.0.0.0/24 address as the destination, not a 192.168.0.0/24 one.
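To illustrate the point: the network that carries the traffic is decided by the destination address the client connects to. Assuming the addresses from the original post (the share name is made up):

```
# On a Linux client, the kernel picks the outgoing interface from the
# destination address; you can verify which one it would choose with:
#   ip route get 10.0.0.10     -> should go out via eth1 (vmbr10)
#   ip route get 192.168.0.1   -> should go out via eth0 (physical net)
#
# From the Windows VM, connecting to the 10.0.0.x address (not the
# 192.168.0.x one) is what keeps the traffic on the internal bridge:
#   net use Z: \\10.0.0.10\share
```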
 

ktide1

New Member
Feb 12, 2019
Richard wrote: ""route metric" does not have any effect regarding which IP address is used ... has to be defined by the application(s) you run ..."
OK, thank you for the reply, I think I understand. I'll poke around in my SMB server and report back.
 

ktide1

New Member
Feb 12, 2019
Hi, just wanted to post back and say I got this sorted out. Thanks to Richard for pointing me in the right direction - as I said, I'm really a networking novice, lol.

For starters, I had failed to add the IP address of eth1 as a second interface in my smb.conf file, so it was impossible to connect to the share from Windows using that IP address until I added it explicitly to the samba config and rebooted. After that, I could connect to the share at 10.0.0.10 as planned (via the host-only vmbr10), and transfer speeds were much faster, as expected. From there I just had to add an entry to the hosts file on the Windows machine so I could browse the samba server by hostname over the 10.0.0.0 network, and all is good now.
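For anyone hitting the same problem, the two fixes sketched out (the interface list, hostname, and exact lines are illustrative - adjust to your own setup):

```
# /etc/samba/smb.conf on the container: make samba listen on eth1's
# address as well (relevant [global] lines only)
[global]
    interfaces = lo eth0 10.0.0.10/24
    bind interfaces only = yes

# C:\Windows\System32\drivers\etc\hosts on the Windows VM: resolve the
# server's hostname to its host-internal address
10.0.0.10    mediaserver
```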

Thanks again!
 
