Set up an additional host-only network

Darkproduct
May 26, 2019
Hey proxmox community,

I use Proxmox on my home server and have a simple setup at the moment:
  • Proxmox host with ZFS
    • VM: Ubuntu 18.04 with OpenVPN and Samba
The host shares parts of the ZFS rpool with the VM over NFS, and I then use Samba on the VM to share it over OpenVPN.
I tested my write speed from the host and the VM and there was a massive difference:
  • host: 2.5 GB/s
  • VM: 96.1 MB/s
I checked my networking and I think I found the main problem: all communication between the host and the VM goes through my router. I think this could be solved with a host-only network, i.e. a purely virtual bridge.

I created a new bridge on the host system: /etc/network/interfaces
Code:
auto lo
iface lo inet loopback

iface enp3s0 inet manual

#bridged network card
auto vmbr0
iface vmbr0 inet static
        address 192.168.122.15
        netmask 255.255.255.0
        gateway 192.168.122.1
        bridge_ports enp3s0
        bridge_stp off
        bridge_fd 0

#direct host connection
auto vmbr1
iface vmbr1 inet static
        address 192.168.132.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

iface enp6s0 inet manual

I added the new network device to my VM in the Proxmox hardware tab.
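
For reference, the same steps can also be done from the shell; a minimal sketch, assuming ifupdown2 is installed and the VM has ID 100 (both are assumptions, adjust to your setup):
Code:
# reload the network configuration so the new vmbr1 bridge comes up (ifupdown2)
ifreload -a
# attach a second VirtIO NIC bound to the host-only bridge to VM 100
qm set 100 --net1 virtio,bridge=vmbr1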

Added the new interface on the VM: /etc/netplan/50-cloud-init.yaml
Code:
network:
    ethernets:
        ens18:
            addresses: []
            dhcp4: true
        ens19:
            addresses: [192.168.132.18/24]
            dhcp4: no
    version: 2
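
After editing the file, the new address can be applied and checked; a minimal sketch:
Code:
sudo netplan apply          # apply the new ens19 configuration
ip -br addr show ens19      # should now list 192.168.132.18/24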

I'm not 100% sure, but I think my routing tables look fine:
On host:
Code:
default via 192.168.122.1 dev vmbr0 onlink
192.168.122.0/24 dev vmbr0 proto kernel scope link src 192.168.122.15
192.168.132.0/24 dev vmbr1 proto kernel scope link src 192.168.132.1

On VM:
Code:
default via 192.168.122.1 dev ens18 proto dhcp src 192.168.122.18 metric 100
10.8.0.0/24 via 10.8.0.2 dev tun0
10.8.0.2 dev tun0 proto kernel scope link src 10.8.0.1
192.168.122.0/24 dev ens18 proto kernel scope link src 192.168.122.18
192.168.122.1 dev ens18 proto dhcp scope link src 192.168.122.18 metric 100
192.168.132.0/24 dev ens19 proto kernel scope link src 192.168.132.18

But I can't ping from the host to the VM, nor from the VM to the host. I'm new to networking and not a Linux guru, so I'm not sure what's missing here. Does anyone have an idea?
 
I found something that worked: https://blog.jenningsga.com/private-network-with-proxmox/

I added these lines to vmbr1 in /etc/network/interfaces:
Code:
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up   iptables -t nat -A POSTROUTING -s '192.168.132.0/24' -o vmbr0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '192.168.132.0/24' -o vmbr0 -j MASQUERADE

But I'm not 100% sure why this is so important.

Unfortunately, this fix only improved my write speed from 96.1 MB/s to 133 MB/s, so I have to keep searching. I know host NFS -> VM is not great, but I want my files to be directly on ZFS with snapshots and available to other containers.
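
One idea in that direction: keep the data on ZFS on the host and let the VM mount the NFS export over the host-only bridge (192.168.132.x), so the NFS traffic never touches the router. A rough sketch, where the export path /rpool/data is only a placeholder:
Code:
# on the host: /etc/exports - limit the export to the host-only subnet
/rpool/data 192.168.132.0/24(rw,sync,no_subtree_check)

# on the host: reload the export table
exportfs -ra

# on the VM: mount over vmbr1 instead of the routed network
mount -t nfs 192.168.132.1:/rpool/data /mnt/share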
 
Hi Darkproduct

For troubleshooting, disable the firewall on both the VM and PVE.

Make sure the bridge created has a common GW.

Make sure the subnet is the same on the VM and PVE

Make sure there is only 1 GW used for both VM and PVE

All of the above can be done in PVE GUI/ web interface.

Any internal traffic from the PVE host to a VM is going to be fastest because it doesn’t need to leave PVE; it stays on the internal bridge.

Any traffic over VPN is going to be slow because it needs to traverse 1 or more networks.

Why are you using OpenVPN? Are you trying to share files over the Internet?

More information on your goal and the reasoning behind it would help work out the best setup.

“”Cheers
G
 
Hi Darkproduct

For troubleshooting disable the firewall on both the VM and PVE.

Make sure the bridge created has a common GW.

Make sure the subnet is the same on the VM and PVE

Make sure there is only 1 GW used for both VM and PVE

All of the above can be done in PVE GUI/ web interface.

I did all of that. Maybe it's because I didn't restart the PVE. Not sure. But this problem is solved, even if it didn't have the effect I wanted.
Any internal traffic from the PVE host to a VM is going to be fastest because it doesn’t need to leave PVE; it stays on the internal bridge.
I know. That's why I'm doing it now. At first I wasn't sure whether PVE would automatically detect traffic from the VM to itself and do some internal magic, but now I know.

Any traffic over VPN is going to be slow because it needs to traverse 1 or more networks.
Not sure why. I hoped NFS would be faster so I wouldn't have any problems there, but I haven't tweaked it yet. I can still increase the number of daemons, the packet size and so on. We'll see how much performance I'll get from this. If it's not enough, I'm still open to trying a different strategy.
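
The knobs mentioned above look roughly like this; a sketch only, the paths and values are examples, not recommendations:
Code:
# on the NFS server (Debian/Ubuntu): /etc/default/nfs-kernel-server
RPCNFSDCOUNT=16        # number of nfsd threads

# on the client (/etc/fstab): larger read/write block sizes
192.168.132.1:/rpool/data /mnt/share nfs rsize=1048576,wsize=1048576,noatime 0 0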

Why are you using OpenVPN? Are you trying to share files over the Internet?
  • Because I would like to connect to my server from anywhere.
  • I want to share my files with family and friends.
  • Use ownCloud and maybe Plex, but hide them behind my VPN for better security.
  • Route all my mobile traffic over it (more secure, no tracking, and so on).

If someone has a way better setup to achieve something like this, or a better way to share raw data between PVE and a VM, let me know.
 
What’s the MTU on your VPN tunnel?
It needs to be less than 1450 because packet encapsulation space is needed.
It may even need to be lower depending on your ISP.

Make sure the MTU is set correctly on both ends of the tunnel.

When the MTU is set incorrectly, packets become fragmented and overall performance is lost as TCP packets need to be resent.
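
A quick way to verify this is to find the largest payload that passes without fragmentation and size the tunnel accordingly; a sketch, where the target host and the values are placeholders:
Code:
# find the largest unfragmented payload; payload + 28 bytes of IP/ICMP headers = path MTU
ping -M do -s 1400 vpn.example.com

# OpenVPN config (both ends): cap the tunnel MTU and clamp the TCP MSS
tun-mtu 1400
mssfix 1360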

Hope the above helps with troubleshooting.

“”Cheers
G
 
Hi Darkproduct

just had another idea.

Get rid of the VPN, put ownCloud on port 443, and use the firewall to lock down the allowed external IP addresses.

This way only the family IPs that are whitelisted on the firewall will have access to your ownCloud.

This is a simpler setup with fewer links in the chain.

Then you don't need to worry about MTU, as it will automatically be adjusted by the user's connection.

Just a thought to simplify the design of the setup; it would also increase internet throughput.
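
A sketch of such a whitelist with plain iptables (the addresses are placeholders, and order matters: accepts before the final drop):
Code:
# allow whitelisted family IPs to reach ownCloud on 443
iptables -A INPUT -p tcp --dport 443 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -s 203.0.113.20 -j ACCEPT
# drop everyone else on 443
iptables -A INPUT -p tcp --dport 443 -j DROP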

""Cheers
G
 
I would also like to use inter-VM routing inside the Proxmox host.
I would like to assign two network interfaces to my VMs.

vmbr0 for "normal" LAN traffic
vmbr1 for VM to VM traffic

Would this be possible with the above scenario/howto?

Thx
 
Yes it’s possible.

Are you thinking 1 Internet-facing + 1 internal-facing?

If you have the bridge networks configured on the host, you can just create 2 NICs on the VM and map them to the appropriate bridge network.

it’s documented in the admin guide how to create a bridge.

https://forum.proxmox.com/threads/setting-up-a-bridge.65379/

“”Cheers
G
 
thx velocity08.

What network speed could I expect on a modern board/CPU for the internal VM bridge?
More than Darkproduct's 133 MB/s?? ;-)
 
Network speed will be almost native.

Darkproduct is talking about read/write speed to his ZFS pool, if I'm reading his post correctly.

Network is about bandwidth: with a 1 Gbit NIC you should get roughly 115-125 MB/s across the network (1 Gbps ÷ 8 ≈ 125 MB/s theoretical, minus protocol overhead).

The bridge is just an extension of the native host NIC; nothing should change here.

If we are talking about read/write performance, then it's a mix of whether your network has a bottleneck and what the network-attached storage can deliver, i.e. whether it can saturate the network connection.
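
To separate the two, it can help to measure raw network bandwidth between host and VM first, e.g. with iperf3 over the internal bridge (the address below is the host-only address from earlier in the thread):
Code:
# on the PVE host
iperf3 -s

# on the VM: measure bandwidth across vmbr1
iperf3 -c 192.168.132.1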

let me know if anything is unclear

""Cheers
G
 
I guess I totally misunderstood this "host-only network" ;-)
I thought this is a virtual network on the Proxmox server itself....
so no traffic "leaves" the Proxmox host.

But you tell me that traffic leaves the host.
My purpose was a virtual network inside the host with no NIC limitations.
I'm confused ;-)
 

The bridge is just an extension of the native host NIC; nothing should change here.

Sorry, let me clarify.

For internal comms to local VMs there will be no loss of performance, as it's just an internal bridge; nothing leaves the host or even the NIC, so you should get native I/O speed on the host, which will be greater than the NIC bandwidth.

It's just a bridge device, so performance will be native.

We use a lot of iSCSI storage, so reads/writes still need to go somewhere over a network to the storage; this is where the bandwidth of the NIC comes into play for the storage network. That has nothing to do with the bridge, it's a different bottleneck.

The only reason I mentioned it was that I can't see anywhere in Darkproduct's posts where they talk about network performance being degraded, only about read/write performance.

Maybe I've missed something; feel free to point me in the right direction or to expand.

""Cheers
G
 
Hi,
thx for the help!!!!

My use case:
I run a TrueNAS VM with a PCI passthrough storage array on my new Proxmox node.

I would like to route all my "storage related" VM traffic inside this host-only network.
That's why I wrote that I need two bridges:
one for storage-related traffic from the VMs to the storage VM, and the other one for regular traffic to clients or the Internet.


My first attempts don't work well.
I only get about 100 MB/s inside the host-only network and I don't know why.


Maybe I should try...

post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '192.168.132.0/24' -o vmbr0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '192.168.132.0/24' -o vmbr0 -j MASQUERADE
...these routing rules next.
But I clearly don't understand why this rule could be important in terms of speed.

I can communicate inside the new bridge-only network;
otherwise I couldn't make a measurement.
 
I guess I figured it out... for the moment ;-)
All my VMs were configured with the Intel E1000 NIC.
ethtool told me that this is a 1 Gbit card ;-)

So I switched to VirtIO as the network driver.
Now I measure 35 Gbit/s from VM to VM :)

Looks great so far.
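
For reference, the change boils down to switching the NIC model in the VM config; a sketch, where the VM ID and MAC address are placeholders:
Code:
# /etc/pve/qemu-server/100.conf - before (emulated Intel 1 GbE NIC)
net1: e1000=DE:AD:BE:EF:00:01,bridge=vmbr1
# after (paravirtualized VirtIO NIC)
net1: virtio=DE:AD:BE:EF:00:01,bridge=vmbr1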
 
Good find!
 
