Network config with VLANs - OVS or Linux Bridge?

sirfragalot

New Member
Feb 19, 2019
Hi,

Any advice on the following gratefully received!

Background:
I have a Fortigate firewall handling all my VLANs, off which I have a FortiSwitch connected via a 4-port 802.3ad aggregated FortiLink (trunk). The following VLANs are configured:

Code:
MGMT VLAN (2)
Address: 192.168.1.1/24
Hosts/Net: 254

VLAN 5 (VoIP)
Address: 10.32.10.1/29
Hosts/Net: 6

VLAN 10 (Wired - Untrusted)
Address: 10.32.10.33/27
Hosts/Net: 30

VLAN 15 (Wireless - Untrusted)
Address: 10.32.10.65/26
Hosts/Net: 62

VLAN 20 (Wired - Trusted)
Address: 10.32.10.129/26
Hosts/Net: 62

VLAN 25 (Wireless - Trusted)
Address: 10.32.10.193/26
Hosts/Net: 62

I have my Proxmox server connected to the FortiSwitch, and the port is configured with native VLAN 2 and allowed VLANs 10 and 20.

Currently the networking within Proxmox is a Linux bridge (vmbr0), to which I have assigned the address 192.168.1.3.

I have set up a few VMs with their NICs on vmbr0 and VLAN tag 10 (using the VirtIO driver). All is good: the VMs get IP addresses from the Fortigate, and with policies set up between VLANs on the FG I can communicate between hosts on VLANs 25 and 20 and the VMs on VLAN 10.
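(For reference, the NIC line that ends up in each VM's config under /etc/pve/qemu-server/ looks something like the below; the VMID and MAC here are just placeholders.)

Code:
# /etc/pve/qemu-server/102.conf  (VMID and MAC are placeholders)
net0: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0,tag=10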

The plot thickens...
I am also using the Proxmox server as a fileserver using NFS and SAMBA.

I can mount the NFS share from the Proxmox server (192.168.1.3) on all the devices I need to within my network, and I can saturate the current 1 Gb connection (113 MB/s). Happy days.
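(On the clients this is just a plain NFS mount, roughly as below; the export path is a guess based on where the share lives on the host.)

Code:
# client side; export path assumed to be /datapool/share on the host
mkdir -p /mnt/share
mount -t nfs 192.168.1.3:/datapool/share /mnt/share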

However, I would now like to improve the performance between the VMs (VLAN 10) and the host (VLAN 2). I am aware that, as the Fortigate is managing the VLANs, inter-VLAN traffic must route via the Fortigate and I need policies to allow it - standard stuff. Logically, traffic will therefore go Host > Switch > Fortigate > Switch > Host > VM, theoretically halving my available bandwidth...

I have done some tests, mounting the NFS share on one of the VMs and doing some basic read/write runs (which kind of disproves my halving theory); however, I am still not getting 1 Gb/s speeds, and it is also moving packets around my network unnecessarily.

Code:
[media@ds-node-2 downloads]$ dd if=/dev/zero of=testfile bs=100k count=1k
1024+0 records in
1024+0 records out
104857600 bytes (105 MB) copied, 2.10455 s, 49.8 MB/s

[media@ds-node-2 downloads]$ dd if=/dev/zero of=testfile bs=100k count=10k
10240+0 records in
10240+0 records out
1048576000 bytes (1.0 GB) copied, 13.7751 s, 76.1 MB/s

[media@ds-node-2 downloads]$ dd if=/dev/zero of=testfile bs=100k count=100k
102400+0 records in
102400+0 records out
10485760000 bytes (10 GB) copied, 117.052 s, 89.6 MB/s

[media@ds-node-2 downloads]$ dd if=/dev/zero of=testfile bs=1M count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 116.847 s, 91.9 MB/s
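(To separate the network path from disk/NFS overhead, a raw throughput test with something like iperf3 between a VM and the host would probably be more telling, assuming it's installed on both ends:)

Code:
# on the Proxmox host (192.168.1.3)
iperf3 -s

# on one of the VMs, a 30-second test towards the host
iperf3 -c 192.168.1.3 -t 30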

In comparison, and for reference, I have done some local testing on the Proxmox host itself and get considerably more performance:

Code:
root@pve:/datapool/share/downloads# dd if=/dev/zero of=testfile bs=1M count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 2.10789 s, 5.1 GB/s

root@pve:/datapool/share/downloads# dd if=/dev/zero of=testfile bs=1M count=100k
102400+0 records in
102400+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 21.4082 s, 5.0 GB/s

root@pve:/datapool/share/downloads# dd if=/dev/zero of=testfile bs=1M count=100k
102400+0 records in
102400+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 22.2531 s, 4.8 GB/s

And a Read test for good measure:
Code:
root@pve:/datapool/share/downloads# dd if=testfile of=/dev/null bs=1M count=100k
102400+0 records in
102400+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 8.27409 s, 13.0 GB/s

With this in mind I would like to improve the performance between the host and the VMs (and vice versa), so I was thinking...

What if I were to create a virtual interface on the Proxmox host for each of the VLANs (10 & 20), assign the host an IP on each, and then create a bridge (and an OVSIntPort for each??)? Then I could access the NFS/SAMBA shares on the host from the VMs via directly connected VLANs, so packets would never have to route via the Fortigate, and performance should theoretically be in the same sort of range as the local figures above (assuming the OVS / Linux bridge itself is good for 10 Gb/s or so).
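For reference, my (possibly wrong) understanding is that the Open vSwitch version of this would look roughly like the snippet below in /etc/network/interfaces, with an OVSIntPort per VLAN (it needs the openvswitch-switch package; the address is just one I'd pick, and VLAN 20 would follow the same pattern):

Code:
auto enp0s31f6
iface enp0s31f6 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports enp0s31f6 vlan10

# internal port giving the host an address on VLAN 10
auto vlan10
iface vlan10 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=10
        address 10.32.10.34
        netmask 255.255.255.224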

Is that a reasonable assumption? Or am I way off the mark?

If so, how do I achieve this? How do I configure this?

Thanks in advance for any help, and I apologise for the lengthy post!
 
And here's the output of 'ip a' on the Proxmox host, FWIW:

Code:
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether e0:d5:5e:b1:dd:c2 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e0:d5:5e:b1:dd:c2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.3/24 brd 192.168.1.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::e2d5:5eff:feb1:ddc2/64 scope link
       valid_lft forever preferred_lft forever
5: vmbr0v10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e0:d5:5e:b1:dd:c2 brd ff:ff:ff:ff:ff:ff
6: enp0s31f6.10@enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v10 state UP group default qlen 1000
    link/ether e0:d5:5e:b1:dd:c2 brd ff:ff:ff:ff:ff:ff
9: tap102i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0v10 state UNKNOWN group default qlen 1000
    link/ether 42:4b:b6:f1:b1:ba brd ff:ff:ff:ff:ff:ff
10: tap103i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0v10 state UNKNOWN group default qlen 1000
    link/ether 02:36:19:a6:8d:8c brd ff:ff:ff:ff:ff:ff
11: tap104i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0v10 state UNKNOWN group default qlen 1000
    link/ether 76:86:9d:e1:68:b5 brd ff:ff:ff:ff:ff:ff

Both enp0s31f6 & enp0s31f6.10 are showing as 1Gb connections...

And I assume the 'tap10xi0' interfaces are pseudo 'management' interfaces for the 3 running VMs, as their link speed is only 10 Mb/s.
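(For anyone checking the same, ethtool on the host reports the link speed per interface, e.g.:)

Code:
root@pve:~# ethtool enp0s31f6 | grep -i speed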
 
Hi
Is that a reasonable assumption? Or am I way off the mark?
Yes, but you do not need Open vSwitch for that; a Linux bridge can do it.

If so, how do I achieve this? How do I configure this?
Code:
iface enp0s31f6 inet manual

# VLAN 10 dev
auto enp0s31f6.10
iface enp0s31f6.10 manual

auto vmbr10
iface vmbr10 inet static
        address 10.32.10.34
        netmask  255.255.255.224
        bridge-ports enp0s31f6.10
        bridge-stp off
        bridge-fd 0
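A bridge for VLAN 20 follows exactly the same pattern, for example (the host address below is only an example from that subnet, pick a free one), and you can bring the new interfaces up with ifup or by restarting networking:

Code:
# VLAN 20 dev
auto enp0s31f6.20
iface enp0s31f6.20 inet manual

auto vmbr20
iface vmbr20 inet static
        address 10.32.10.131
        netmask 255.255.255.192
        bridge-ports enp0s31f6.20
        bridge-stp off
        bridge-fd 0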
 
Thank you! I assume this goes under my existing config in /etc/network/interfaces?

I think the line 'iface enp0s31f6 inet manual' already exists... so do I copy and paste the above as-is, or just everything from '# VLAN 10 dev' down?
 
Sorted... a minor fix to the config you posted:

iface enp0s31f6.10 manual

should be:

iface enp0s31f6.10 inet manual

Without the 'inet' the networking service wouldn't start.
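For completeness, the relevant part of my /etc/network/interfaces now looks roughly like this (existing vmbr0 stanza plus the new VLAN 10 bridge; the gateway is the Fortigate's MGMT address):

Code:
auto lo
iface lo inet loopback

iface enp0s31f6 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.3
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge-ports enp0s31f6
        bridge-stp off
        bridge-fd 0

# VLAN 10 dev
auto enp0s31f6.10
iface enp0s31f6.10 inet manual

auto vmbr10
iface vmbr10 inet static
        address 10.32.10.34
        netmask 255.255.255.224
        bridge-ports enp0s31f6.10
        bridge-stp off
        bridge-fd 0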


But what a difference that has made!
Code:
[media@ds-node-2 downloads]$ dd if=/dev/zero of=testfile bs=1M count=1k
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 6.27575 s, 171 MB/s

And with zfs sync set to disabled:
Code:
[media@ds-node-2 downloads]$ dd if=/dev/zero of=testfile bs=1M count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 5.94149 s, 1.8 GB/s

[media@ds-node-2 downloads]$ dd if=/dev/zero of=testfile bs=1M count=100k
102400+0 records in
102400+0 records out
107374182400 bytes (107 GB) copied, 61.4781 s, 1.7 GB/s

Happy days!
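(For reference, sync was disabled with something like the below - dataset name taken from my mount paths - with the usual caveat that sync=disabled trades crash/power-loss safety for speed.)

Code:
# disable synchronous writes on the share dataset (risky on power loss)
zfs set sync=disabled datapool/share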

Thank you for the help.
 
I have a similar config to you (an external OPNsense firewall with multiple VLANs) and I initially did my setup with Open vSwitch. I'd like to experiment with Linux bridges / VLANs but am having trouble finding good examples to work from.

Is there any chance you could share your full interfaces file here? I'd bet it would give me a great head-start on moving off of OpenvSwitch!