Hi,
Any advice on the following gratefully received!
Background:
I have a FortiGate firewall handling all my VLANs, off which I have a FortiSwitch connected via a 4-port 802.3ad (LACP) aggregated FortiLink trunk. The following VLANs are configured:
Code:
VLAN 2  (MGMT)                 Address: 192.168.1.1/24    Hosts/Net: 254
VLAN 5  (VoIP)                 Address: 10.32.10.1/29     Hosts/Net: 6
VLAN 10 (Wired - Untrusted)    Address: 10.32.10.33/27    Hosts/Net: 30
VLAN 15 (Wireless - Untrusted) Address: 10.32.10.65/26    Hosts/Net: 62
VLAN 20 (Wired - Trusted)      Address: 10.32.10.129/26   Hosts/Net: 62
VLAN 25 (Wireless - Trusted)   Address: 10.32.10.193/26   Hosts/Net: 62
My Proxmox server is connected to the FortiSwitch, and the port is configured with native VLAN 2 and allowed VLANs 10 and 20.
Currently the networking within Proxmox is a single Linux bridge (vmbr0), with the host assigned the address 192.168.1.3.
I have set up a few VMs, each with a NIC on vmbr0 with VLAN tag 10 (using the VirtIO driver). All is good: the VMs get IP addresses from the FortiGate, and with policies set up between the VLANs on the FG I can communicate between hosts on VLANs 25 and 20 and the VMs on VLAN 10.
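For reference, each VM NIC looks roughly like this in its VM config (the MAC address here is made up):
Code:
# /etc/pve/qemu-server/<vmid>.conf
net0: virtio=DE:AD:BE:EF:00:10,bridge=vmbr0,tag=10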
The plot thickens...
I am also using the Proxmox server as a file server via NFS and Samba.
I can mount the NFS share from the Proxmox server (192.168.1.3) on all the devices I need within my network, and I can saturate the current 1 Gb/s connection (~113 MB/s). Happy days.
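(For context, a typical client mount looks like this, with the export path guessed from the directories shown in the tests below:)
Code:
mount -t nfs 192.168.1.3:/datapool/share /mnt/share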
However, now I would like to improve the performance between the VMs (VLAN 10) and the host (VLAN 2). I am aware that as the FortiGate is managing the VLANs, inter-VLAN traffic must route via the FortiGate, and I need policies to allow it; standard stuff. Logically speaking, traffic will go Host > Switch > FortiGate > Switch > Host > VM, crossing the host's single physical link twice and theoretically halving my available bandwidth...
I have done some tests, mounting the NFS share on one of the VMs and doing a basic read/write. The numbers (below) somewhat disprove the halving theory, but I am still not getting full 1 Gb/s speeds, and it is moving packets around my network unnecessarily.
Code:
[media@ds-node-2 downloads]$ dd if=/dev/zero of=testfile bs=100k count=1k
1024+0 records in
1024+0 records out
104857600 bytes (105 MB) copied, 2.10455 s, 49.8 MB/s
[media@ds-node-2 downloads]$ dd if=/dev/zero of=testfile bs=100k count=10k
10240+0 records in
10240+0 records out
1048576000 bytes (1.0 GB) copied, 13.7751 s, 76.1 MB/s
[media@ds-node-2 downloads]$ dd if=/dev/zero of=testfile bs=100k count=100k
102400+0 records in
102400+0 records out
10485760000 bytes (10 GB) copied, 117.052 s, 89.6 MB/s
[media@ds-node-2 downloads]$ dd if=/dev/zero of=testfile bs=1M count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 116.847 s, 91.9 MB/s
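(To separate network throughput from NFS and disk effects, a pure network test between the VM and the host would probably be more telling; a minimal sketch using iperf3, if installed:)
Code:
# on the Proxmox host (192.168.1.3):
iperf3 -s

# on the VM, in VLAN 10:
iperf3 -c 192.168.1.3 -t 30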
In comparison, and for reference, I have done some local testing on the Proxmox host itself and get considerably more performance:
Code:
root@pve:/datapool/share/downloads# dd if=/dev/zero of=testfile bs=1M count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 2.10789 s, 5.1 GB/s
root@pve:/datapool/share/downloads# dd if=/dev/zero of=testfile bs=1M count=100k
102400+0 records in
102400+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 21.4082 s, 5.0 GB/s
root@pve:/datapool/share/downloads# dd if=/dev/zero of=testfile bs=1M count=100k
102400+0 records in
102400+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 22.2531 s, 4.8 GB/s
And a Read test for good measure:
Code:
root@pve:/datapool/share/downloads# dd if=testfile of=/dev/null bs=1M count=100k
102400+0 records in
102400+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 8.27409 s, 13.0 GB/s
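(I realise dd against /dev/zero with the page cache and ZFS ARC in play will flatter these local numbers; a write test with conv=fdatasync, which forces the data to disk before reporting, would be a fairer baseline:)
Code:
dd if=/dev/zero of=testfile bs=1M count=10k conv=fdatasync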
With this in mind I would like to improve the performance between the host and the VMs (and vice versa), so I was thinking...
What if I were to create a VLAN interface on the Proxmox host for each of the VLANs (10 & 20), assigning the host an IP on each (either as Linux VLAN sub-interfaces on the bridge, or as an OVSIntPort per VLAN if I move to Open vSwitch)? Then I could access the NFS/Samba shares on the host from the VMs via directly connected VLANs, and packets would never have to route via the FortiGate, so performance should theoretically approach the figures I am achieving in my local testing above (assuming the Linux bridge / OVS datapath is good for 10 Gb/s or more). Something like the sketch below is what I have in mind.
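(A rough /etc/network/interfaces sketch of the idea, using a VLAN-aware Linux bridge; the physical NIC name eno1 and the .35/.131 host addresses are placeholders picked from the free ranges above:)
Code:
auto eno1
iface eno1 inet manual

# VLAN-aware bridge; host management stays on the untagged/native VLAN 2
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.3/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2,10,20

# Host IP inside VLAN 10, directly reachable by the VMs (no gateway here)
auto vmbr0.10
iface vmbr0.10 inet static
        address 10.32.10.35/27

# Host IP inside VLAN 20
auto vmbr0.20
iface vmbr0.20 inet static
        address 10.32.10.131/26
The VMs would then mount the shares from 10.32.10.35 instead of 192.168.1.3, keeping the NFS/Samba traffic entirely inside the bridge.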
Is that a reasonable assumption? Or am I way off the mark?
If it is reasonable, what is the best way to configure it?
Thanks in advance for any help, and I apologise for the lengthy post!