FreeNAS virtIO/PCI passthrough LACP

Xilmen

Active Member
Mar 30, 2017
Hello,

I'm currently virtualizing my FreeNAS as a VM on my Proxmox host. However, I have a problem.

These are the tests I did.

On Proxmox:
For Debian, LACP works properly (virtIO network, bonding inside the VM).
For FreeNAS, LACP does not work (virtIO network, lagg inside the VM).

Physical machine:
The same test on a physical machine (same hardware, same switch): FreeNAS works with LACP.

From this I deduced that virtIO on FreeNAS (11.1-U6) does not work with LACP (802.3ad).
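For reference, the lagg on the FreeNAS side was set up roughly like this (a sketch, not my exact config; vtnet0/vtnet1 are the names FreeBSD gives the virtIO NICs):

ifconfig vtnet0 up
ifconfig vtnet1 up
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport vtnet0 laggport vtnet1 up
ifconfig lagg0 inet 192.168.100.152 netmask 255.255.255.0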

To work around this, I would like to use the PCI passthrough feature (https://pve.proxmox.com/wiki/Pci_passthrough, "Determine your PCI card address, and configure your VM").

When I run lspci I find my cards:
02:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
02:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)

I added the following line to my VM configuration (/etc/pve/qemu-server/VMID.conf):

hostpci0: 02:00
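As far as I understand the wiki, 02:00 without a function number passes through all functions of the device at once; passing the two ports individually would look like this instead:

hostpci0: 02:00.0
hostpci1: 02:00.1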

I get the following error at startup:
Cannot open iommu_group: No such file or directory

In the syslog:
<root@pam> end task UPID:SRVNAME:1234419F:1593D2B1:5AA691F2:qmstart:100:root@pam: Cannot open iommu_group: No such file or directory

Do you have any ideas?
Sorry for my English!
Thanks!
 
For FreeNAS, LACP does not work (virtIO network, lagg inside the VM).
Why would you make the bond in the VM?
Why not make the bond on the host?
Cannot open iommu_group: No such file or directory
Did you enable the IOMMU in the BIOS/UEFI, and did you set the kernel parameters?
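For an Intel CPU that usually means something like the following (a sketch of the usual steps from the wiki page you linked; for AMD the parameter is amd_iommu=on):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# regenerate the boot config and reboot:
update-grub

# /etc/modules -- load the vfio modules at boot:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# after the reboot, verify the IOMMU is active:
dmesg | grep -e DMAR -e IOMMU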
 
I tried mapping the bond from my host into my FreeNAS VM; here is the performance.

root@freenas:~ # iperf -c 192.168.100.151 -P 4
------------------------------------------------------------
Client connecting to 192.168.100.151, TCP port 5001
TCP window size: 32.8 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.100.152 port 62773 connected with 192.168.100.151 port 5001
[ 5] local 192.168.100.152 port 36050 connected with 192.168.100.151 port 5001
[ 3] local 192.168.100.152 port 59735 connected with 192.168.100.151 port 5001
[ 6] local 192.168.100.152 port 32347 connected with 192.168.100.151 port 5001
[ ID] Interval Transfer Bandwidth
[ 6] 0.0-10.0 sec 297 MBytes 249 Mbits/sec
[ 4] 0.0-10.0 sec 249 MBytes 209 Mbits/sec
[ 5] 0.0-10.0 sec 305 MBytes 256 Mbits/sec
[ 3] 0.0-10.0 sec 263 MBytes 220 Mbits/sec
[SUM] 0.0-10.0 sec 1.09 GBytes 932 Mbits/sec
root@freenas:~ # iperf -c 192.168.100.151
------------------------------------------------------------
Client connecting to 192.168.100.151, TCP port 5001
TCP window size: 32.8 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.100.152 port 36388 connected with 192.168.100.151 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.08 GBytes 929 Mbits/sec

The same test on the Debian VM (no bond from the host):

root@vm-docker:~# iperf -c 192.168.100.151 -P 4
------------------------------------------------------------
Client connecting to 192.168.100.151, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 6] local 192.168.100.3 port 44578 connected with 192.168.100.151 port 5001
[ 4] local 192.168.100.3 port 44572 connected with 192.168.100.151 port 5001
[ 5] local 192.168.100.3 port 44574 connected with 192.168.100.151 port 5001
[ 3] local 192.168.100.3 port 44576 connected with 192.168.100.151 port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 312 MBytes 261 Mbits/sec
[ 6] 0.0-10.0 sec 450 MBytes 377 Mbits/sec
[ 3] 0.0-10.0 sec 664 MBytes 557 Mbits/sec
[ 4] 0.0-10.0 sec 430 MBytes 360 Mbits/sec
[SUM] 0.0-10.0 sec 1.81 GBytes 1.55 Gbits/sec
root@vm-docker:~# iperf -c 192.168.100.151
------------------------------------------------------------
Client connecting to 192.168.100.151, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.100.3 port 44580 connected with 192.168.100.151 port 5001
write failed: Connection reset by peer
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 0.0 sec 106 KBytes 1.83 Gbits/sec

My network configuration (/etc/network/interfaces):

auto lo
iface lo inet loopback

iface enp3s0 inet manual

iface enp2s0f0 inet manual

iface enp2s0f1 inet manual

auto bond0
iface bond0 inet manual
slaves enp2s0f0 enp2s0f1
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer2

auto vmbr0
iface vmbr0 inet static
address 192.168.2.152
netmask 255.255.255.0
gateway 192.168.2.254
bridge_ports enp3s0
bridge_stp off
bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
bridge_ports bond0
bridge_stp off
bridge_fd 0
 
And what is the problem? You get 1 Gbit on an LACP layer2 bond with iperf?
LACP uses one link per flow, so in this case this is normal:
the source and destination MAC addresses stay the same, so the LACP hash algorithm always selects the same link.
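If you have several flows to different hosts or ports, you can at least let the bond spread them over both links by hashing on layer3+4 instead of layer2 (a sketch for the host's /etc/network/interfaces; a single flow is still limited to one link):

auto bond0
iface bond0 inet manual
slaves enp2s0f0 enp2s0f1
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer3+4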
 
Ok, thanks for this information.
What is the best practice for my problem?
 
What is the best practice for my problem?
I guess your problem is that you would like to get more than 1 Gbit in iperf?

If so, there is no best practice for pushing a single application flow past the speed of one interface when the source and destination stay the same.
 
I do not understand why you proposed making the bond on the host in this case. Can you explain?
 
There is no way to get the speed of two NICs with a single application flow as long as the source and destination stay the same.
This is no different with FreeNAS.
 
