Poor virtio network performance on FreeBSD guests

Alexey Tarasov

Renowned Member
Mar 1, 2016
Hi all!

I am using the latest Proxmox 4.1 with all updates installed.
I have several VMs with FreeBSD guests and one VM with Ubuntu 14 (all KVM).
Host system file download speed: 60 MBps.
FreeBSD guest download speed: 2 MBps on virtio network with TSO enabled, 5-9 MBps with TSO disabled; 12 MBps on e1000 network.
Ubuntu guest: 60 MBps with virtio.

I've tried the following:
1) Different FreeBSD versions: 9.3, 10.2, 10.3-BETA3.
2) Different TSO settings, enabling/disabling RXCSUM.
3) Different TSO settings on the host system.

The best results I got are described above :(

Does anyone have any ideas on how to get full network performance inside FreeBSD guests?
 
What do you mean by TSO? Why not just stick with the e1000 adapter? It looks like you get 12 MBps with that instead of VirtIO, correct?
What OS did you select when you created the VM?
 
1) All the advice regarding network performance with virtio is to turn off hardware TSO (TCP segmentation offload); see the sketch after this list.
2) I use e1000 now, but that also gives just 1/5 of full performance. Also, e1000 is a driver with large overhead; VirtIO was designed to eliminate that, so it should be the better choice, right?
3) I select Other.
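For reference, here is a sketch of toggling these offloads at runtime, assuming the guest interface is vtnet0 (adjust the unit number to your setup):
Code:
# disable checksum, TCP segmentation and large receive offload
ifconfig vtnet0 -rxcsum -txcsum -tso -lro
# to persist across reboots, the same flags can go into /etc/rc.conf, e.g.:
# ifconfig_vtnet0="DHCP -rxcsum -txcsum -tso -lro"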
 
The lines you indicated just load the VirtIO kernel modules.
I've disabled checksum offload by adding this to /boot/loader.conf:
Code:
hw.vtnet.X.csum_disable=1
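For completeness, vtnet(4) documents separate loader tunables for each offload feature. A sketch of a fuller /boot/loader.conf, assuming you want everything off (X is the adapter unit number; the variants without .X apply to all vtnet devices):
Code:
# disable checksum offload for all vtnet devices
hw.vtnet.csum_disable=1
# also disable TCP segmentation offload and large receive offload
hw.vtnet.tso_disable=1
hw.vtnet.lro_disable=1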
 
I have pfSense 2.2.6-RELEASE (amd64) (maybe the fact that it is 64-bit is important?), based on FreeBSD 10.1-RELEASE-p25. I've only set the checksum offload option in the guest, nothing on the host; host and guest communicate through vmbr0 (which is my "lan" interface on pfSense).
I've installed iperf, running it as a server in pfSense and as a client on the Proxmox host:
Code:
root@proxmox:~# iperf -c 192.168.1.253
------------------------------------------------------------
Client connecting to 192.168.1.253, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.9 port 57394 connected with 192.168.1.253 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.06 GBytes  912 Mbits/sec
root@proxmox:~#
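Since it is the download direction that seems slow, it may be worth testing the reverse path as well; a sketch using iperf2's bidirectional option (same addresses as above):
Code:
# run the test in both directions sequentially (-r), client on the Proxmox host
iperf -c 192.168.1.253 -r
# alternatively, swap roles: iperf -s on the Proxmox host,
# then iperf -c 192.168.1.9 from the pfSense shell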

the VM is configured like this:
Code:
root@proxmox:~# qm config 108
boot: c
bootdisk: virtio0
cores: 2
description: vmbr0 192.168.1.253
ide2: none,media=cdrom
memory: 512
name: pfsense
net0: virtio=96:F5:F2:95:7A:3D,bridge=vmbr0
net1: virtio=3E:53:3B:1D:AB:CD,bridge=vmbr4
net2: virtio=06:0E:E3:E5:58:70,bridge=vmbr5
net3: virtio=96:2E:BB:F6:2E:F1,bridge=vmbr6
net4: virtio=36:37:62:31:65:32,bridge=vmbr3
numa: 0
onboot: 1
ostype: other
smbios1: uuid=9fe41a13-39d6-47e8-86bb-bb72d1f509e6
sockets: 1
startup: order=1
tablet: 0
virtio0: local:108/vm-108-disk-1.qcow2,size=8G

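As an aside, one knob that might be worth experimenting with is virtio multiqueue, which Proxmox exposes through the queues option on the NIC, assuming your qemu-server version supports it (a sketch only, reusing net0 from the config above):
Code:
# hypothetical: enable 2 virtio queues on net0 of VM 108
qm set 108 -net0 virtio=96:F5:F2:95:7A:3D,bridge=vmbr0,queues=2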
and the Proxmox version is the latest and greatest:
Code:
root@proxmox:~# pveversion -v
proxmox-ve: 4.1-39 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-15 (running version: 4.1-15/8cd55b52)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-3.19.8-1-pve: 3.19.8-3
pve-kernel-4.2.8-1-pve: 4.2.8-39
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-33
qemu-server: 4.0-62
pve-firmware: 1.1-7
libpve-common-perl: 4.0-49
libpve-access-control: 4.0-11
libpve-storage-perl: 4.0-42
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-8
pve-container: 1.0-46
pve-firewall: 2.0-18
pve-ha-manager: 1.0-23
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve1
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve7~jessie

Let me know.
 
Update.

iperf tests from the guest to the host machine show the following:
Code:
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 5.79.104.162 port 5001 connected with ****** port 43966
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  3.70 GBytes  3.17 Gbits/sec

It seems that raw throughput is fine.

But file download is still slower than from the host system.
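A repeatable way to compare download speed from inside the FreeBSD guest is fetch, which reports the transfer rate (the URL below is just a placeholder):
Code:
# download to /dev/null and note the reported rate
fetch -o /dev/null http://example.com/testfile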
 
I had exactly the same experience with Proxmox and pfSense (whether this is a wider KVM/virtio/FreeBSD issue I cannot say, as I have not used it on other platforms). iperf would show decent performance, though still far lower than Linux guests, but in actual use download speeds were bad.

My 'solution' was just to switch to a Linux-based routing solution (and thus full virtio performance). I would go with VyOS if you want an appliance-style command line, or OpenWrt if you want a pfSense level of GUI/packages.

iperf3 to the host over Open vSwitch:
Code:
root@OpenWrt:~# iperf3 -c 10.0.0.88
Connecting to host 10.0.0.88, port 5201
[  4] local 10.0.0.1 port 57321 connected to 10.0.0.88 port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  22.0 GBytes  18.9 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  22.0 GBytes  18.9 Gbits/sec                  receiver

iperf3 to a server over gigabit LAN:
Code:
root@OpenWrt:~# iperf3 -c 10.0.0.66
Connecting to host 10.0.0.66, port 5201
[  4] local 10.0.0.1 port 48011 connected to 10.0.0.66 port 5201
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.10 GBytes   942 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.10 GBytes   942 Mbits/sec                  receiver
 
I'm having issues with VirtIO performance on FreeBSD guests as well. I migrated my pfSense gateway from ESXi with the vmxnet3 driver to Proxmox and VirtIO. On ESXi, iperf showed gigabit performance without issues, but with VirtIO I'm getting 500 Mbit/s tops if I'm lucky. And CPU usage goes through the roof as well.

I'm inclined to blame FreeBSD here, as the VirtIO network interfaces on my other Linux guests perform just fine.
 
Sorry to bump an old thread, but I'm curious whether anyone has noticed any progress on this. Is FreeBSD virtio still horribly slow on Proxmox/KVM?
 
Hi bleomycin,
I do not experience any problems anymore.
It works well with hardware offload switched off.
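For anyone who wants to double-check, the current offload state is visible in the interface's options line (assuming vtnet0):
Code:
# the options=... line should no longer list TXCSUM, RXCSUM, TSO4 or LRO
ifconfig vtnet0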

Awesome, can I ask which version of FreeBSD? And was hardware offload switched off only in the guest, or also in Proxmox?
 
Disabling hardware checksumming is recommended when using pf (it is a known issue, see https://pve.proxmox.com/wiki/PfSense_Guest_Notes).
Did you do that?

For the rest, YMMV. I have a virtualized FreeBSD 11 here and I get near-wire performance with the virtio driver (on the same subnet, not doing any kind of routing):
Code:
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  4] local 192.168.16.24 port 5001 connected with 192.168.16.5 port 59416
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-60.0 sec  6.55 GBytes   938 Mbits/sec


If you want to switch the TSO and LRO flags, have a look at the FreeBSD vtnet(4) man page.
 

In this thread for pfSense they also discuss turning off checksumming on the Proxmox host as well: https://forum.pfsense.org/index.php?topic=88467.0

Is that something you also recommend doing, or is doing it on the VM side sufficient? I haven't been able to find clear instructions on how to do that for Proxmox.
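From what I can piece together from that pfSense thread, the host-side change is done with ethtool on the tap devices; a sketch of what I think is meant, where the tap name is my assumption based on Proxmox's tapVMIDiN naming (e.g. net0 of VM 108):
Code:
# assumption: net0 of VM 108 appears on the host as tap108i0
ethtool -K tap108i0 tx off tso off gso off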
 
Hi all.
I confirm that disabling hardware offload is the correct solution in this case.
I have been running many pfSense instances with this setting for a while and everything is OK.
 
Did you disable Hardware Offloading on the Guest VM and Host, or just the Guest VM?
 
