PCI Passthrough Intel 82575GB vs. VirtIO (paravirtualized) for pfSense Install

brautech

New Member
Feb 16, 2021
I am installing pfSense on the latest version of Proxmox. I have a quad-port Intel NIC with the subject chipset. I was following the configuration guide on Netgate's website and it outlined choosing VirtIO for the network card. My question is: would I get better throughput and performance if I use PCI passthrough instead? I have a Gigabit Internet connection and I want to ensure I get the best performance possible. Would there be any real downside to doing this?

Thanks for any feedback.
Randy
 
If you aren't using PCI passthrough you can't use hardware offloading, so your virtual CPU has to handle all the computation, and that can slow down the connection. But for just Gbit I think it shouldn't be a big problem to use VirtIO.
I use VirtIO with a 10G NIC and OPNsense can only route about 0.9 to 1.2 Gbit.
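
For reference, the two setups look roughly like this in the VM config file (/etc/pve/qemu-server/<vmid>.conf); the MAC address, bridge, and PCI address here are just placeholders:

    # paravirtualized NIC: traffic goes through the host bridge, offloading is done in software
    net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0

    # PCI passthrough: the guest drives the physical NIC directly (requires IOMMU enabled on the host)
    hostpci0: 0000:03:00.0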
 
Yeah, I think I'll try passthrough then. Sounds like I might get the expected throughput either way, but why risk it if I can avoid it, I guess.

Thanks,
Randy
 
Well, I got the VM up and running and things run pretty well. However, I am not getting the throughput I should be. I have a 1 Gbit up/down fiber connection to the house. When I test my connection through my old Ubiquiti Security Gateway I get about 920 Mbit both up and down (give or take a few Mbit). When I test through the pfSense virtual box I get about 900 Mbit down but consistently get about 200-300 Mbit less on the upload (600-700 Mbit).

I have toggled the "Disable Hardware Checksum Offloading" setting off and back on; not much of a difference either way.

Since I'm getting decent speeds down, I have to think not using PCI passthrough isn't the issue. Any ideas what could be going on?
Thanks,
Randy
 
I don't know if it helps, but I had a strange speed issue too.

If my NIC (passed through via SR-IOV) is connected to the switch at 1 Gbit/s, I consistently get 740 Mbit/s in iperf.

Now it's connected to the switch at 10 Gbit/s and I finally get a constant 975 Mbit/s with iperf. It barely fluctuates; from the moment I start iperf it stays between 974 and 975 Mbit/s. That's super constant for me. xD

I don't know why this is the case; it doesn't make any sense.
Everything else in the house is connected at 1 Gbit/s, so 10 Gbit/s shouldn't matter. But my switch had two free 10 Gbit/s ports and the server has an X550 anyway.
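
For reference, by iperf I mean the usual client/server pair run through the firewall. I'm on iperf3 here; the addresses are just examples:

    # on a host on one side of the firewall
    iperf3 -s

    # on a host on the other side; -R reverses direction so you can test both ways
    iperf3 -c 192.168.1.50
    iperf3 -c 192.168.1.50 -R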

Another thing: if you activate IPS/IDS, Sensei, etc. in pfSense, none of that works with offloading.
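You can check from the pfSense shell which offload flags the interface actually has; roughly like this (vtnet0 as an example interface name, and note that changes made here don't persist across a reboot the way the GUI setting does):

    # show the current offload flags (TXCSUM, RXCSUM, TSO4, LRO, ...)
    ifconfig vtnet0

    # disable checksum/TSO/LRO offload for testing; re-enable with txcsum, tso, etc.
    ifconfig vtnet0 -txcsum -rxcsum -tso -lro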
And if you pass the Ethernet card through, keep in mind that PCIe passthrough only works with the q35 machine type; with i440fx the card can only be passed through as plain PCI. So use q35 and not i440fx as the machine type.
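Concretely, that's something like this on the Proxmox host (VM ID 100 and the PCI address are placeholders):

    # switch the VM to the q35 machine type, which provides a PCIe bus
    qm set 100 --machine q35

    # pass the NIC through as a PCIe device (drop pcie=1 if you stay on i440fx)
    qm set 100 --hostpci0 0000:03:00.0,pcie=1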

Cheers
 
Haven't tested pfSense for this, which is what I'm using as my edge firewall right now, but I was on OPNsense for several years until I noticed someone started a thread about poor 10GbE throughput. Then I started benchmarking builds and documented a clear performance regression from build to build. It got seriously worse with every new release.

I managed to improve performance by side-loading a FreeBSD kernel modified with Calomel's Netflix RACK patch and a handful of their sysctl recommendations, but it's not something I was really eager to keep doing, as upgrades would either regress the fix or kernel panic.
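
For anyone curious, the general mechanism for switching stock FreeBSD onto the RACK TCP stack looks like this. This is just the upstream mechanism, not necessarily the exact Calomel recipe, and a firewall's stock kernel may not ship the module at all (presumably why the side-loaded kernel was needed):

    # load the RACK TCP stack module (or set tcp_rack_load="YES" in /boot/loader.conf)
    kldload tcp_rack

    # list the available TCP stacks, then make RACK the default
    sysctl net.inet.tcp.functions_available
    sysctl net.inet.tcp.functions_default=rack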

If you'd like to see the documentation, it's here - this is a great thread: https://forum.opnsense.org/index.php?topic=18754.75

I can't take credit for uncovering the regression, but I did take part in some of the (very unscientific) iperf3 tests that were submitted to the thread. It's extremely obvious when you look at the results, which is sad because I really like the OPNsense project, but their work product is clearly inferior with regard to throughput. And who wants to use a really slow routing platform? I guess if all you ever run through it is 1 Gbps, it would be fine (maybe chain it directly to TNSR if you're doing anything over 1 Gbps, or something...)
 
