Force VirtIO PCIe in pfSense VM?

wickeren

I'm trying to get pfSense to recognize the VirtIO (network) adapter as PCIe instead of plain PCI.
It should then also be seen as <VirtIO PCI (modern) Network adapter> instead of <VirtIO PCI (legacy) Network adapter> in the system log. It SHOULD be possible to do this when the machine type is set to Q35, as that platform supports PCIe. But somehow I can't get this to work, despite trying all sorts of setting combinations. I have seen others get this working on plain libvirt/KVM, where the logs showed <VirtIO PCI (modern) Network adapter>, so it doesn't look like a pfSense issue.

PCIe (modern) should give better performance, and that's what I'm after. Passthrough is not an option, as it's a cluster and HA/live migration is needed.

Any ideas?
 
Has anyone ever seen anything other than <VirtIO PCI (legacy) Network adapter> in a pfSense VM under Proxmox?
 
Proxmox probably connects virtual network devices to the virtual PCI bus (instead of the virtual PCIe root, which is not always available). Maybe check the QEMU/KVM command that Proxmox actually runs (via qm showcmd) and manually add a VirtIO network device to the PCIe bus instead (via the args: setting in the VM configuration file)? I would not expect a difference in performance, as both are virtual buses and the network device is software anyway.
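For what it's worth, qm showcmd can also print the generated command one option per line, which makes it easier to see which bus each device ends up on; the grep pattern below is only an illustration:

Code:
# print the full QEMU command for the VM, one option per line (replace <vmid> with the VM ID)
qm showcmd <vmid> --pretty
# only show the network devices and the buses they are attached to
qm showcmd <vmid> --pretty | grep -E 'virtio-net|bus='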
 
Proxmox probably connects virtual network devices to the virtual PCI bus (instead of the virtual PCIe root, which is not always available). Maybe check the QEMU/KVM command that Proxmox actually runs (via qm showcmd) and manually add a VirtIO network device to the PCIe bus instead (via the args: setting in the VM configuration file)? I would not expect a difference in performance, as both are virtual buses and the network device is software anyway.
From my understanding, the "modern" VirtIO driver in pfSense should support multiqueue, giving performance benefits over the "legacy" implementation.
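From what I can tell, multiqueue on the Proxmox side is requested per NIC with the queues= option (a sketch below, with an example MAC and an assumed bridge name); whether pfSense then actually uses the extra queues depends on the guest driver:

Code:
# /etc/pve/qemu-server/<vmid>.conf -- example NIC line requesting 4 queues
net0: virtio=BC:24:11:0C:F0:BB,bridge=vmbr0,queues=4

In the generated QEMU command this should show up as mq=on (plus a vectors= count) on the virtio-net-pci device.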
 
Proxmox probably connects virtual network devices to the virtual PCI bus (instead of the virtual PCIe root, which is not always available). Maybe check the QEMU/KVM command that Proxmox actually runs (via qm showcmd) and manually add a VirtIO network device to the PCIe bus instead (via the args: setting in the VM configuration file)? I would not expect a difference in performance, as both are virtual buses and the network device is software anyway.
You were right, the network adapters are connected to a PCI bus; only the SPICE display device (qxl-vga) seems to be on the PCIe bus:

Code:
/usr/bin/kvm -id 901 -name 'Pfsense-test,debug-threads=on' -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/901.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect-ms=5000' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/901.pid -daemonize -smbios 'type=1,uuid=af50c38b-eb89-41db-9238-bf46d0dd7ccd' -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd' -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/dev/pve/vm-901-disk-1,size=540672' -smp '4,sockets=2,cores=2,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/901.vnc,password=on' -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt -m 8192 -object 'iothread,id=iothread-virtio0' -global 'ICH9-LPC.disable_s3=1' -global 'ICH9-LPC.disable_s4=1' -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg -device 'vmgenid,guid=2f3e0aff-5445-4a14-9bda-d37324a2ac1d' -device 'qxl-vga,id=vga,max_outputs=4,bus=pcie.0,addr=0x1' -object 'rng-random,filename=/dev/urandom,id=rng0' -device 'virtio-rng-pci,rng=rng0,max-bytes=1024,period=1000,bus=pci.1,addr=0x1d' -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev 'spicevmc,id=vdagent,name=vdagent' -device 'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -spice 'tls-port=61000,addr=127.0.0.1,tls-ciphers=HIGH,seamless-migration=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:2cd38966b82e' -drive 'file=/dev/pve/vm-901-disk-0,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=io_uring,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0' -netdev 'type=tap,id=net0,ifname=tap901i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=BC:24:11:0C:F0:BB,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256' -device virtio-iommu-pci -machine 'type=q35+pve1'

I'm not sure how to write the correct args: line for the VM config file, though. I tried a few things, and the VM either failed to start at all or no NIC was detected inside the guest.
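One thing that might be worth a try (untested, and just a sketch built from generic QEMU options rather than anything Proxmox documents for this): the legacy/modern split is a property of the virtio transport rather than of PCI vs. PCIe as such, so forcing disable-legacy=on on all virtio-net-pci devices via a -global option might already make pfSense attach the modern driver, even with the NIC still sitting on pci.0. Best tested on a throwaway VM first, since a modern-only NIC needs guest (and boot ROM) support for virtio 1.0.

Code:
# /etc/pve/qemu-server/901.conf -- untested sketch
# force every virtio-net-pci device to expose only the modern (virtio 1.0) interface
args: -global virtio-net-pci.disable-legacy=on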
 
Have you considered passing the NIC through to the VM? That seems to work better for me with OpenWrt.
Your mileage may vary.
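For completeness, NIC passthrough would be a hostpci entry in the VM config along these lines (the PCI address is a placeholder, and IOMMU has to be enabled on the host):

Code:
# /etc/pve/qemu-server/<vmid>.conf -- placeholder address, requires IOMMU/VT-d enabled on the host
hostpci0: 0000:03:00.0,pcie=1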
 
Have you considered passing the NIC through to the VM? That seems to work better for me with OpenWrt.
Your mileage may vary.
As mentioned in the first post, this is not an option: it's a cluster and HA/live migration is needed, and that can't be done when passthrough is used.
 
You were right, the network adapters are connected to a PCI bus; only the SPICE display device (qxl-vga) seems to be on the PCIe bus: [...]

I'm not sure how to write the correct args: line for the VM config file, though. I tried a few things, and the VM either failed to start at all or no NIC was detected inside the guest.
It would be more convenient if this were controllable in the GUI, under the advanced options. For some reason only the display adapter (not just SPICE's qxl-vga, but also the default VGA) is attached to PCIe, while all the other devices sit on the plain PCI bus. Or it could even happen automatically whenever Q35 is used...
I can't get it to work manually with args: in the config, but that's probably on me.
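In case it helps while experimenting: inside pfSense (FreeBSD) you can check from a shell which virtio transport was actually attached, for example:

Code:
# shows "VirtIO PCI (legacy) ..." or "VirtIO PCI (modern) ..." for each virtio device
dmesg | grep -i "virtio pci"
# list the vtnet devices with their PCI vendor/device IDs
pciconf -lv | grep -A3 vtnet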
 
I'm wondering about this too, but for an OPNsense VM and a Debian VM.

-device 'virtio-net-pci,mac=,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=102' -machine 'type=q35+pve1'
 
I'm wondering about this too, but for an OPNsense VM and a Debian VM.

-device 'virtio-net-pci,mac=,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=102' -machine 'type=q35+pve1'
Yeah, I'm really puzzled why they use PCI devices instead of PCIe, even with the Q35 machine type. It might not make much of a difference for Proxmox itself performance-wise, but it should have an impact on performance in pfSense/OPNsense, since the "modern" virtio driver should support multiqueue in the BSD VM, which the legacy version lacks.
Not sure how Linux behaves here; the Linux virtio driver has always been way faster (with lower CPU usage, too) compared to the BSD virtio drivers.
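On the Debian/Linux side, which flavor of virtio-net device QEMU exposed can be read off the PCI device ID: 1af4:1000 is the transitional (legacy-capable) virtio-net device, 1af4:1041 the modern-only (virtio 1.0) one. Assuming the pciutils package is installed:

Code:
# 1af4:1000 = transitional virtio-net, 1af4:1041 = modern-only (virtio 1.0)
lspci -nn | grep -i virtio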