Cannot configure vtnet (virtio) slot size in OPNSense network adapter

wenis
New Member
Jan 12, 2024
Hello.

I'm having a bit of trouble getting an OPNsense virtual machine to respect the TX and RX slot sizes for its network interfaces as set on the host. I have optimized my network configuration for OPNsense by following many of the recommendations on this forum and other Internet sources, but no matter what I do, vtnet in OPNsense comes up with 256 TX and 512 RX slots.

Bash:
root@opnsense:~ # dmesg | grep vtnet
vtnet0: <VirtIO Networking Adapter> on virtio_pci2
vtnet0: Ethernet address: XXX
vtnet0: netmap queues/slots: TX 4/256, RX 4/512
000.000765 [ 449] vtnet_netmap_attach       vtnet attached txq=4, txd=256 rxq=4, rxd=512
vtnet1: <VirtIO Networking Adapter> on virtio_pci3
vtnet1: Ethernet address: XXX
vtnet1: netmap queues/slots: TX 4/256, RX 4/512
000.000766 [ 449] vtnet_netmap_attach       vtnet attached txq=4, txd=256 rxq=4, rxd=512
vtnet0: link state changed to UP
vtnet1: link state changed to UP
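
If I'm reading vtnet(4) right, the driver takes its ring sizes from the virtio device the host exposes, and the hw.vtnet loader tunables only cover things like queue count and offloads, so there doesn't appear to be a guest-side knob for the slot counts. For reference, the tunables can be dumped from the OPNsense shell:

Bash:
# list the vtnet loader tunables exposed via sysctl; none of them set the
# ring/slot sizes -- those are negotiated from the virtio queues the host presents
sysctl hw.vtnet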

This has been driving me bonkers for a couple days.

Here are some configurations:

Bash:
root@prox:~# ethtool -g enp1s0
Ring parameters for enp1s0:
Pre-set maximums:
RX:             4096
RX Mini:        n/a
RX Jumbo:       n/a
TX:             4096
Current hardware settings:
RX:             1024
RX Mini:        n/a
RX Jumbo:       n/a
TX:             1024
RX Buf Len:             n/a
CQE Size:               n/a
TX Push:        off
TCP data split: n/a


Bash:
root@prox:~# ethtool -g enp2s0
Ring parameters for enp2s0:
Pre-set maximums:
RX:             4096
RX Mini:        n/a
RX Jumbo:       n/a
TX:             4096
Current hardware settings:
RX:             1024
RX Mini:        n/a
RX Jumbo:       n/a
TX:             1024
RX Buf Len:             n/a
CQE Size:               n/a
TX Push:        off
TCP data split: n/a
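
For completeness: those ethtool -g settings apply to the physical NICs on the host, while the VM attaches to the bridges through tap interfaces, so the guest's vtnet rings come from the virtio-net device rather than from enp1s0/enp2s0. With VM 105 running, its tap interface shows up on the bridge alongside the uplink:

Bash:
# list everything enslaved to vmbr0 -- the physical uplink plus the VM's tap
ip link show master vmbr0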

Bash:
root@prox:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp1s0 inet manual

iface enp2s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address IP_ADDR
        gateway IP_ADDR
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        pre-up ethtool -K enp1s0 combined 4
        pre-up ethtool -G enp1s0 rx 1024 tx 1024
        pre-up ethtool -K enp1s0 tx off gso off
        post-up ethtool -K vmbr0 tx off gso off
iface wlo1 inet manual

auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0
        pre-up ethtool -K enp2s0 combined 4
        pre-up ethtool -G enp2s0 rx 1024 tx 1024
        pre-up ethtool -K enp2s0 tx off gso off
        post-up ethtool -K vmbr1 tx off gso off

source /etc/network/interfaces.d/*
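
One side note on that interfaces file: if the "combined 4" lines are meant to set the queue/channel count, I believe that is ethtool -L (--set-channels) rather than -K (--features), which only toggles offloads and does not take a "combined" argument. A corrected sketch, assuming the NICs support combined channels:

Bash:
# set channel count with -L, ring sizes with -G; -K stays for offload toggles
pre-up ethtool -L enp1s0 combined 4
pre-up ethtool -G enp1s0 rx 1024 tx 1024
pre-up ethtool -K enp1s0 tx off gso off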

VM config:

Code:
root@prox:~# cat /etc/pve/qemu-server/105.conf
boot: order=scsi0
cores: 2
cpu: kvm64,flags=+aes
memory: 4096
meta: creation-qemu=8.1.2,ctime=1704830500
name: opnsense
net0: virtio=XXX,bridge=vmbr0,queues=4
net1: virtio=XXX,bridge=vmbr1,queues=4
numa: 1
onboot: 1
ostype: l26
scsi0: local-lvm:vm-105-disk-0,iothread=1,size=16G
scsihw: virtio-scsi-single
sockets: 2
startup: order=1

What on earth is Proxmox or OPNsense doing that keeps my network configuration from being respected? Any ideas on how I can configure the TX and RX slot sizes?

Code:
root@prox:~# qm showcmd 105
/usr/bin/kvm -id 105 -name 'opnsense,debug-threads=on' -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/105.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/105.pid -daemonize -smbios 'type=1,uuid=43dcc86a-a40b-4b6e-8976-309c971a75ad' -smp '4,sockets=2,cores=2,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/105.vnc,password=on' -cpu kvm64,+aes,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep -m 4096 -object 'memory-backend-ram,id=ram-node0,size=2048M' -numa 'node,nodeid=0,cpus=0-1,memdev=ram-node0' -object 'memory-backend-ram,id=ram-node1,size=2048M' -numa 'node,nodeid=1,cpus=2-3,memdev=ram-node1' -object 'iothread,id=iothread-virtioscsi0' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.3,chassis_nr=3,bus=pci.0,addr=0x5' -device 'vmgenid,guid=e556055d-6b9e-47c0-982d-4b1f9d85ab6a' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:82a732215da0' -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0' -drive 'file=/dev/pve/vm-105-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap105i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on,queues=4' -device 'virtio-net-pci,mac=XXX,netdev=net0,bus=pci.0,addr=0x12,id=net0,vectors=10,mq=on,packed=on,rx_queue_size=1024,tx_queue_size=256' -netdev 'type=tap,id=net1,ifname=tap105i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on,queues=4' -device 'virtio-net-pci,mac=XXX,netdev=net1,bus=pci.0,addr=0x13,id=net1,vectors=10,mq=on,packed=on,rx_queue_size=1024,tx_queue_size=256' -machine 'type=pc+pve0'
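
The queue sizes are buried in that one-liner; filtering the generated command makes them easier to spot (assuming your qm supports --pretty, which puts each argument on its own line):

Bash:
# show only the virtio-net queue settings Proxmox generates for VM 105
qm showcmd 105 --pretty | grep -E 'rx_queue_size|tx_queue_size|queues='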
 
I'm not sure I follow your question.

I have not tried installing OPNsense on bare metal, but I'm guessing the network interface kernel module would load properly with the appropriate queue and slot sizes.

This issue is still present, and it is affecting my ability to reach full transmit speeds on a 1 Gbps fiber link.
 
you seem to conflate network interfaces which are physical and network interfaces which are virtual

-device 'virtio-net-pci,mac=XXX,netdev=net0,bus=pci.0,addr=0x12,id=net0,vectors=10,mq=on,packed=on,rx_queue_size=1024,tx_queue_size=256'

setting this manually in 105.conf is probably mandatory
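
If that is the route, here is an untested sketch of what it could look like in 105.conf, reusing the values from the showcmd output above (the MAC and PCI address are placeholders) and passing the NIC through the raw "args" option:

Code:
# untested sketch: supply the NIC via "args" so tx_queue_size can be set
# explicitly; the managed net0: line would have to go, otherwise this adds a
# second NIC with a clashing id=net0
args: -netdev type=tap,id=net0,ifname=tap105i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on,queues=4 -device virtio-net-pci,mac=XX:XX:XX:XX:XX:XX,netdev=net0,bus=pci.0,addr=0x12,vectors=10,mq=on,packed=on,rx_queue_size=1024,tx_queue_size=1024
# caveats: as far as I know the pve-bridge script reads the netX entry from the
# VM config to pick the bridge, so a dropped net0 may need manual bridging, and
# QEMU has at times only honoured tx_queue_size above 256 for vhost-user
# backends, so with vhost=on the larger value may be ignored or rejected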
 