How to change virtio-pci tx/rx queue size

Faisal Reza

May 23, 2019
Hello everyone,

I'm seeing RX drops on the guest VM's interface (visible in ifconfig). After some searching, it looks like the cause is the virtual interface's ring buffer, whose default size is 256.

I want to try raising the value to 1024, as suggested in the Intel networking documentation.
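For reference, the current ring sizes and drop counters can be checked from inside the guest; a minimal sketch, assuming the guest NIC is named eth0 (on older guest kernels, ethtool -g may not be supported for virtio-net):
Code:
# inside the guest: show the virtio ring sizes (RX defaults to 256)
ethtool -g eth0
# per-interface statistics, including RX dropped packets
ip -s link show eth0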

I also tried editing /etc/pve/qemu-server/<vmid>.conf manually with the following configuration, but that did not work either:
Code:
agent: 1
boot: cda
bootdisk: sata0
cores: 4
ide0: none,media=cdrom
memory: 2048
name: balancer
net0: virtio=00:16:3e:50:5c:69,bridge=vmbr0,tx_queue_size=1024,rx_queue_size=1024
net1: virtio=D2:26:9E:99:0D:8E,bridge=vmbr0,tx_queue_size=1024,rx_queue_size=1024,tag=2247
numa: 0
onboot: 1
sata0: datastore-4tb-sas:2247/vm-2247-disk-1.raw,cache=writeback,size=40G
smbios1: uuid=dc60b7a5-5001-4a68-9949-595346c5ea6f
sockets: 1
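For completeness, the CLI equivalent of that manual edit would be something like the following sketch, which presumably gets rejected by the same schema validation:
Code:
# attempt to set the queue sizes via the qm CLI instead of editing the file
qm set 2247 --net0 'virtio=00:16:3e:50:5c:69,bridge=vmbr0,tx_queue_size=1024,rx_queue_size=1024'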

Is there any way to change these tx/rx queue size parameters?
Many thanks

Regards
 
For testing, you can start qemu manually with modified parameters. You can get the exact qemu command line with:

# qm showcmd <vmid>
 
Hi dietmar, thanks for responding.
Here's the result:
Code:
root@jkt-host04:~# qm showcmd 2247
vm 2247 - unable to parse value of 'net0' - format error
rx_queue_size: property is not defined in schema and the schema does not allow additional properties
tx_queue_size: property is not defined in schema and the schema does not allow additional properties
/usr/bin/kvm -id 2247 -chardev 'socket,id=qmp,path=/var/run/qemu-server/2247.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/2247.pid -daemonize -smbios 'type=1,uuid=dc60b7a5-5001-4a68-9949-595346c5ea6f' -name balancer -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga std -vnc unix:/var/run/qemu-server/2247.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 2048 -k en-us -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -chardev 'socket,path=/var/run/qemu-server/2247.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:fd3d37ef938' -drive 'if=none,id=drive-ide0,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200' -device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' -drive 'file=/var/lib/vz2/images/2247/vm-2247-disk-1.raw,if=none,id=drive-sata0,cache=writeback,format=raw,aio=threads,detect-zeroes=on' -device 'ide-drive,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=100' -netdev 'type=tap,id=net1,ifname=tap2247i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=D2:26:9E:99:0D:8E,netdev=net1,bus=pci.0,addr=0x13,id=net1'

I also tried to start the VM manually. First I stopped it:
root@jkt-host04:~# qm stop 2247
and then started it again with rx_queue_size/tx_queue_size added to both virtio-net-pci devices:
Code:
root@jkt-host04:~# /usr/bin/kvm -id 2247 -chardev 'socket,id=qmp,path=/var/run/qemu-server/2247.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/2247.pid -daemonize -smbios 'type=1,uuid=dc60b7a5-5001-4a68-9949-595346c5ea6f' -name balancer -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga std -vnc unix:/var/run/qemu-server/2247.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 2048 -k en-us -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -chardev 'socket,path=/var/run/qemu-server/2247.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:fd3d37ef938' -drive 'if=none,id=drive-ide0,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200' -device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' -drive 'file=/var/lib/vz2/images/2247/vm-2247-disk-1.raw,if=none,id=drive-sata0,cache=writeback,format=raw,aio=threads,detect-zeroes=on' -device 'ide-drive,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap2247i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=00:16:3e:50:5c:69,rx_queue_size=1024,tx_queue_size=1024,netdev=net0,bus=pci.0,addr=0x12,id=net0' -netdev 'type=tap,id=net1,ifname=tap2247i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=D2:26:9E:99:0D:8E,rx_queue_size=1024,tx_queue_size=1024,netdev=net1,bus=pci.0,addr=0x13,id=net1'
kvm: -device virtio-net-pci,mac=00:16:3e:50:5c:69,rx_queue_size=1024,tx_queue_size=1024,netdev=net0,bus=pci.0,addr=0x12,id=net0: Property '.tx_queue_size' not found

The result is:
Property '.tx_queue_size' not found

Any clue on this?
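For reference, one way to check whether the QEMU build on the host knows these properties at all (a sketch; rx_queue_size and tx_queue_size only exist in newer QEMU releases, so on an older build the grep comes back empty):
Code:
# show the QEMU version shipped with this host
kvm --version
# list the properties this build's virtio-net-pci device actually supports
kvm -device virtio-net-pci,help 2>&1 | grep queue_size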
 
tx_queue_size only works with vhost-user (i.e. for dpdk+ovs setups, and that is not implemented here).

Also, I think multiqueue needs to be enabled to get rx_queue_size working.


@dietmar

Interesting: Red Hat's RHEV bumped the default values to the max (1024) some years ago:

https://bugzilla.redhat.com/show_bug.cgi?id=1366989

Maybe we could set that up for Proxmox too? (when multiqueue is enabled)
 
I think QEMU already has it, as I can see
Invalid tx_queue_size (= %u), must be a power of 2 between %d and %d
in the qemu-system-x86_64 binary, but how do I pass it in the config? :-(
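In case it helps anyone, here is an untested sketch of a possible workaround using the args: option in the VM config. It hand-builds net0, so the original net0: line has to be removed first; it only works if the installed QEMU actually accepts rx_queue_size, and the GUI will no longer manage this NIC:
Code:
# /etc/pve/qemu-server/2247.conf — remove the net0: line, then add:
args: -netdev 'type=tap,id=net0,ifname=tap2247i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=00:16:3e:50:5c:69,rx_queue_size=1024,netdev=net0,bus=pci.0,addr=0x12,id=net0'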
 
Just reviving this post to ask whether there is any roadmap for implementing rx_queue_size and tx_queue_size for KVM guests.
256 is just too low for a high-traffic guest, and it eventually leads to huge packet drops.
 
Any updates on this? I am getting a lot of TCP retransmits and lost UDP datagrams with iperf3 in VMs, especially on 10G but also on 1G. Different servers, different versions of Proxmox, different network cards, different guest operating systems (Windows somehow works better). On the hosts the loss is zero. That causes weird issues, especially for QoS-sensitive apps. This is a serious matter; why is it being ignored like this?
 
Did you try multiqueue with virtio network adapter?
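If you want to try that, a minimal sketch, assuming net0 of VM 2247 and a 4-vCPU guest whose NIC shows up as eth0:
Code:
# on the host: enable 4 virtio queue pairs for net0 (keep your other net0 options)
qm set 2247 --net0 'virtio=00:16:3e:50:5c:69,bridge=vmbr0,queues=4'
# inside the guest: activate the extra queues
ethtool -L eth0 combined 4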
 
