How does multiqueue really work?

Hello,
I would like to enable multiqueue support.
I read that:
- the number of queues must be the same as the number of CPUs: false, in Proxmox I can set at most 8 multiqueues
- in a Linux VM I must run the command ethtool -L ens1 combined X... but unfortunately I am using a Windows VM

Can someone explain to me how it really works?
Thanks,
Mario
 
Multiqueue is for splitting RX (incoming) packets across different queues/CPUs in the VM.
I'm not sure about the 8-queue limit, maybe it's a Proxmox GUI limit (can you try to change the value in the VM config file directly?).
But I think it could be increased.

It's only useful if you receive a lot of small packets (like a SYN-flood DDoS, for example). 1 queue is able to handle around 250-500 kpps. (Then you should see 1 core saturated.)

In recent Linux kernels (> 4.15, if I remember correctly), you don't need to use ethtool anymore; the driver is able to set up the queues correctly from the value defined on the Proxmox side.
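
For example, to check how many queues the guest driver actually picked up, something like this should work inside a Linux guest (ens1 is the interface name from the first post; the "4" is just an example value):

Code:
    # show the current channel (queue) configuration of the virtio NIC
    ethtool -l ens1

    # on older kernels you may still need to raise the queue count by hand
    ethtool -L ens1 combined 4

    # each queue also shows up as its own virtio interrupt in the guest
    grep virtio /proc/interrupts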

For Windows, I don't remember it needing special tuning. (If it does need tuning, it should be in the NIC options.)
 
Thanks for the reply. I have checked the NIC options and there is nothing specific to multiqueue.
 
I would like to underline that the official Proxmox documentation says that you must set multiqueue = number of threads, BUT the Proxmox GUI behaviour is then different from the official documentation.
 
Anyway, I have tried enabling multiqueue=8 in a test Windows VM and a FreeNAS VM and it broke networking completely: the network going up and down, packet loss and so on.
 
I also tried to configure multiqueue, but there is a maximum limit of 8. How do I configure more?
I think it's a limitation of the GUI only.

You can try to edit the VM config file in /etc/pve/qemu-server/<vmid>.conf, and increase the queues=.. value on the netX: interface.

You should be able to use at most the total number of cores of your VM.
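
Just as a sketch (the MAC address and the vmbr0 bridge here are only placeholders, adjust them to your own setup), the netX line in the config file would look something like this:

Code:
    # /etc/pve/qemu-server/<vmid>.conf
    net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=8

The same change should also be possible from the CLI with something like qm set <vmid> --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=8.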
 
Thank you, now it can go up to my total number of cores.
 
I'm trying to enable multiqueue, but when I change the VM config file to a number higher than 8, as suggested here, the NICs disappear from the GUI config and the VM boots without them. Any suggestions?
 
Your multiqueue value shouldn't be bigger than the number of vCPUs the VM has. Does the VM have at least 8 vCPUs?
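
To cross-check both values quickly, something like this should work (the grep pattern is just an assumption about how the config keys are named; replace <vmid> with the real VM ID):

Code:
    # compare the configured cores with the queues on the net device
    qm config <vmid> | grep -E '^(cores|net)'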
 
The max queues value is hardcoded to 16 in /usr/share/perl5/PVE/QemuServer.pm:

Code:
    queues => {
        type => 'integer',
        minimum => 0, maximum => 16,
        description => 'Number of packet queues to be used on the device.',
        optional => 1,
    },

and the GUI is limited to 8.

Technically, it should work with more queues (max queues = number of cores of the VM).

Please open a bug report on bugzilla.proxmox.com.
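
If you want to verify what actually gets passed to QEMU after raising the value, something along these lines should show it (just a sketch; the exact arguments depend on the Proxmox version):

Code:
    # print the full QEMU command line Proxmox would use and filter for queue settings
    qm showcmd <vmid> | tr ',' '\n' | grep -i queue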
 
Does this mean I could set up 16 queues for the following VM?
[screenshot of the VM's hardware configuration]
 
