limit kvm iothreads or virtio-scsi vdevice/driver queue_depth in centos7 vm

RolandK

Mar 5, 2019
hello,

i'm using VMs with virtio-scsi-single, aio=threads and iothread=1 for better VM latency, as i had too much trouble with VM jitter and CPU freezes in high I/O load situations.
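
for reference, the disk setup looks roughly like this in the VM config (the storage name "local-zfs" and vmid 100 are just placeholders for illustration):

Code:
# /etc/pve/qemu-server/100.conf (excerpt, names are placeholders)
scsihw: virtio-scsi-single
scsi0: local-zfs:vm-100-disk-0,iothread=1,aio=threads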

when there is high write I/O in the VM, i see kvm spawn 64 I/O threads for writing in the corresponding kvm VM process.

in a debian 10 VM, i can set

Code:
echo 8 > /sys/devices/pci0000:00/0000:00:05.0/0000:01:01.0/virtio3/host2/target2:0:0/2:0:0:0/queue_depth

(or echo 8 > /sys/block/sda/device/queue_depth, where sda is symlinked inside the above path)

which immediately brings the corresponding kvm iothreads down to 8.
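
btw, on guests where that sysfs knob is writable, the setting can be persisted across reboots with a udev rule; this is just a sketch i haven't battle-tested, adjust the match and value to your setup:

Code:
# /etc/udev/rules.d/99-queue-depth.rules (sketch, untested)
# match scsi disks (type 0) and write 8 into their queue_depth attribute
ACTION=="add|change", SUBSYSTEM=="scsi", ATTR{type}=="0", ATTR{queue_depth}="8"

apply with "udevadm control --reload && udevadm trigger", or reboot.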

in high I/O situations, 64 parallel writers from a single VM on a system with many other VMs don't make sense to me: when 64 threads are waiting for I/O, loadavg on the host skyrockets to >60, too. i think it's counterproductive to have that many writers hitting zfs for a single VM.

unfortunately, in centos7 queue_depth apparently is not tunable, as you cannot write to that sysfs entry, and i can't find a way to pass it as a param to the module on load/boot.

any hints on how the number of iothreads can be limited for a centos7 VM?

roland

ps:
apparently, limiting iothreads at the kvm/qemu level is not ready for primetime yet:
https://lists.gnu.org/archive/html/qemu-devel/2018-07/msg02933.html
https://patchwork.ozlabs.org/project/qemu-devel/patch/20220202175234.656711-1-nsaenzju@redhat.com/
 
ok, to answer this to myself:

Code:
# kvm -device virtio-scsi-pci,help | grep cmd
cmd_per_lun=<uint32> - (default: 128)

ok, so there is a param for virtio-scsi-pci to limit the per-LUN queue depth: cmd_per_lun

unfortunately, it seems impossible to pass params to that virtual device, as there is a translation layer in proxmox for it.

in $vmid.conf,

Code:
scsihw: virtio-scsi-single

is accepted, but this line won't be:

Code:
scsihw: virtio-scsi-single,cmd_per_lun=8

so,

in QemuServer.pm, i tried changing

Code:
push @$devices, '-device', "$scsihw_type,id=$controller_prefix$controller$pciaddr$iothread$queues"

into

Code:
push @$devices, '-device', "$scsihw_type,cmd_per_lun=8,id=$controller_prefix$controller$pciaddr$iothread$queues"

and did
# systemctl restart pvedaemon

it works.

i've had other occurrences where i wanted to modify standard kvm options of a VM and proxmox wouldn't let me, because of its own configuration layer from which the kvm options/command line are derived.

any chance for more general flexibility in proxmox to modify custom kvm options, without hacking proxmox code (where the change will be overwritten on the next update)?

i'm aware that there is qm set 100 -args ..., but it seems that's only usable for adding additional options, not for modifying/extending existing ones which are generated from $vmid.conf. correct?
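
one thing that might be worth testing here (i haven't verified it end to end): qemu's -global option sets a default property value for every instance of a driver, so it should also affect the -device lines proxmox generates, as long as proxmox doesn't set that property explicitly itself. it could be passed via the args escape hatch in $vmid.conf:

Code:
# $vmid.conf (sketch, untested)
args: -global virtio-scsi-pci.cmd_per_lun=8

"qm showcmd $vmid" should show whether the flag actually ends up on the generated commandline.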

for example, virtio-scsi-pci has all these options; how can they be tweaked?

we would need some (not yet existing) "qm set $vmid -scsihwoptions ...", wouldn't we?

Code:
# kvm -device virtio-scsi-pci,help

virtio-scsi-pci options:

  addr=<int32>           - Slot and optional function number, example: 06.0 or 06 (default: -1)
  any_layout=<bool>      - on/off (default: true)
  ats=<bool>             - on/off (default: false)
  cmd_per_lun=<uint32>   -  (default: 128)
  disable-legacy=<OnOffAuto> - on/off/auto (default: "auto")
  disable-modern=<bool>  -  (default: false)
  event_idx=<bool>       - on/off (default: true)
  failover_pair_id=<str>
  hotplug=<bool>         - on/off (default: true)
  indirect_desc=<bool>   - on/off (default: true)
  ioeventfd=<bool>       - on/off (default: true)
  iommu_platform=<bool>  - on/off (default: false)
  iothread=<link<iothread>>
  max_sectors=<uint32>   -  (default: 65535)
  migrate-extra=<bool>   - on/off (default: true)
  modern-pio-notify=<bool> - on/off (default: false)
  multifunction=<bool>   - on/off (default: false)
  notify_on_empty=<bool> - on/off (default: true)
  num_queues=<uint32>    -  (default: 4294967295)
  packed=<bool>          - on/off (default: false)
  page-per-vq=<bool>     - on/off (default: false)
  param_change=<bool>    - on/off (default: true)
  rombar=<uint32>        -  (default: 1)
  romfile=<str>
  seg_max_adjust=<bool>  -  (default: true)
  use-disabled-flag=<bool> -  (default: true)
  use-started=<bool>     -  (default: true)
  vectors=<uint32>       -  (default: 4294967295)
  virtio-backend=<child<virtio-scsi-device>>
  virtio-pci-bus-master-bug-migration=<bool> - on/off (default: false)
  virtqueue_size=<uint32> -  (default: 256)
  x-disable-legacy-check=<bool> -  (default: false)
  x-disable-pcie=<bool>  - on/off (default: false)
  x-ignore-backend-features=<bool> -  (default: false)
  x-pcie-deverr-init=<bool> - on/off (default: true)
  x-pcie-extcap-init=<bool> - on/off (default: true)
  x-pcie-flr-init=<bool> - on/off (default: true)
  x-pcie-lnkctl-init=<bool> - on/off (default: true)
  x-pcie-lnksta-dllla=<bool> - on/off (default: true)
  x-pcie-pm-init=<bool>  - on/off (default: true)
 
