hello,
i'm using VMs with virtio-scsi-single, aio=threads and iothread=1 for better vm latency, because i had too much trouble with vm jitter and cpu freezes in high i/o load situations.
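for reference, the disk setup in the vm config looks roughly like this (storage and disk names are just placeholders for my real ones):

scsihw: virtio-scsi-single
scsi0: local-zfs:vm-100-disk-0,aio=threads,iothread=1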
when there is heavy write i/o inside the vm, i see kvm spawn 64 io threads for writing in the corresponding kvm vm process.
in a debian 10 vm, i can set

echo 8 > /sys/devices/pci0000:00/0000:00:05.0/0000:01:01.0/virtio3/host2/target2:0:0/2:0:0:0/queue_depth

(or echo 8 > /sys/block/sda/device/queue_depth, where sda is symlinked inside the path above)

which immediately decreases the number of corresponding kvm iothreads down to 8.
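to make that survive reboots in the debian guest, i think a small udev rule should do it (just a sketch, assuming the disk shows up as sd*; adjust the match to your setup), e.g. in /etc/udev/rules.d/99-queue-depth.rules:

ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ATTR{device/queue_depth}="8"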
in high i/o situations, 64 parallel writers for a single vm on a system with many other VMs doesn't make sense to me: when there are 64 threads waiting for i/o, the loadavg on the host skyrockets to >60, too. i think it's counterproductive to have that many writers hitting zfs for a single VM.
unfortunately, in centos7 queue_depth apparently is not tunable, as that sysfs entry is not writable there, and i haven't found a way to pass it as a parameter to the virtio_scsi module on load/boot.
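the only host-side idea i have so far (untested, so treat it as an assumption): qemu's virtio-scsi controller has a cmd_per_lun property (128 by default, iirc), which is what the guest derives its queue_depth from, so maybe something like

args: -global virtio-scsi-pci.cmd_per_lun=8

in the vm config would cap it from outside the guest - but i don't know whether that plays nicely with virtio-scsi-single or what side effects it has.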
any hints how the number of iothreads can be limited for a centos7 VM?
roland
ps:
apparently, limiting iothreads at the kvm/qemu level is not ready for primetime yet:
https://lists.gnu.org/archive/html/qemu-devel/2018-07/msg02933.html
https://patchwork.ozlabs.org/project/qemu-devel/patch/20220202175234.656711-1-nsaenzju@redhat.com/