Disk throttling issue

redtex

Renowned Member
Sep 13, 2012
Hi!

Recently I've discovered that disk throttling only works with the "VirtIO" disk type...
And even with VirtIO, only the hard limit works; setting any value for the burst option does nothing, there is no burst at all.
Is this by design?
I'm using the ZFS over iSCSI storage type.
Code:
# pveversion --verbose
proxmox-ve-2.6.32: 3.4-160 (running kernel: 2.6.32-40-pve)
pve-manager: 3.4-9 (running version: 3.4-9/4b51d87a)
pve-kernel-2.6.32-40-pve: 2.6.32-160
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-18
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-11
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
 
AFAIK, QEMU implements the leaky bucket algorithm for disk throttling, see https://en.wikipedia.org/wiki/Leaky_bucket
Also, QEMU I/O throttling should work with all types of block devices ("-drive"s), not only with VirtIO.
Although you can probably see more of a difference with VirtIO, as it's faster.

Think of a bucket where data from the guest gets added at the top, and the output is a hole at the bottom which lets the data through at a constant rate to the image file/vdisk/your storage.

Burst means how much the disk can output in a short period of time (until the bucket is "full"), so burst would be the depth of the bucket.
The limits would represent the size of the hole in the bucket.
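The bucket analogy above can be sketched as a toy simulation (this is only an illustration of the concept; the function and parameter names are made up here, not QEMU's actual internals):

```python
def simulate_leaky_bucket(requests, rate, burst):
    """Toy leaky-bucket throttle.

    requests: IOPS the guest wants to issue in each one-second tick
    rate:     sustained IOPS limit (the size of the hole in the bucket)
    burst:    extra short-term capacity (the depth of the bucket)
    """
    level = 0.0            # current fill level of the bucket
    served = []
    for wanted in requests:
        level = max(0.0, level - rate)   # the hole drains `rate` per tick
        room = burst + rate - level      # what fits without overflowing
        done = min(wanted, room)         # the rest would have to wait
        level += done
        served.append(done)
    return served

# With a hard limit of 50 IOPS and a burst of 100, the first tick can
# briefly exceed the limit, but sustained load settles at 50:
print(simulate_leaky_bucket([200, 200, 200, 200], rate=50, burst=100))
# → [150.0, 50.0, 50.0, 50.0]
```

So the burst is only visible until the bucket fills up; after that, everything is clamped to the hard limit.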

Hope that helps, somewhat :)

could also be of interest: http://www.nodalink.com/blog_throttling_25_01_2014.html
 
No, it's not... I think it's the nature of scsi-generic.

Proof:
On hypervisor:
# /usr/bin/kvm -id 500 \
........... output skipped ..............
-drive file=iscsi://192.168.107.23/iqn.2005-05.com.sdsys1:pool2t/4,if=none,id=drive-scsi3,iops_rd=50,iops_wr=50,aio=native,cache=none,detect-zeroes=on \
-device scsi-generic,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3 \
........... output skipped ..............

In guest:
# fio ./fio_rw.ini
readtest: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
writetest: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
fio-2.2.8
........... output skipped ..............
readtest: (groupid=0, jobs=1): err= 0: pid=23917: Fri Sep 11 16:50:29 2015
read : io=513384KB, bw=37941KB/s, iops=9485, runt= 13531msec
........... output skipped ..............
writetest: (groupid=0, jobs=1): err= 0: pid=23918: Fri Sep 11 16:50:29 2015
write: io=510248KB, bw=37710KB/s, iops=9427, runt= 13531msec
........... output skipped ..............

But when I change scsi-generic to scsi-hd, it behaves as expected:
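For reference, the only change against the command line above is the device type; the changed line would look like this (assuming the same bus/LUN layout as before):

```shell
-device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3 \
```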

# fio ./fio_rw.ini
readtest: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
writetest: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
fio-2.2.8
........... output skipped ..............
readtest: (groupid=0, jobs=1): err= 0: pid=3765: Fri Sep 11 17:08:49 2015
read : io=2976.0KB, bw=196354B/s, iops=47, runt= 15520msec
........... output skipped ..............
writetest: (groupid=0, jobs=1): err= 0: pid=3766: Fri Sep 11 17:08:49 2015
write: io=3128.0KB, bw=206370B/s, iops=50, runt= 15521msec
........... output skipped ..............
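The job file itself isn't shown above; a hypothetical fio_rw.ini matching the job parameters visible in the output (randread/randwrite, bs=4k, libaio, iodepth=32) could look like this — the target device, size, and direct flag are assumptions:

```ini
; hypothetical reconstruction, not the original file
[readtest]
rw=randread
bs=4k
ioengine=libaio
iodepth=32
direct=1
filename=/dev/sdb
size=512M

[writetest]
rw=randwrite
bs=4k
ioengine=libaio
iodepth=32
direct=1
filename=/dev/sdb
size=512M
```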
 
So, what should I do to achieve disk throttling without running "kvm ..." by hand? Use VirtIO? Or maybe something else?
 
OK, your tests look pretty conclusive. I'm testing as well; I need some more time atm, sorry :)

Is there anything speaking against VirtIO? It's paravirtualized and has many benefits (speed, throughput, ...).

Are you able to achieve the throttling by hand? What parameters do you use?
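For testing by hand: besides the hard limits, QEMU's -drive syntax also accepts burst options (the *_max throttle options, available since QEMU 1.7, so they should be present in pve-qemu-kvm 2.2). A sketch of a throttled drive line based on the one above, with illustrative burst values:

```shell
# iops_rd / iops_wr         – sustained hard limits (the "hole")
# iops_rd_max / iops_wr_max – burst allowance (the bucket "depth")
-drive file=iscsi://192.168.107.23/iqn.2005-05.com.sdsys1:pool2t/4,if=none,id=drive-scsi3,iops_rd=50,iops_wr=50,iops_rd_max=200,iops_wr_max=200,aio=native,cache=none,detect-zeroes=on \
```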