I/O and IO delay on SATA affects VM responsiveness even if VMs are on different SAS storage

mmenaz

Active Member
Jun 25, 2009
828
20
38
Northern east Italy
Hi, so far I have often installed Proxmox on a SATA drive and left the SAS RAID 5 or 10 array (FSYNCS/SECOND: 2497) as additional LVM storage for the KVM VMs.
I also have some OpenVZ VMs on local storage, i.e. on the SATA drive Proxmox is installed on.
I've noticed that if I move data on the SATA drive, e.g. copying a backup file or ISO images around, IO delay increases (16-24%) and my KVM VMs, which are on SAS, get VERY thrashed.
Was I wrong to assume that separate storage means isolated I/O, or is there something I need to fine-tune to avoid this bad behaviour? One OpenVZ VM hosts a Samba share, and its modest SATA performance would be good enough on its own, but moving one big file around brings all the other VMs to their knees, and this is not good.
I'm going to migrate everything to a 2.1 installation, but I don't think this is a problem specific to the 1.9 branch (though I would love it if it were :))
proxmox:/mnt/backup/aaa_copie_locali_temp# pveversion -v
pve-manager: 1.9-26 (pve-manager/1.9/6567)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 1.9-55+ovzfix-1
pve-kernel-2.6.32-4-pve: 2.6.32-33
pve-kernel-2.6.32-6-pve: 2.6.32-55+ovzfix-1
qemu-server: 1.1-32
pve-firmware: 1.0-14
libpve-storage-perl: 1.0-19
vncterm: 0.9-2
vzctl: 3.0.29-3pve1
vzdump: 1.2-16
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.15.0-2
ksm-control-daemon: 1.0-6

Thanks in advance
 

gkovacs

Well-Known Member
Dec 22, 2008
509
48
48
Budapest, Hungary
Re: I/O and IO delay on SATA affects VM responsiveness even if VMs are on different SAS storage

Under Proxmox, most IO delay is caused by the lame CFQ disk IO scheduler, which is not suitable for server environments. You can significantly decrease IO delay and increase responsiveness during heavy IO workloads (like vzdump) by setting the IO scheduler to "deadline".

You can do this by adding "elevator=deadline" to /boot/grub/menu.lst, at the end of your running kernel line, so it looks something like this:
Code:
kernel          /vmlinuz-2.6.32-6-pve root=/dev/mapper/pve-root ro quiet elevator=deadline
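
If you want to try it before rebooting, the scheduler can also be changed at runtime through sysfs; a minimal sketch, where /dev/sda is a placeholder for your actual disk:
Code:
# Show the available schedulers for a disk; the one in brackets is
# active, e.g. "noop deadline [cfq]" (the exact list depends on the kernel)
cat /sys/block/sda/queue/scheduler

# Switch that disk to deadline immediately; this is lost on reboot,
# which is what the grub change above makes permanent
echo deadline > /sys/block/sda/queue/scheduler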

We have been testing all the IO schedulers under Proxmox for several years now, on both single-disk setups and 4, 6 and 8-disk software and hardware RAID arrays, and deadline consistently provides the highest performance for every kernel (2.6.26, 2.6.32-4, 2.6.32-6, 2.6.32-10, etc.).

Even the Linux kernel developer community is starting to agree that CFQ is not optimal for server use:
http://blogs.sybase.com/database/2010/03/io-schedulers-is-linux-really-an-enterprise-os/
http://www.gelato.unsw.edu.au/IA64wiki/LinuxIOSchedulers

More info on changing your IO scheduler:
http://stackoverflow.com/questions/1009577/selecting-a-linux-i-o-scheduler
http://www.serverwatch.com/tutorials...-Scheduler.htm
 

dietmar

Proxmox Staff Member
Staff member
Apr 28, 2005
17,124
521
133
Austria
www.proxmox.com
Re: I/O and IO delay on SATA affects VM responsiveness even if VMs are on different SAS storage

Unfortunately, the deadline scheduler lacks fundamental features (ionice, OpenVZ ioprio).
 

gkovacs

Well-Known Member
Dec 22, 2008
509
48
48
Budapest, Hungary
Re: I/O and IO delay on SATA affects VM responsiveness even if VMs are on different SAS storage

Unfortunately, the deadline scheduler lacks fundamental features (ionice, OpenVZ ioprio).

I'm not sure how useful ionice or ioprio can be if the server's responsiveness is not maintained. With CFQ, websites and other network services regularly time out during vzdump backups, regardless of the "niceness".
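
For reference, this is roughly what using ionice looks like (a sketch; the VM ID and PID are just examples), and it only has an effect under CFQ, which is exactly the trade-off in question:
Code:
# Run a backup in the "idle" IO class, so it only gets disk time
# when nothing else is waiting (honored by CFQ only)
ionice -c3 vzdump 101

# Or lower the IO priority of an already-running process
# (class 2 = best-effort, 7 = lowest priority within the class)
ionice -c2 -n7 -p 12345

Even so, as noted above, services still time out during vzdump regardless of these settings.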
 

mmenaz

Active Member
Jun 25, 2009
828
20
38
Northern east Italy
Re: I/O and IO delay on SATA affects VM responsiveness even if VMs are on different SAS storage

Thanks a lot for the info. It seems this "problem" affects Proxmox 2.1 as well. I agree that the worst part of the solution is losing ionice. In the current situation the slowest storage, when in use, dictates the performance of the whole system, and this is not good at all (i.e. if you install Proxmox on RAID 1 and put the LVM storage on RAID 10, heavy I/O on the RAID 1 will affect RAID 10 performance as well!).
I'm also wondering how I/O behaves when I vzdump to a large SATA disk and/or when I do an "offline backup" with rsync to an external USB drive; I'll have to test.
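Since the scheduler is set per block device, maybe I can keep CFQ on the SAS array (so ionice/ioprio keep working there) and test deadline only on the SATA disk; a sketch, where the device names are just placeholders for my setup:
Code:
# SATA disk that Proxmox and the backups live on: switch to deadline
echo deadline > /sys/block/sda/queue/scheduler

# SAS array holding the KVM VMs: leave CFQ active
cat /sys/block/sdb/queue/scheduler
# noop deadline [cfq]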
 
