Changing the SCSI controller type increases disk write speed, but also increases server load!

Hello all,

Been running some RAID tests over the last few months, and the findings are somewhat amusing and scary at the same time. Not sure if it's a bug, but we need some help to track down the issue and a solution.

Example of the VM config:

bootdisk: scsi0
cores: 4
memory: 4096
name: test
ostype: l26
scsi0: SATA:100/vm-100-disk-1.raw,format=raw,cache=writeback,size=40G
scsihw: megasas
sockets: 1

Results: 17 GB copied in 41.5671 s (413 MB/s), and the server load climbs to a 10-14 load average.

The command run inside the VM is: dd if=/dev/zero of=test bs=1M count=16k conv=fdatasync. Both the VM and host I/O schedulers are set to deadline; changing to noop shows a slight difference, but not enough to be worth posting the results.
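
For anyone reproducing the scheduler part, this is roughly how it was switched at runtime (a sketch; sda stands in for whatever block device actually backs the storage):

# show the available schedulers; the active one is in brackets
cat /sys/block/sda/queue/scheduler
# switch to noop at runtime (does not survive a reboot)
echo noop > /sys/block/sda/queue/scheduler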

bootdisk: scsi0
cores: 4
ide2: local:iso/CentOS-6.4-x86_64-minimal.iso,media=cdrom
memory: 4096
name: test
ostype: l26
scsi0: SATA:100/vm-100-disk-1.raw,format=raw,cache=writeback,size=40G
scsihw: lsi
sockets: 1

Results: 17 GB copied in 116.105 s (148 MB/s), and the server load stays at a 3-4 load average.

Just changing the controller type for the VM to lsi (the default) gives me slower disk speeds, but the system load doesn't go past 4-5 and the I/O delay is not as high.
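
For reference, the controller type can also be switched from the host CLI instead of editing the config file by hand (a sketch, assuming VMID 100 as in the configs above):

# change the emulated controller; takes effect after the VM is stopped and started
qm set 100 --scsihw lsi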



bootdisk: scsi0
cores: 4
ide2: local:iso/CentOS-6.4-x86_64-minimal.iso,media=cdrom
memory: 4096
name: test
ostype: l26
scsi0: SATA:100/vm-100-disk-1.raw,format=raw,cache=writeback,size=40G
scsihw: virtio-scsi-pci
sockets: 1

Using virtio for the controller type sends the system load to 50+, and the disk result is 17 GB copied in 113.133 s (152 MB/s).

root@host-03:~# pveperf
CPU BOGOMIPS: 72531.28
REGEX/SECOND: 930624
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 505.01 MB/sec
AVERAGE SEEK TIME: 0.15 ms
FSYNCS/SECOND: 2384.59
DNS EXT: 74.14 ms
DNS INT: 58.48 ms (google.com)


The RAID card used is an LSI MegaRAID SAS 9260-8i with the latest firmware.

Thanks in advance.
 
Re: Changing the SCSI controller type increases disk write speed, but also increases server load

cache=writeback is very likely the source of your problem.

writeback causes the data to be copied multiple times.
writethrough also causes additional data copies and, in my experience, results in worse performance.
cache=none, from my testing, provides the most consistent and fastest performance.

I use virtio with cache=none on all of my VMs.

writeback, if you want to use it, might benefit from tuning the dirty buffers; I started a thread about this today:
http://forum.proxmox.com/threads/15893-IO-Performance-Tuning
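
For context, that tuning is about the kernel's dirty-page thresholds, which control how much written data may pile up in RAM before it gets flushed. A minimal sketch (the values are illustrative, not recommendations):

# percent of RAM that may hold dirty pages before background writeback starts
sysctl -w vm.dirty_background_ratio=5
# hard ceiling at which writing processes are forced to flush synchronously
sysctl -w vm.dirty_ratio=10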


When changing the cache mode on a disk, you need to stop and then start the VM for the change to take effect.
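
A sketch of the full sequence on the host (assuming VMID 100 and the disk string from the configs above):

# switch the disk to cache=none, then restart the QEMU process
qm set 100 --scsi0 SATA:100/vm-100-disk-1.raw,format=raw,cache=none,size=40G
qm stop 100
qm start 100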
 
Re: Changing the SCSI controller type increases disk write speed, but also increases server load

Load dropped back to normal levels, but the disk write speed dropped as well.

New results below. Notice that the speeds are now all about the same: megasas is half as fast as in the previous tests (but with no load now), while lsi and virtio are roughly unchanged, with no big difference between any of them.

bootdisk: scsi0
cores: 4
memory: 4096
name: test
ostype: l26
scsi0: SATA:100/vm-100-disk-1.raw,format=raw,cache=default,size=40G
scsihw: megasas
sockets: 1

Using megasas: 17 GB copied in 79.3624 s (216 MB/s), with server load at 0.4-1.5.

The same dd command and scheduler settings as in the earlier tests were used.

bootdisk: scsi0
cores: 4
ide2: local:iso/CentOS-6.4-x86_64-minimal.iso,media=cdrom
memory: 4096
name: test
ostype: l26
scsi0: SATA:100/vm-100-disk-1.raw,format=raw,cache=default,size=40G
scsihw: lsi
sockets: 1

Using lsi: 17 GB copied in 80.8978 s (212 MB/s), with server load at 0.4-1.5.


bootdisk: scsi0
cores: 4
ide2: local:iso/CentOS-6.4-x86_64-minimal.iso,media=cdrom
memory: 4096
name: test
ostype: l26
scsi0: SATA:100/vm-100-disk-1.raw,format=raw,cache=default,size=40G
scsihw: virtio-scsi-pci
sockets: 1

Using virtio: 17 GB copied in 79.3272 s (217 MB/s), with server load at 0.4-1.5.
 
Re: Changing the SCSI controller type increases disk write speed, but also increases server load

With writeback you had faster writes because you were writing to RAM, and the host would then write to disk in the background. That is also why the load was higher.

With no cache you are writing to the disk, not to RAM. No cache produces less load because data is not copied around in RAM as much.
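
You can watch this happen on the host while a writeback test runs; a quick sketch:

# Dirty = data waiting in RAM to be flushed; Writeback = data being flushed right now
watch -n1 "grep -E '^(Dirty|Writeback):' /proc/meminfo"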

Don't let the higher writeback speed fool you: your server will not be writing sequentially like dd does. It will be reading and writing random parts of the disk. Put your server under a real workload and measure what works best.
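
As a closer stand-in for a real workload than dd, a mixed random read/write run with fio looks roughly like this (a sketch, assuming fio is installed in the guest; sizes, runtime, and job count are illustrative):

fio --name=randrw --rw=randrw --bs=4k --size=4G --numjobs=4 \
    --direct=1 --ioengine=libaio --runtime=60 --time_based --group_reporting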

In my experience writeback killed read performance while only providing a fake temporary write speed boost.
 
