VirtIO vs SCSI

Some VMs get a huge disk I/O performance increase simply by switching the disk bus/device to VirtIO Block.

Results with SCSI:

Code:
[root@acrux testjes]# dd if=/dev/zero of=testfile bs=1M count=1000 oflag=dsync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 93.3164 s, 11.2 MB/s
[root@acrux testjes]# dd if=/dev/zero of=testfile bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 20.9574 s, 51.2 MB/s


Results with VirtIO block:

Code:
[root@acrux testjes]# dd if=/dev/zero of=testfile bs=1M count=1000 oflag=dsync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 42.9133 s, 24.4 MB/s
[root@acrux testjes]# dd if=/dev/zero of=testfile bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 5.74577 s, 187 MB/s

Can such a huge performance increase be expected from VirtIO Block? It seems odd that such a seemingly small change has such a large impact :)

Background information: Ceph cluster with 3 nodes. SCSI controller used is VirtIO SCSI (in both tests).
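
For reference, the only intended difference between the two runs is the bus the disk is attached to; in the VM config that would look roughly like this (a sketch only, storage name and VMID are made up):

Code:
# /etc/pve/qemu-server/100.conf (illustrative values)
scsihw: virtio-scsi-pci                     # SCSI controller type: VirtIO SCSI (same in both tests)
scsi0: ceph-pool:vm-100-disk-0,size=32G     # run 1: disk on the SCSI bus
#virtio0: ceph-pool:vm-100-disk-0,size=32G  # run 2: the same disk on the VirtIO Block bus instead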
 
Hi,

generally, a test with dd is not really meaningful.
Use fio instead.

What I can tell you is that VirtIO SCSI is the better maintained one, while virtio-blk is the older implementation.
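
A fio run roughly equivalent to the dd test above could look like this (a sketch only; file name, size and sync strategy are assumptions):

Code:
# Sequential 1 MiB writes with a data sync after every write, similar to dd ... oflag=dsync
fio --name=seqwrite-dsync --filename=testfile --size=1G \
    --rw=write --bs=1M --ioengine=psync --fdatasync=1 --group_reporting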
 
I used `fio` afterwards to test the best-performing setups according to `dd`. A VirtIO SCSI controller with the SCSI bus for the disk, SSD emulation and writeback cache is the winner. Actually, options like IO thread with the VirtIO SCSI single controller make things perform worse. I was also surprised to see that SSD emulation makes a significant difference.

https://pve.proxmox.com/wiki/Performance_Tweaks states: "Use virtIO for disk and network for best performance."

Do they mean VirtIO controller or VirtIO bus?
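
As far as I understand, the controller and the bus are two separate settings; the winning setup described above would presumably be configured along these lines (VMID, storage and volume names are made up):

Code:
# Controller type (the VM's "SCSI Controller" option)
qm set 100 --scsihw virtio-scsi-pci
# Disk on the SCSI bus with writeback cache and SSD emulation (the disk's "Bus/Device" option)
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback,ssd=1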
 

I just signed up to say thank you for this post. I was having performance issues in the VMs on 6G SSDs, pulling only 50 MB/s.
With your recommendations, it's now up to 150 MB/s.
 
... options like IO thread with the VirtIO SCSI single controller make things perform worse.
On a single benchmark maybe, but on a real-life system with several I/O-intensive workloads (VMs and databases), why should I/O threading be a disadvantage if it gives you several uncorrelated I/O queues? Without I/O threading, everything ends up in one queue and has to wait until it is handled.

I'm curious now ...
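
For anyone who wants to reproduce this, IO thread is presumably enabled per disk roughly like this (VMID and volume are made up; as far as I know the per-disk IO thread only takes effect with the VirtIO SCSI single controller):

Code:
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback,iothread=1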
 
I know this is an old thread, but I'm currently investigating IOPS on our Ceph cluster.

Test VM: Debian 10 (kernel 4.19)
Test suite: fio with a 4k randrw test
Every test was repeated 3 times
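
The runs were along these lines (a sketch, not the exact job file; device path, queue depth and read/write mix are assumptions):

Code:
# 4k random read/write mix against the raw block device inside the VM (destructive for that device!)
fio --name=randrw-4k --filename=/dev/sdb --rw=randrw --rwmixread=70 \
    --bs=4k --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting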

First, tests against raw RBD block devices from within the VM, to identify the best bus (SCSI vs Virtio) with no cache and with writeback cache:

Code:
BUS     Cache      IOThread  SSD Emulation  Discard  IOPS Read  IOPS Write  BW Read    BW Write
SCSI    None       0         0              0        14 k       6 k         24 MB/s    24 MB/s
SCSI    Writeback  1         1              1        40 k       17 k        164 MB/s   71 MB/s
SCSI    Writeback  0         1              1        40 k       17 k        165 MB/s   71 MB/s
SCSI    Writeback  0         0              0        40 k       17 k        165 MB/s   71 MB/s
SCSI    Writeback  0         0              1        39 k       17 k        163 MB/s   70 MB/s
Virtio  None       0         0              0        14 k       6 k         54 MB/s    22 MB/s
Virtio  Writeback  1         0              1        46 k       20 k        187 MB/s   79 MB/s
Virtio  Writeback  0         0              1        45 k       19 k        185 MB/s   79 MB/s
Virtio  Writeback  0         0              0        45 k       19 k        185 MB/s   79 MB/s

I dropped "None" Cache combinations as the first test showed that writeback is definitely the winner.

So far IOThread didn't slow down anything.

Next i did tests on a mounted ext4 filesystem (not rootfs) and suprinsingly IOThread indeed did slow down things.

Code:
BUS     Cache      IOThread  SSD Emulation  Discard  IOPS Read  IOPS Write  BW Read    BW Write
Virtio  Writeback  1         0              1        36 k       16 k        148 MB/s   64 MB/s
Virtio  Writeback  0         0              1        41 k       18 k        169 MB/s   72 MB/s
Virtio  Writeback  0         0              0        45 k       19 k        183 MB/s   78 MB/s
Virtio  Writeback  1         0              0        38 k       16 k        157 MB/s   67 MB/s

I was also surprised that activating discard slowed things down too.

It looks like Virtio with writeback cache and all other options disabled provides the best performance for ext4.
Other filesystems may behave differently.

But is using writeback cache really safe when using live migration? What are the possible problems?

Update:
Doing the raw device tests on Debian 11 (kernel 5.10) increased overall performance by 10-15%.

Code:
BUS     Cache      IOThread  SSD Emulation  Discard  IOPS Read  IOPS Write  BW Read    BW Write
Virtio  Writeback  0         0              0        56 k       24 k        230 MB/s   99 MB/s

Doing the filesystem tests on Debian 11 (kernel 5.10) decreased overall performance by 10-15%, which is very frustrating.

Code:
BUS     Cache      IOThread  SSD Emulation  Discard  IOPS Read  IOPS Write  BW Read    BW Write
Virtio  Writeback  0         0              0        38 k       17 k        160 MB/s   68 MB/s
 
It looks like Virtio with writeback cache and all other options disabled provides the best performance for ext4.
Other filesystems may behave differently.

But is using writeback cache really safe when using live migration? What are the possible problems?

Hi,

Write-back cache is not so safe, because the vDisk will cache all of this data in RAM (if the buffer is sufficiently large) and tell the applications: "OK, I wrote the data to disk, go on." When the buffers fill up, or every X seconds (OS dependent; Windows uses 60 s if I remember correctly, and Linux around 5-10 s), the data is flushed to the vDisk. The real danger is that if anything bad happens between two consecutive flushes (OS crash, host crash, power loss, and so on), you lose the buffered data that was still sitting in RAM.
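
On Linux, those writeback intervals can be inspected (and tuned) via sysctl; a quick check might look like this (the values shown are common kernel defaults, not measurements):

Code:
# Dirty pages older than this are written out by the flusher threads (centiseconds)
sysctl vm.dirty_expire_centisecs       # e.g. vm.dirty_expire_centisecs = 3000
# How often the flusher threads wake up (centiseconds)
sysctl vm.dirty_writeback_centisecs    # e.g. vm.dirty_writeback_centisecs = 500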

Live migration of any VM WITH the qemu-guest-agent active and operational will be OK (live migration starts by flushing any buffered data to the vDisk, and after that starts to move the vDisk). Without the qemu-guest-agent, yes, you can lose your data if the VM or the node crashes during the process.

Also note that leaving the discard option disabled is not as good as you think, because the backend storage of the vDisk will not be able to free unused blocks that were deleted by the VM's filesystem.

Good luck / Bafta !
 
Write-back cache is not so safe, because the vDisk will cache all of this data in RAM (if the buffer is sufficiently large) and tell the applications: "OK, I wrote the data to disk, go on." ... The real danger is that if anything bad happens between two consecutive flushes (OS crash, host crash, power loss, and so on), you lose the buffered data that was still sitting in RAM.
Sure, that's what writeback does. :)

Live migration of any VM WITH the qemu-guest-agent active and operational will be OK (live migration starts by flushing any buffered data to the vDisk, and after that starts to move the vDisk). Without the qemu-guest-agent, yes, you can lose your data if the VM or the node crashes during the process.
Ok, so the usual writeback drawbacks. Thanks.

Also note that leaving the discard option disabled is not as good as you think, because the backend storage of the vDisk will not be able to free unused blocks that were deleted by the VM's filesystem.
Yes, but this only applies to flash storage. I just wanted to point out that discard impacts performance. In my case I need discard, but that does not automatically apply to others. I've tested on flash storage; it would be interesting to see whether the discard option also impacts performance on spinning disks.
 
Not true!
True, but I would appreciate it if, instead of just saying "Not true", you would provide usable information. :)

I wrongly interpreted the Discard option! It is about thin provisioning, not about flash storage.

"Disk images in Proxmox are sparse regardless of the image type, meaning the disk image grows slowly as more data gets stored in it. Over time, data gets created and deleted within the filesystem of the disk image. But in a sparse disk image, even after data is deleted, it never reclaims the free space. The VM may report the correct available storage space but Proxmox storage will show higher storage usage. The Discard option allows the node to reclaim the free space that does not have any data. This is equivalent to the TRIM option that was introduced in SSD drives. Before this option can be used, we have to ensure that the VM uses the VirtIO SCSI controller. We can set the SCSI Controller Type under virtual machine's Options tab: ..."
https://www.oreilly.com/library/vie...05/03431488-8696-41e3-92e2-a60482b6e4e9.xhtml

Or
https://gist.github.com/hostberg/86bfaa81e50cc0666f1745e1897c0a56
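
To actually reclaim the space, something along these lines should work (VMID and volume are made up; the guest-side command assumes mounted filesystems that support TRIM):

Code:
# On the PVE host: enable discard on the virtual disk (VirtIO SCSI controller required)
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,discard=on
# Inside the guest: discard unused blocks on all mounted filesystems that support it
fstrim -av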
 
Also keep in mind that any host-side caching setting (everything besides None) needs additional RAM on the host/hypervisor; people often complain about PVE using too much memory, and this is one reason why. You can also end up caching on multiple layers: with ZFS, for example, None is the default and should stay the default if you don't want to cache twice (besides the host-side cache, there may also be a cache inside your guest OS).
 
All of your points are valid, but in the end the cluster has to be fast enough to fit the needs, and with cache set to "None" it's not.
I generally favor safety over performance, but in this case the numbers are far too bad, and users have to be able to do their work without going for a coffee until the applications are done.
 