VirtIO vs SCSI

tuxis

Well-Known Member
Jan 3, 2014
Ede, NL
www.tuxis.nl
Some VMs get a huge disk I/O performance increase simply by switching the disk bus/device to VirtIO Block.

Results with SCSI:

Code:
[root@acrux testjes]# dd if=/dev/zero of=testfile bs=1M count=1000 oflag=dsync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 93.3164 s, 11.2 MB/s
[root@acrux testjes]# dd if=/dev/zero of=testfile bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 20.9574 s, 51.2 MB/s


Results with VirtIO block:

Code:
[root@acrux testjes]# dd if=/dev/zero of=testfile bs=1M count=1000 oflag=dsync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 42.9133 s, 24.4 MB/s
[root@acrux testjes]# dd if=/dev/zero of=testfile bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 5.74577 s, 187 MB/s

Is such a huge performance increase to be expected from VirtIO? It seems odd that such a seemingly small change has such a large impact :)

Background information: Ceph cluster with 3 nodes. SCSI controller used is VirtIO SCSI (in both tests).
 

wolfgang

Proxmox Staff Member
Oct 1, 2014
Hi,

generally, a test with dd is not really meaningful; use fio instead.

What I can tell you is that VirtIO SCSI is the better-maintained option, while virtio-blk is the older one.
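A minimal fio job that roughly mirrors the dd test above might look like this (these parameters are an assumption for illustration, not the exact job anyone in this thread ran):

```shell
# Sequential 1M writes with O_SYNC on every write, roughly
# comparable to `dd bs=1M oflag=dsync`
fio --name=seqwrite-sync \
    --filename=testfile \
    --size=1G \
    --rw=write \
    --bs=1M \
    --sync=1 \
    --ioengine=psync \
    --group_reporting
```

Unlike dd, fio also reports latency percentiles and IOPS, which makes comparisons between disk configurations far more meaningful.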
 

tuxis

Well-Known Member
Jan 3, 2014
Ede, NL
www.tuxis.nl
> Hi,
>
> generally, a test with dd is not really meaningful; use fio instead.
>
> What I can tell you is that VirtIO SCSI is the better-maintained option, while virtio-blk is the older one.
I used `fio` afterwards to test the setups that performed best according to `dd`. The VirtIO SCSI controller with a SCSI bus for the disk, SSD emulation, and writeback cache is the winner. Surprisingly, options like IO Thread with the VirtIO SCSI single controller actually make things perform worse. I was also surprised to see that SSD emulation makes a significant difference.

https://pve.proxmox.com/wiki/Performance_Tweaks states: "Use virtIO for disk and network for best performance."

Do they mean the VirtIO controller or the VirtIO bus?
 

lilszi

New Member
Jun 25, 2020
> I used `fio` afterwards to test the setups that performed best according to `dd`. The VirtIO SCSI controller with a SCSI bus for the disk, SSD emulation, and writeback cache is the winner. Surprisingly, options like IO Thread with the VirtIO SCSI single controller actually make things perform worse. I was also surprised to see that SSD emulation makes a significant difference.
>
> https://pve.proxmox.com/wiki/Performance_Tweaks states: "Use virtIO for disk and network for best performance."
>
> Do they mean the VirtIO controller or the VirtIO bus?

I just signed up to say thank you for this post. I was having performance issues in VMs on 6 Gb/s SSDs, pulling only 50 MB/s.
With your recommendations, it's now up to 150 MB/s.
 

gb00s

Member
Aug 4, 2017
> ... options like IO Thread with the VirtIO SCSI single controller actually make things perform worse.
On a single benchmark, maybe. But on a real-life system with several I/O-intensive workloads (VMs and databases), why should I/O threading be a disadvantage when it gives you several independent I/O queues? Without I/O threading, everything stays in one line and has to wait its turn.

I'm curious now ...
 

christian.g

Member
Jun 4, 2020
I know this is an old thread, but I'm currently investigating IOPS on our Ceph cluster.

Test VM: Debian 10 (kernel 4.19)
Test suite: fio with a 4k randrw test
Every test was repeated 3 times

First, I ran tests against raw RBD block devices from within the VM to identify the best bus (SCSI vs. VirtIO), with no cache and with writeback cache:
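The exact fio job file wasn't posted; a 4k randrw test of the kind described could be sketched like this (the target device, read/write mix, and queue depth are all assumptions):

```shell
# 4k random read/write with direct I/O, bypassing the guest page cache
# so the virtual disk itself is measured.
# WARNING: writing to a raw device destroys any data on it.
fio --name=randrw-4k \
    --filename=/dev/vdb \
    --rw=randrw \
    --rwmixread=70 \
    --bs=4k \
    --direct=1 \
    --ioengine=libaio \
    --iodepth=32 \
    --runtime=60 \
    --time_based \
    --group_reporting
```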

Code:
BUS    | Cache     | IOThread | SSD Emulation | Discard | IOPS Read | IOPS Write | BW Read  | BW Write
SCSI   | None      | 0        | 0             | 0       | 14k       | 6k         | 24 MB/s  | 24 MB/s
SCSI   | Writeback | 1        | 1             | 1       | 40k       | 17k        | 164 MB/s | 71 MB/s
SCSI   | Writeback | 0        | 1             | 1       | 40k       | 17k        | 165 MB/s | 71 MB/s
SCSI   | Writeback | 0        | 0             | 0       | 40k       | 17k        | 165 MB/s | 71 MB/s
SCSI   | Writeback | 0        | 0             | 1       | 39k       | 17k        | 163 MB/s | 70 MB/s
Virtio | None      | 0        | 0             | 0       | 14k       | 6k         | 54 MB/s  | 22 MB/s
Virtio | Writeback | 1        | 0             | 1       | 46k       | 20k        | 187 MB/s | 79 MB/s
Virtio | Writeback | 0        | 0             | 1       | 45k       | 19k        | 185 MB/s | 79 MB/s
Virtio | Writeback | 0        | 0             | 0       | 45k       | 19k        | 185 MB/s | 79 MB/s

I dropped the "None" cache combinations, as the first test showed that writeback is definitely the winner.

So far, IOThread didn't slow anything down.

Next, I ran tests on a mounted ext4 filesystem (not the rootfs), and surprisingly IOThread did slow things down:

Code:
BUS    | Cache     | IOThread | SSD Emulation | Discard | IOPS Read | IOPS Write | BW Read  | BW Write
Virtio | Writeback | 1        | 0             | 1       | 36k       | 16k        | 148 MB/s | 64 MB/s
Virtio | Writeback | 0        | 0             | 1       | 41k       | 18k        | 169 MB/s | 72 MB/s
Virtio | Writeback | 0        | 0             | 0       | 45k       | 19k        | 183 MB/s | 78 MB/s
Virtio | Writeback | 1        | 0             | 0       | 38k       | 16k        | 157 MB/s | 67 MB/s

I was also surprised that enabling discard slowed things down too.

It looks like Virtio with writeback cache and all other options disabled provides the best performance for ext4.
Other filesystems may behave differently.
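Assuming the standard Proxmox `qm` CLI, that winning combination could be set on an existing disk roughly like this (the VM ID, storage name, and volume name are placeholders):

```shell
# Hypothetical VM 100 with its disk on storage 'ceph-pool':
# VirtIO Block bus with writeback cache; IOThread, SSD emulation
# and discard left disabled
qm set 100 --virtio0 ceph-pool:vm-100-disk-0,cache=writeback

# For comparison, the SCSI-bus variant with the extra options enabled
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback,ssd=1,discard=on,iothread=1
```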

But is using writeback cache really safe when using live migration? What are the possible problems?

Update:
Doing the raw-device tests on Debian 11 (kernel 5.10) increased overall performance by 10-15%:

Code:
BUS    | Cache     | IOThread | SSD Emulation | Discard | IOPS Read | IOPS Write | BW Read  | BW Write
Virtio | Writeback | 0        | 0             | 0       | 56k       | 24k        | 230 MB/s | 99 MB/s

Doing the filesystem tests on Debian 11 (kernel 5.10) decreased overall performance by 10-15%, which is very frustrating:

Code:
BUS    | Cache     | IOThread | SSD Emulation | Discard | IOPS Read | IOPS Write | BW Read  | BW Write
Virtio | Writeback | 0        | 0             | 0       | 38k       | 17k        | 160 MB/s | 68 MB/s
 

guletz

Famous Member
Apr 19, 2017
Brasov, Romania
> It looks like Virtio with writeback cache and all other options disabled provides the best performance for ext4.
> Other filesystems may behave differently.
>
> But is using writeback cache really safe when using live migration? What are the possible problems?

Hi,

Write-back cache is not so safe, because the vDisk caches all this data in RAM (if the buffer is sufficiently large) and tells the applications: "OK, I wrote the data to disk, go on." When the buffers fill up, or every X seconds (OS-dependent; if I remember correctly, Windows uses 60 s and Linux around 5-10 s), the data is flushed to the vDisk. The real danger is that if anything bad happens between two consecutive flushes (an OS crash, a host crash, a power loss, and so on), you lose the buffered data that was still sitting in RAM.
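On Linux, the flush interval mentioned here is governed by the kernel's dirty-page writeback sysctls; a quick way to inspect them (the values shown in comments are common defaults and may differ per distribution):

```shell
# How often the writeback threads wake up, in centiseconds (500 = 5 s)
cat /proc/sys/vm/dirty_writeback_centisecs

# How old dirty data must be before it is written out (3000 = 30 s)
cat /proc/sys/vm/dirty_expire_centisecs

# Force an immediate flush of all dirty pages to stable storage
sync
```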

Live migration of any VM with the qemu-guest-agent active and operational will be fine (live migration starts by flushing any buffered data to the vDisk, and only then moves the vDisk). Without the qemu-guest-agent, yes, you can lose data if the VM or the node crashes during the process.

Also note that leaving the discard option disabled is not as harmless as you might think, because the backend storage of the vDisk will not be able to free the unused blocks that the VM's filesystem has deleted.

Good luck / Bafta!
 

christian.g

Member
Jun 4, 2020
> Write-back cache is not so safe, because the vDisk caches all this data in RAM (if the buffer is sufficiently large) and tells the applications: "OK, I wrote the data to disk, go on." When the buffers fill up, or every X seconds (OS-dependent; if I remember correctly, Windows uses 60 s and Linux around 5-10 s), the data is flushed to the vDisk. The real danger is that if anything bad happens between two consecutive flushes (an OS crash, a host crash, a power loss, and so on), you lose the buffered data that was still sitting in RAM.
Sure, that's what writeback does. :)

> Live migration of any VM with the qemu-guest-agent active and operational will be fine (live migration starts by flushing any buffered data to the vDisk, and only then moves the vDisk). Without the qemu-guest-agent, yes, you can lose data if the VM or the node crashes during the process.
Ok, so the usual writeback drawbacks. Thanks.

> Also note that leaving the discard option disabled is not as harmless as you might think, because the backend storage of the vDisk will not be able to free the unused blocks that the VM's filesystem has deleted.
Yes, but this only applies to flash storage. I just wanted to point out that discard impacts performance. In my case I need discard, but that doesn't automatically apply to others. I tested on flash storage; it would be interesting to see whether the discard option also impacts performance on spinning disks.
 

christian.g

Member
Jun 4, 2020
> Not true!
True, but I would appreciate it if, instead of just saying "Not true", you provided usable information. :)

I interpreted the Discard option wrongly! It is about thin provisioning, not about flash storage.

"Disk images in Proxmox are sparse regardless of the image type, meaning the disk image grows slowly as more data gets stored in it. Over time, data gets created and deleted within the filesystem of the disk image. But in a sparse disk image, even after data is deleted, it never reclaims the free space. The VM may report the correct available storage space but Proxmox storage will show higher storage usage. The Discard option allows the node to reclaim the free space that does not have any data. This is equivalent to the TRIM option that was introduced in SSD drives. Before this option can be used, we have to ensure that the VM uses the VirtIO SCSI controller. We can set the SCSI Controller Type under virtual machine's Options tab: ..."
https://www.oreilly.com/library/vie...05/03431488-8696-41e3-92e2-a60482b6e4e9.xhtml

Or
https://gist.github.com/hostberg/86bfaa81e50cc0666f1745e1897c0a56
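With discard enabled on the virtual disk, the guest still has to issue the TRIM commands itself; on a Linux guest that usually means something like the following (run as root inside the VM):

```shell
# Trim free space on all mounted filesystems that support discard;
# -v prints how much was trimmed on each mount point
fstrim -av

# Or let systemd trim periodically, where the timer unit exists
systemctl enable --now fstrim.timer
```

After the trim, the sparse disk image on the Proxmox storage should shrink back to roughly the space actually in use.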
 

LnxBil

Famous Member
Feb 21, 2015
Saarland, Germany
Also keep in mind that any host-side caching setting (everything besides None) needs additional RAM on the host/hypervisor; people often complain about PVE using too much memory, and this is why. You can also end up caching on multiple layers: with ZFS, for example, None is the default and should remain the default if you don't want to cache twice (besides the host-side cache, there may also be a cache inside your guest OS).
 
