VIRTIO SCSI vs VIRTIO SCSI single

mir
Can anybody explain the big performance difference between VIRTIO SCSI and VIRTIO SCSI single, especially when using iothread=0 vs iothread=1?

Code:
read : io=1637.7MB, bw=39596KB/s, iops=6491, runt= 42350msec  VIRTIO SCSI iothread=0
read : io=1637.7MB, bw=41136KB/s, iops=6744, runt= 40765msec  VIRTIO SCSI single iothread=0
read : io=1637.7MB, bw=36911KB/s, iops=6051, runt= 45431msec  VIRTIO SCSI iothread=1
read : io=1637.7MB, bw=53724KB/s, iops=8808, runt= 31213msec  VIRTIO SCSI single iothread=1

write: io=420263KB, bw=9923.6KB/s, iops=1625, runt= 42350msec  VIRTIO SCSI iothread=0
write: io=420263KB, bw=10309KB/s, iops=1689, runt= 40765msec  VIRTIO SCSI single iothread=0
write: io=420263KB, bw=9250.6KB/s, iops=1515, runt= 45431msec  VIRTIO SCSI iothread=1
write: io=420263KB, bw=13464KB/s, iops=2206, runt= 31213msec  VIRTIO SCSI single iothread=1
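
For anyone wanting to reproduce this: the exact fio job wasn't posted. Here is a minimal sketch that matches the roughly 80/20 read/write mix in the results above; the block size, queue depth, and runtime are assumptions, not the original settings.

Code:
# hypothetical fio job -- the original parameters were not posted
fio --name=randrw-test --filename=/mnt/test.fio --size=2G \
    --direct=1 --rw=randrw --rwmixread=80 --bs=4k \
    --ioengine=libaio --iodepth=32 --runtime=60 --time_based \
    --group_reporting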
 
So iothread with VirtIO SCSI is pointless, or do I misunderstand you?
Yes, pointless (at least in the current QEMU implementation). VirtIO SCSI single was added for iothread and multiqueue support, which are both controller properties. (Same for VirtIO Block, which by default always has one controller per disk.)
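
In Proxmox config terms (/etc/pve/qemu-server/<vmid>.conf) the two choices look like this; the volume name is a placeholder, and a real config has only one scsihw line (both are shown for contrast):

Code:
# "VirtIO SCSI": one controller shared by all scsiX disks
scsihw: virtio-scsi-pci
# "VirtIO SCSI single": one controller per disk, so each disk's
# iothread flag can actually take effect
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-0,iothread=1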
 
...pretty old thread but best search result for "VIRTIO SCSI vs VIRTIO SCSI single"

My question: the "Shrink Qcow2 Disk Files" guide says the controller must be set to "VIRTIO SCSI". Is that just outdated info and "VIRTIO SCSI single" is fine as well, or does it have to be "VIRTIO SCSI"?
 
That's a good question. Make a backup and test it for us... I'm curious myself...
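
If anyone does test it, a rough sketch of the sequence (VM ID 100, the storage name, and the image paths are placeholders; this is the general shrink technique, not necessarily the exact wiki steps):

Code:
# back up first
vzdump 100 --mode snapshot --storage local
# with the VM shut down, rewrite the qcow2; zeroed/unused clusters are dropped
qemu-img convert -O qcow2 vm-100-disk-0.qcow2 vm-100-disk-0-shrunk.qcow2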
 
Also curious about this; it would be nice to have better documentation for "virtio scsi" vs "virtio scsi single".
 
Seems as though this has become a hot topic as of late. I am here to see if it makes a difference on a virtualized instance of TrueNAS... Cover me, I'm gonna start pulling levers and throwing switches.
 
Please let me know if this is a solid and comprehensive understanding of this topic:

VirtIO SCSI vs VirtIO SCSI Single boils down to a simple architectural choice that has real performance implications:

Standard VirtIO SCSI uses one controller that handles up to 16 disks, while Single dedicates one controller per disk. This matters most when using IOThreads (iothread=1), because threads work at the controller level.

When using IOThreads, Single shows significantly better performance (often 30-50% improvement) because each disk gets its own dedicated processing thread. Without IOThreads, the performance difference is minimal (typically less than 5%).

So the choice is straightforward:
  • Want maximum disk performance? Use Single + iothread=1
  • Managing lots of disks with limited resources? Standard might be better to avoid thread overhead
  • From the VM's perspective, they work exactly the same
This explains why benchmarks consistently show better I/O performance with Single + iothread=1, while keeping the underlying architectural differences clear.
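
If that summary holds, the "maximum performance" option from the list above is two commands on the host (VM ID and volume name are placeholders):

Code:
# switch to the per-disk controller, then enable an I/O thread on the disk
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1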
 
Drivers installed, but after shutting down and flipping the bus back it went to the Server 2012 blue error menu again. Back in IDE for now; maybe it needed a reboot to complete the driver install, then shut down and try the VirtIO bus again.
 
It seems to run fine with drives in IDE mode with controller set to Virtio SCSI single. This one is just an admin server so I'll leave it like this. Thanks again
 
FYI... IDE mode severely hinders performance; according to this link, VIRTIO is ~5x faster. I believe there are other limitations too, such as online expansion: I think to expand an IDE disk you have to shut down your VM, while a new VirtIO disk can be added or expanded live. Make sure to run the full installer on the guest ISO and install the Guest Agent. You can easily switch to VirtIO in 2 minutes:

1. Make sure the SCSI controller is set to (double click to change):
[screenshot: SCSI Controller selection]
2. Add a 1GB temp drive to your VM as VirtIO Block.

3. Attach the virtio ISO to the VM in the PVE GUI and load the drivers for the 1GB drive. Shut down.

4. In the PVE GUI remove your main IDE drive; it will show up as an unused drive.

5. Double click it to add it back and select type VirtIO Block. Since the drivers were pre-loaded into the Windows driver store when you attached the temp drive, it will already have them. Reboot and test it. You can then remove the 1GB temp drive.

Driver ISO here if you don't have it:
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso
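
For CLI users, roughly the same procedure as an untested sketch; VM ID 100, the storage names, and the ISO filename are placeholders:

Code:
# 1-2. set the controller and add a 1GB temp drive on the VirtIO Block bus
qm set 100 --scsihw virtio-scsi-single
qm set 100 --virtio1 local-lvm:1
# 3. attach the driver ISO, boot Windows, install the drivers, shut down
qm set 100 --ide2 local:iso/virtio-win.iso,media=cdrom
# 4. detach the main IDE disk (it then shows up as "unused0")...
qm set 100 --delete ide0
# 5. ...and re-attach the same volume on the VirtIO Block bus
qm set 100 --virtio0 local-lvm:vm-100-disk-0
# unlike IDE, a VirtIO disk can then be grown while the VM runs
qm resize 100 virtio0 +10G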
 

Thanks, yeah, I got it working. I had to use the 189 ISO though, as the latest doesn't work on Server 2012 R2. Tried 190 as well, but it caused the network card to disappear. Had to do some modding with an AlmaLinux migration too; I forgot the command, but I found it here. Something like regenerating GRUB in rescue mode.
 
Hi @verulian ,

Here are a few notes and clarifications regarding the virtio-scsi controllers and their behavior:
  • virtio-scsi-single: Each disk can optionally have a dedicated I/O thread, depending on whether I/O threads are configured for that specific disk. This allows the hypervisor to offload storage operations from the main event loop to a separate thread. As a result, you can configure VMs with a mix of disks that either use or don’t use I/O threads (see the config sketch after this list).
  • virtio-scsi: In contrast, I/O threads are not used with this controller. Even if you check the option to enable I/O threads, no dedicated thread is created.
  • virtio-scsi does not have a 16-disk limitation.
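
To illustrate the first point, a config sketch (volume names are placeholders) mixing disks with and without I/O threads under virtio-scsi-single:

Code:
scsihw: virtio-scsi-single
# this disk gets its own dedicated I/O thread...
scsi0: local-lvm:vm-100-disk-0,iothread=1
# ...while this one stays on QEMU's main event loop
scsi1: local-lvm:vm-100-disk-1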
For a detailed system efficiency analysis of each controller type, configuration, and mode, please refer to this KB article.

Note: In our findings, we refer to virtio-scsi as virtio-scsi-multi to better differentiate it from virtio-scsi-single.

Quick Summary:
  • When using aio=native, the context switch overhead (which primarily affects latency) is similar across virtio-scsi, virtio-scsi-single, and virtio-scsi-single+iothread (aio is set per disk; see the one-liner after this list).
  • In terms of CPU cycles per IOP, the consumption is also comparable across these configurations when using aio=native.
  • As highlighted in our KB article, the choice of aio mode can lead to significant differences in system overhead and behavior, while the controller type itself has minimal impact on performance.
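As a note on where aio fits: in PVE it is a per-disk option alongside iothread (the volume name below is a placeholder):

Code:
scsi0: local-lvm:vm-100-disk-0,aio=native,iothread=1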
Given that virtio-scsi does not use an I/O thread, it is safe to assume that much of the performance benefit of virtio-scsi-single comes from the ability to utilize an I/O thread. You can read more about these performance benefits here.
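
If you want to verify the iothread behavior on your own host, one option (a suggestion, not part of the findings above) is to inspect the QEMU command line that PVE generates:

Code:
qm showcmd 100 --pretty | grep -B1 -A1 iothread

With virtio-scsi-single plus iothread=1 you should see -object iothread entries; with plain virtio-scsi, none.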

I hope this clears things up! Please feel free to reach out if you have any further questions.

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
