Proxmox 7.3-3 default SCSI Controller

Mecanik

Hi,

Why is the default SCSI controller for Windows VMs "single" now? On a previous Proxmox installation (below 7.3) the default controller is just VirtIO SCSI, not VirtIO SCSI single.

Is there any specific reason for this? From my testing they have the same performance, and on top of that the single controller with IO Thread consumes more host CPU.

Thanks
 
Because the default for SCSI disks is now to also have IO Threading enabled, which requires the VirtIO SCSI single controller. The main benefit is that disk IO is handled in a separate thread outside the VM's main thread. This can improve performance and reduce problems that could appear if disk IO slowed down the main thread, or vice versa.
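For reference, a VM created with the new defaults ends up with settings roughly like the following (a sketch only; VMID 100 and the local-lvm storage/volume names are placeholders, adjust to your own setup):

# excerpt from /etc/pve/qemu-server/100.conf
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-0,iothread=1

# the same change on an existing VM from the CLI (the VM needs a full power off and start to apply it)
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1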
 

Thanks, however, as I mentioned, extra CPU usage is noticeable on the host. Also, is this still necessary with NVMe SSDs? I/O should not be an issue, so I suppose IO Thread is not really required?
 
I/O is always the issue, especially parallel I/O on SSDs. They are not at their best with sequential, single-threaded workloads. SSDs shine when multiple parallel IO threads work on them, yielding more IOPS than any single-threaded application can achieve. Just benchmark this for yourself with e.g. fio.
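A quick read-only comparison with fio (a sketch; /dev/sdX is a placeholder for a disk or test file you can safely read from):

# one worker, queue depth 1 - roughly what a single-threaded application does
fio --name=single --filename=/dev/sdX --rw=randread --bs=4k --direct=1 --iodepth=1 --numjobs=1 --time_based --runtime=30 --group_reporting

# four parallel workers with deeper queues - compare the reported IOPS
fio --name=parallel --filename=/dev/sdX --rw=randread --bs=4k --direct=1 --iodepth=32 --numjobs=4 --time_based --runtime=30 --group_reporting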

This multiple-controller setup is also common practice in every VMware environment I have worked in, but there it has to be done manually; PVE is one step ahead here and enables it by default!
 

So in essence, what you are saying is that it's better to use VirtIO SCSI single with IO Thread?
 
Only if you have two or more virtual disks attached to your VM. In that case you will get better performance, because every disk is then handled by its own IO thread.
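As a sketch of what that looks like (VMID 101 and the storage names are placeholders): with the VirtIO SCSI single controller, each disk gets its own virtual controller and, with iothread=1, its own IO thread.

scsihw: virtio-scsi-single
scsi0: local-lvm:vm-101-disk-0,iothread=1
scsi1: local-lvm:vm-101-disk-1,iothread=1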

Ah I see... so no then. Each VM has only one disk. So this won't make a difference then.
 
The virtual network device and other (emulated) virtual devices could also be using that single QEMU thread. Putting disk I/O on other threads can help with those as well. In the end, it all depends on your specific workload whether it makes a difference for you.
If you have few but fast cores with high single-threaded performance, a single thread might not be an issue. If you have many but slower cores, you'll benefit from distributing the various kinds of I/O over more of them. The increase in CPU usage you noticed might even be a good thing if your tasks also finish sooner.
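If you want to see where that CPU time goes, you can look at the VM's QEMU process per thread (assuming the PID file path PVE uses; VMID 100 is a placeholder):

# list all threads of the VM's QEMU process, including the extra IO thread(s)
ps -T -p $(cat /var/run/qemu-server/100.pid)

# live per-thread CPU usage
top -H -p $(cat /var/run/qemu-server/100.pid)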
 