VirtIO-SCSI but for NVMe

Ivan Maglica

New Member
Mar 31, 2025
We are currently using VirtIO-SCSI as the block storage protocol for VMs. If I understand correctly, this presents the storage as a virtual SCSI drive to the VM. Is there a way (or a plan) to present it as an NVMe device to the VM instead? I know it's a lot more complex, but there would probably be some benefits?

I know of PCI passthrough, but passing a whole drive is not my goal.
Slicing an NVMe drive into multiple virtual ones and passing those to a VM is not the goal either.

Thank you for your input,
Ivan
 
QEMU/KVM supports virtual NVMe disks but Proxmox does not (yet). I think support for this would make a SteamOS VM easier. You can always do it manually using the args: setting in the VM configuration file, but Proxmox will not back it up and you might run into unexpected complications.
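If you do go the manual route, here is a rough sketch of what that args: line could look like in /etc/pve/qemu-server/<vmid>.conf. The image path, IDs, and serial below are made-up placeholders, and as noted, Proxmox will not back up or migrate a disk attached this way:

    # hypothetical example: attach a raw image to the guest as an emulated NVMe drive
    args: -drive file=/var/lib/vz/images/100/vm-100-nvme.raw,if=none,id=nvmedrv0 -device nvme,drive=nvmedrv0,serial=nvme-100-0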
 
Thank you. I saw it mentioned some time ago that KVM was planning to add it, but I've not kept up with its development. I wonder if the performance is any better.
 
There are always trade-offs. In general, NVMe uses multiple queues for I/O processing, whereas iSCSI/SCSI is typically limited to far fewer. However, each queue consumes CPU. So you may get better I/O performance, but in a shared virtual environment, you may adversely affect other aspects of the infrastructure.
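If you want to see the queue difference on a host, the block multi-queue layer exposes one directory per hardware queue in sysfs; something like the following (the device names are just examples):

    # NVMe devices typically get one hardware queue per CPU
    ls /sys/block/nvme0n1/mq/
    # an iSCSI-backed SCSI disk usually has far fewer
    ls /sys/block/sda/mq/
    # which CPUs service a given queue
    cat /sys/block/nvme0n1/mq/0/cpu_list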

We've done a comprehensive analysis of iSCSI vs NVMe: https://kb.blockbridge.com/technote/proxmox-iscsi-vs-nvmetcp/index.html

As well as ESX vs PVE with NVMe: https://kb.blockbridge.com/technote/proxmox-vs-vmware-nvmetcp/

The summary is that PVE can achieve excellent performance as is, and leave the competition in the dust, if combined with the right storage system.

If you are in the elite subset of users who depend on squeezing an extra microsecond out of I/O, your best gain will come from selecting the right storage, not from the cutting edge of the virtualization layer.

PS if you are tuning for performance, you may find this helpful: https://kb.blockbridge.com/technote/proxmox-tuning-low-latency-storage/index.html


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you very much. This helps a lot.
 
To answer your initial question directly: in current versions of QEMU/Proxmox, emulating an NVMe device is unlikely to deliver a dramatic performance improvement.

The reason lies in understanding what "virtio" actually is. Virtio is a paravirtualized I/O framework that enables efficient communication between the guest and QEMU via shared memory queues, called virtqueues. In many ways, it behaves similarly to an NVMe queue pair, providing an optimized path for I/O requests and completions.
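You can see this plumbing from inside a Linux guest, for example (device names will vary):

    # virtio devices registered with the guest kernel
    ls /sys/bus/virtio/devices/
    # the PCI transport they sit on
    lspci | grep -i virtio
    # blk-mq queues the guest builds on top of the virtqueues of a virtio disk
    ls /sys/block/vda/mq/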

The virtio-scsi controller exposes a SCSI-compatible interface to the guest and uses virtio queues to carry storage requests and completions. Because it presents itself as a SCSI device to the guest OS, there is additional overhead for managing SCSI command buffers, responses, and asynchronous event notifications.

This overhead can slightly impact latency and CPU efficiency inside the guest.
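If you want to put a number on it for your own workload, a queue-depth-1 random-read test inside the guest isolates per-request latency; something along these lines, pointed at a scratch disk (the device name is a placeholder):

    # 4k random reads at iodepth=1 measure per-I/O latency rather than throughput
    fio --name=lat --filename=/dev/vdb --rw=randread --bs=4k --iodepth=1 \
        --direct=1 --ioengine=libaio --runtime=30 --time_based --group_reporting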

If your goal is to minimize overhead for performance-critical virtual machines, virtio-blk can sometimes be a better choice. Like virtio-scsi, it uses virtio message queues but avoids the SCSI protocol layer, resulting in lower latency and slightly higher throughput in some workloads. That said, the latency differences are modest, typically just a few microseconds.
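In Proxmox terms, the choice comes down to which bus you give the disk in the VM configuration. A minimal sketch, assuming a VM 100 with volumes on local-lvm (names and sizes are placeholders; you would pick one bus or the other for a given disk):

    # SCSI disk behind a virtio-scsi controller
    scsihw: virtio-scsi-single
    scsi0: local-lvm:vm-100-disk-0,iothread=1,size=32G
    # an equivalent volume attached as virtio-blk, skipping the SCSI layer
    virtio0: local-lvm:vm-100-disk-1,iothread=1,size=32G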

As of QEMU 9, virtio-blk supports multiple queues, leveling the playing field with virtio-scsi. We generally recommend virtio-scsi unless your workload has unusual requirements, mostly because we like the well-defined semantics that SCSI provides.
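At the QEMU level, that multi-queue support is just device properties on virtio-blk-pci; roughly like this (a sketch, not the exact command line Proxmox generates):

    # several virtqueues for one virtio-blk device, serviced by a dedicated I/O thread
    -object iothread,id=iothread0
    -device virtio-blk-pci,drive=drive0,num-queues=4,iothread=iothread0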

As Bob pointed out, NVMe emulation in QEMU is primarily intended for development and testing. Even if fully optimized, it is unlikely to outperform virtio-blk significantly, since the backend work (shared-memory message passing) is nearly identical. Some minor gains could come from bypassing the traditional Linux block layer, but multiplexing host queue pairs introduces additional overhead in QEMU.

The main potential advantage of NVMe emulation is supporting one I/O thread per host queue pair. However, this will increase CPU utilization, which inevitably draws complaints.

I hope this helps!


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox