QEMU - Can we add emulated NVMe devices to guests?

Apr 24, 2020
My servers all run HGST260 enterprise PCIe NVMe drives, mirrored. The drives have great performance, but my guests seem to be limited by queue depth. Can we use emulated NVMe devices to increase disk I/O parallelism, and would that help relative to the standard SCSI devices?
I always enable SSD emulation and the SCSI IO Thread option.

See the following, as it seems to be supported in QEMU:
http://blog.frankenmichl.de/2018/02/13/add-nvme-device-to-vm/
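For reference, QEMU's emulated NVMe controller is attached with a `-device nvme` stanza, roughly along the lines of that post. A minimal sketch; the image paths, memory size, and serial number below are placeholder values, not taken from the post:

```shell
# Launch a guest with its OS disk on virtio-blk plus an additional
# emulated NVMe drive backed by a raw image (paths are placeholders).
qemu-system-x86_64 \
  -machine q35,accel=kvm -m 4096 \
  -drive file=/var/lib/images/guest.img,if=none,id=osdisk,format=raw \
  -device virtio-blk-pci,drive=osdisk \
  -drive file=/var/lib/images/nvme-test.img,if=none,id=nvm,format=raw \
  -device nvme,serial=deadbeef,drive=nvm
```

The guest then sees the second disk as a real NVMe namespace (e.g. `/dev/nvme0n1` on Linux) and talks to it with its native NVMe driver.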

Research dumping ground:

virtio-scsi seems like a good all-around solution with solid performance, assuming you don't hate yourself enough to work with SPDK.
https://events19.lfasiallc.com/wp-c...uning-for-FAST-Virtual-Machines_Fam-Zheng.pdf

Use virtio-blk for a smaller number of high-performance disks [notes refer to an older version of virtio-scsi; many improvements have landed since this was posted]:
https://stackoverflow.com/questions/39031456/why-is-virtio-scsi-much-slower-than-virtio-blk-in-my-experiment-over-and-ceph-r
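The structural difference that answer describes shows up in how the disks are attached on the QEMU command line: virtio-blk is one PCI device per disk, while virtio-scsi is one controller with disks hanging off it as LUNs. A hedged sketch with placeholder paths:

```shell
# virtio-blk: each disk is its own PCI device, shortest code path.
qemu-system-x86_64 -machine q35,accel=kvm -m 2048 \
  -drive file=/var/lib/images/disk0.img,if=none,id=d0,format=raw \
  -device virtio-blk-pci,drive=d0

# virtio-scsi: one controller device, disks attach as SCSI LUNs
# behind it, which allows many disks per PCI slot.
qemu-system-x86_64 -machine q35,accel=kvm -m 2048 \
  -device virtio-scsi-pci,id=scsi0 \
  -drive file=/var/lib/images/disk1.img,if=none,id=d1,format=raw \
  -device scsi-hd,drive=d1,bus=scsi0.0
```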

SPDK, which seems masochistic.
https://events19.linuxfoundation.or...-Solution-Ziye-Yang-_-Changpeng-Liu-Intel.pdf
 

spirit
Famous Member · Joined Apr 2, 2010 · www.odiso.com
Apr 24, 2020
Honestly, I'm just trying to cut down on guest latency. The guest I'm currently thinking about runs Windows Server 2019 with SQLite databases.
I generally see 0.5-1.4 ms latencies on the SCSI drives and I'm just trying to eke out more performance. Looking at the server's iowait, the NVMe drives sit at only about 8% IO usage while servicing the requests generated by this VM.
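A per-request latency like this can be measured directly with fio at queue depth 1, which isolates the round-trip cost rather than throughput. A sketch with placeholder file name and size; `libaio` applies to a Linux guest, so on the Windows Server guest you would substitute `--ioengine=windowsaio`:

```shell
# 4k random reads at iodepth=1 approximate per-request latency.
# Run inside the guest; filename and size are placeholders.
fio --name=lat-test --filename=testfile --size=1G \
    --rw=randread --bs=4k --iodepth=1 --direct=1 \
    --runtime=30 --time_based --ioengine=libaio
# Check the "clat" (completion latency) averages and percentiles
# in the output to compare against the 0.5-1.4 ms figure.
```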

I have another server running MSSQL, but no real performance complaints on that one.
 

spirit

Famous Member
Apr 2, 2010
5,769
673
133
www.odiso.com
The virtio drivers are not as well optimized on Windows as on Linux, and I don't know if it's really possible to get lower latencies without some virtualization overhead. (Either that, or we need some kind of passthrough or SPDK, but that's a nightmare to manage.)

io_uring could help, but it's not yet stable enough.
I have found some interesting slides here; I need to take time to read them and see whether any new options exist to improve performance with NVMe.
https://vmsplice.net/~stefan/stefanha-kvm-forum-2020.pdf


For now, you could also try the virtio-scsi-single controller plus the iothread option on the SCSI disk; it should lower latency a little.
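On Proxmox this can be set from the CLI with `qm set`; a sketch where the VMID (100) and the storage/disk volume name are placeholders for your own setup:

```shell
# Switch the VM to a virtio-scsi-single controller (one controller
# per disk, enabling a dedicated IO thread each)...
qm set 100 --scsihw virtio-scsi-single
# ...and enable the IO thread on the first SCSI disk.
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1
```

In the GUI this corresponds to the SCSI Controller type on the Hardware tab and the "IO thread" checkbox in the disk's options.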
 
