[TUTORIAL] Proxmox: iSCSI and NVMe/TCP shared storage comparison


Nov 20, 2020
Hello everyone. We received excellent feedback on our previous storage performance investigations, particularly the technotes on optimal disk configuration settings (i.e., aio=native, io_uring, and IOThreads) and the deep dive into optimizing guest storage latency.
Several community members asked us to quantify the difference between iSCSI and NVMe/TCP initiators in Proxmox. So, we ran a battery of tests using both protocols. Here's the TL;DR:
  • In almost all workloads, NVMe/TCP delivers higher IOPS than iSCSI while simultaneously offering lower latency.
  • For workloads with smaller I/O sizes, you can expect an IOPS improvement of 30% and a latency improvement of 20%.
  • Workloads with little or no concurrency (i.e., QD1) see an 18% performance improvement.
  • For 4K workloads, peak IOPS gains are 51% with 34% lower latency.
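For readers who want a sense of what a "4K QD1" data point means in practice, a random-read workload of that shape is commonly expressed as an fio job along these lines. This is a minimal sketch, not our actual test configuration: the device path, runtime, and job name are placeholders you would substitute for your own setup.

```ini
; Hypothetical fio job approximating a 4K, QD1 random-read workload.
; /dev/nvme1n1 is a placeholder for the benchmark device in the guest.
[global]
ioengine=libaio
direct=1
bs=4k
iodepth=1
time_based=1
runtime=60

[randread-qd1]
rw=randread
filename=/dev/nvme1n1
```

Raising `iodepth` (and adding `numjobs`) moves you from the QD1 case toward the higher-concurrency workloads where the peak IOPS gains were measured.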
You can find the full testing description, analysis, and graphs here:
As always, questions, corrections, and ideas for new experiments are welcome.
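For anyone who wants to try a comparison like this by hand, the two initiators are attached with different tooling on the Proxmox host. A rough sketch, assuming open-iscsi and nvme-cli are installed; the portal address, IQN, and NQN below are placeholders, not real targets:

```shell
# --- iSCSI initiator (open-iscsi) ---
# Discover targets on the portal, then log in (placeholder IP/IQN).
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2020-11.com.example:target0 -p 192.0.2.10 --login

# --- NVMe/TCP initiator (nvme-cli) ---
# Load the transport module, discover subsystems, then connect.
modprobe nvme_tcp
nvme discover -t tcp -a 192.0.2.10 -s 4420
nvme connect -t tcp -n nqn.2020-11.com.example:subsys0 -a 192.0.2.10 -s 4420

# Either way, the host ends up with new block devices (e.g., /dev/sdX
# for iSCSI, /dev/nvmeXnY for NVMe/TCP) that can back Proxmox storage.
```

Same storage, different data path: the protocol difference between these two attach methods is what the numbers above are measuring.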

Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox

