[TUTORIAL] Proxmox: iSCSI and NVMe/TCP shared storage comparison

bbgeek17

Hello everyone! We received excellent feedback on our previous storage performance investigations, particularly the technotes on optimal disk configuration settings (i.e., aio=native, io_uring, and iothreads) and the deep dive into optimizing guest storage latency.
Several community members asked us to quantify the difference between the iSCSI and NVMe/TCP initiators in Proxmox. So we ran a battery of tests using both protocols; here's the TL;DR:
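For reference, those guest disk settings are applied per-disk via the Proxmox `qm` CLI. A minimal sketch, assuming a hypothetical VM 100 on a storage named "local-lvm" (both placeholders for your own setup):

```shell
# iothread=1 requires a controller type that supports it, e.g. virtio-scsi-single.
qm set 100 --scsihw virtio-scsi-single

# Set the async I/O mode and enable a dedicated iothread for the disk.
# aio accepts native, io_uring, or threads; see the earlier technotes for trade-offs.
qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=native,iothread=1
```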
  • In almost all workloads, NVMe/TCP delivers higher IOPS than iSCSI while simultaneously offering lower latency.
  • For workloads with smaller I/O sizes, you can expect an IOPS improvement of 30% and a latency improvement of 20%.
  • Workloads with little or no concurrency (i.e., QD1) see an 18% performance improvement.
  • For 4K workloads, peak IOPS gains are 51% with 34% lower latency.
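For readers who want to try both data paths themselves, here is a rough sketch of attaching a target over each protocol from a Proxmox node. The addresses, IQN, and NQN below are placeholders; substitute the values for your own storage:

```shell
# --- iSCSI (open-iscsi initiator) ---
# Discover targets on a portal, then log in (placeholder portal and IQN).
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
iscsiadm -m node -T iqn.2001-04.com.example:target0 -p 192.0.2.10:3260 --login

# --- NVMe/TCP (nvme-cli initiator) ---
# Load the TCP transport and connect to a subsystem (placeholder NQN).
modprobe nvme-tcp
nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2001-04.com.example:subsys0
nvme list   # verify the new namespace shows up as a block device
```

In both cases the resulting block device can then be used as LVM shared storage or passed to a guest for comparison testing.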
You can find the full testing description, analysis, and graphs here:
As always, questions, corrections, and ideas for new experiments are welcome.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox