[TUTORIAL] Low latency storage optimizations for Proxmox, KVM & QEMU


Nov 20, 2020
Several customers have asked us how to get the best possible storage latency out of Proxmox and QEMU (without sacrificing consistency or durability). Typically, the goal is to maximize database performance and improve benchmark results when moving from VMware to Proxmox. In these cases, application performance matters more than CPU cycles.
We recently spent a few weeks analyzing QD1 performance in a VPS environment with Ryzen 5950X servers running Proxmox 7.3. We identified the primary factors affecting latency, tested optimizations, and quantified the performance impacts. Here's what we found:
  • It is possible to achieve QD1 guest latencies within roughly 10 microseconds of bare metal.
  • For network-attached storage, the interaction between I/O size and MTU can produce surprising results: always test a range of I/O sizes.
  • Tuning can reduce inline latency by 40% and increase IOPS by 65% on fast storage.
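If you want to reproduce the QD1 measurements on your own hardware, a minimal fio job along these lines is a reasonable starting point (the device path is a placeholder; point it at a scratch device or test file, and rerun with different bs values to cover a range of I/O sizes):

```ini
[global]
ioengine=io_uring   ; low-overhead async engine on recent kernels
direct=1            ; bypass the page cache so you measure the device path
rw=randread
iodepth=1           ; exactly one I/O in flight = QD1
runtime=30
time_based=1

[qd1-4k]
filename=/dev/sdX   ; placeholder: use a dedicated test device or file
bs=4k               ; repeat with e.g. 8k, 16k, 64k to see the MTU interaction
```

With iodepth=1 and direct I/O, the completion latency fio reports approximates the per-I/O guest latency discussed above; compare the same job run on bare metal to see the virtualization overhead.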
Here's a link to the data, analysis, and hardware theory relevant to tuning for performance. If you find this helpful, please let me know. If we missed an important optimization, send me a DM and we'll see if we can get it tested. Questions, comments, and corrections are encouraged.



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox

