Several customers have asked us how to get the best possible storage latency out of Proxmox and QEMU (without sacrificing consistency or durability). Typically, the goal is to maximize database performance and improve benchmark results when moving from VMware to Proxmox. In these cases, application performance matters more than CPU cycles.
We recently spent a few weeks analyzing QD1 performance in a VPS environment with Ryzen 5950X servers running Proxmox 7.3. We identified the primary factors affecting latency, tested optimizations, and quantified the performance impacts. Here's what we found:
- It is possible to achieve QD1 guest latencies within roughly 10 microseconds of bare metal.
- For network-attached storage, the interaction between I/O size and MTU can produce surprising results: always test a range of I/O sizes (see the fio sketch after this list).
- Tuning can reduce inline latency by 40% and increase IOPS by 65% on fast storage.
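If you want to reproduce a basic QD1 latency sweep yourself, here's a minimal fio sketch. This isn't from the technote; the target path (/dev/sdX), block sizes, and runtime are placeholders you should adjust for your own hardware:

  # QD1 random-read latency sweep across I/O sizes.
  # Reads only, but double-check the target device before running.
  for bs in 512 4k 8k 16k 32k 64k; do
    fio --name=qd1-$bs --filename=/dev/sdX --rw=randread --bs=$bs \
        --iodepth=1 --numjobs=1 --direct=1 --ioengine=libaio \
        --runtime=30 --time_based --group_reporting
  done

Run it once on bare metal and once inside the guest, then compare the completion-latency percentiles fio reports to see where the virtualization overhead lands at each I/O size.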
https://kb.blockbridge.com/technote/proxmox-tuning-low-latency-storage/
Enjoy!
Blockbridge : Ultra low latency all-NVMe shared storage for Proxmox - https://www.blockbridge.com/proxmox