Article: "NFSv3 vs NFSv4 Storage on Proxmox: The Latency Clash That Reveals More Than You Think"

Sep 1, 2022
Source: https://gyptazy.com/nfsv3-vs-nfsv4-...tency-clash-that-reveals-more-than-you-think/

In this post, we’re focusing on the differences between NFSv3 and NFSv4, especially when it comes to latency. While high throughput can be achieved by running multiple VMs in parallel or doing sequential reads, latency is a whole different challenge, and it is becoming more and more important. Whether you’re running databases, machine learning workloads, or other latency-sensitive applications, it’s no longer just about how fast you can push and pull data in bulk – it’s about how quickly the system responds.


That’s where NFSv4 really shines. Compared to NFSv3, it brings major improvements not just in functionality but also in how efficiently it handles operations. Features like nconnect, which allows multiple TCP connections per mount, and pNFS (Parallel NFS), which enables direct data access from clients to storage nodes, provide serious performance gains. On top of that, NFSv4 has better locking mechanisms, improved security, a stateful protocol design, and more efficient metadata handling, all of which reduce round-trip times and overall latency. One major issue in such setups often lies in the default behavior: many admins simply stick with NFSv3 and then complain about performance that is, indeed, often not that good. Therefore, you should take the time to properly configure NFSv4 in your environment.
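To make the nconnect idea concrete (this sketch is not part of the article; the server address 192.168.1.50, export /export/vmstore, and mount point /mnt/vmstore are placeholders), an NFSv4.2 mount with multiple TCP connections on a recent Linux client looks roughly like this:

    # Placeholder server/export/mount point - adjust for your environment.
    # Request NFSv4.2 and open 8 TCP connections for this mount (nconnect needs kernel 5.3+).
    mount -t nfs -o vers=4.2,nconnect=8 192.168.1.50:/export/vmstore /mnt/vmstore

    # Show what was actually negotiated (vers=, nconnect=, rsize/wsize, ...).
    nfsstat -m

If you don’t pin the version, the client negotiates one and can end up on NFSv3 when the server doesn’t offer v4, so checking the negotiated options is worth the extra command.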

Latencies by Storage-Type
Before we start our tests on NFSv3 and NFSv4, we should have a rough idea of what kind of latencies to expect.
[Image: expected latency ranges by storage type]

Reading this now. I wanted to post the link here because in the few years I've been working with Proxmox and network storage for VM disks and NAS-based data shares, I haven't seen much discussion of the actual benefits of properly tuned NFSv4 over NFSv3. (Or of how to properly tune NFSv4 for Proxmox, come to that.)

I'm sure not all of the features he discusses are applicable to all infrastructures, especially smaller office/homelab environments, but it's still a great explanation of what benefits they're meant to bring.

EDIT: I really don't want to excerpt so much that the article doesn't get the hits it deserves, but there's a very nice summary table at the end that I think is helpful to include here. Please click through to see the test setup that actually generates these numbers.
[Image: summary table of NFSv3 vs NFSv4 test results]
 
NFSv3 is still widely used in homelab and SMB setups due to defaults or legacy configurations. I've seen noticeable improvements in VM responsiveness and lower latencies after switching these configurations to properly tuned NFSv4.2. gyptazy's article does a great job showing why the upgrade is worth it.
 
I'm still a bit cloudy on how to do the "properly tuned" part.

Any reading material you could suggest?
 
It mainly comes down to explicitly requesting NFSv4.2 in your Proxmox storage config, ensuring you're on a reliable, high-bandwidth, low-latency network, and enabling nconnect=N to open multiple TCP sessions per mount. You can expect read throughput to jump from roughly 1.5 GB/s to 4.5 GB/s just by enabling nconnect=16.
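For illustration, here's a rough sketch of what that could look like in /etc/pve/storage.cfg (the storage name vmstore-nfs, server address, and export path are placeholders; tune nconnect to your NIC and workload):

    nfs: vmstore-nfs
            export /export/vmstore
            path /mnt/pve/vmstore-nfs
            server 192.168.1.50
            content images,rootdir
            options vers=4.2,nconnect=16

Roughly the same thing via the CLI:

    # Placeholder storage name, IP, and export path - adjust for your environment.
    pvesm add nfs vmstore-nfs --server 192.168.1.50 --export /export/vmstore \
        --content images,rootdir --options vers=4.2,nconnect=16

Once the storage is mounted, nfsstat -m on the node shows which version and options were actually negotiated.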

You can check these links for more information:
 
