Proxmox NFS Storage Is Slow

jdorny

New Member
May 30, 2024
OK, so after a ton of debugging on Proxmox 8.0.3 with slow NFS storage performance, here is what I found:

Setup: 3 nodes on a 10GbE network
TrueNAS server with mirrored NVMe drives
iperf shows full 10GbE between the nodes

A bare-metal Debian host on the same network gets full NFS speed to the TrueNAS server's NVMe share, roughly 600 MiB/s.

Observations / Question:
1. When a storage device is set up for an NFS share (on the TrueNAS server) via the web GUI, and you then run an fio test from the actual PVE host shell against /mnt/pve/<mount name>, the max speed is about 40 MiB/s.
2. When an NFS mount to the same TrueNAS server is created manually on the PVE host via the shell (with /test as the mount point), the speed is roughly 600 MiB/s.
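For reference, the two tests can be sketched roughly like this. The server address, export path, and fio parameters below are illustrative placeholders, not the exact ones from my setup:

```shell
# Test 1: fio against the PVE-managed mount
# (created via Datacenter -> Storage -> Add -> NFS)
fio --name=seqwrite --directory=/mnt/pve/<mount name> \
    --rw=write --bs=1M --size=4G --ioengine=libaio --direct=1

# Test 2: manual mount of the same export, then the same fio run
mkdir -p /test
mount -t nfs 192.168.1.10:/mnt/tank/share /test
fio --name=seqwrite --directory=/test \
    --rw=write --bs=1M --size=4G --ioengine=libaio --direct=1
```

Same export, same host, same fio parameters; only the way the mount was created differs.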

Any idea what I have not done correctly?
 
Updated tonight to the latest PVE version as well. No change. I also looked at /proc/mounts to see if my temporary NFS mount had any different parameters than the ones created by PVE in /mnt/pve.

Same parameters. I'm stumped: how can two NFS mounts with the same parameters to the same location have different speeds?
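One way to compare the two mounts side by side (mount points as in this thread; note that nfsstat shows the options actually negotiated with the server, which can differ from what was requested at mount time):

```shell
# Show the effective options of both NFS mounts
grep '/mnt/pve' /proc/mounts
grep ' /test ' /proc/mounts

# nfsstat -m prints the negotiated parameters per mount
# (vers, proto, rsize/wsize, etc.)
nfsstat -m
```

If rsize/wsize, the NFS version, or the transport protocol differ between the two lines, that would be a prime suspect for the speed gap.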

Thoughts?
 
I have the same problem. Upgraded from 7.4 to 8.3 with a fresh install, and that's when the fun started.
Each of my 3 hosts exports a local 1 TB NVMe drive via NFS, so that VMs can be migrated between hosts without the need to migrate storage.

In 7.4 this setup had run without problems for a couple of years, but as soon as I switched to 8.3 the performance became incredibly sluggish, with network speed on the NFS share dropping down to 40-50 MB/s and sometimes completely stalling the physical host. Even though the NFS config did not change.
 
I cannot complain about /mnt/pve mount performance in general, as our (normal XFS) mounts get 300-1000 MB/s. There is one ZFS raidz1 (3x2TB) mount holding the LXC templates and OS ISOs which only gives about 1 MB/s with peaks up to 40 MB/s, but I just updated to the 6.8.12-5 kernel on Tuesday and that mount has not been accessed since the reboot, so the ARC is completely empty. As is well known, ZFS performance is terrible without the help of the ARC, so I'm not surprised by that mount's behavior yet.
 
What is interesting is that this only seems to happen if I mount the NFS share via the Proxmox Storage menu.
If I mount the share via NFS in fstab and add it as a Directory storage, everything runs smoothly; today alone I have written and read a couple of terabytes, with the 10G link saturated at 100% utilization. Strange.
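A minimal sketch of that workaround. The server address, paths, storage name, and mount options here are hypothetical examples, not the exact ones from my hosts:

```shell
# /etc/fstab entry for the manually managed NFS mount:
#   192.168.1.10:/mnt/tank/share  /mnt/nfs-manual  nfs  defaults,vers=4.2  0  0
mkdir -p /mnt/nfs-manual
mount /mnt/nfs-manual

# Register it with PVE as a Directory storage; --is_mountpoint tells PVE
# to treat the path as invalid unless something is actually mounted there.
pvesm add dir nfs-manual --path /mnt/nfs-manual \
    --content images,iso --is_mountpoint yes
```

PVE then uses the mount like any other storage, but the NFS mount itself is owned by fstab instead of the storage daemon.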
 
Yes, that's strange, but I tested exactly your scenario over /mnt/pve mounts instead of a manual mount and can't reproduce the behavior you describe.
 
We are on 8.3.1 with kernel 6.8.12-5 now, and last week we were on 8.3.0 with 6.8.12-4, without seeing any performance problems on either.
 
I am on 8.3.1 with kernel 6.11.0-2-pve.

Will keep an eye on it and try to test again in a couple of days, as I have updated and rebooted all three hosts today.
 
