Safe to edit storage.cfg?

Apr 29, 2021
Hi.
I have a 7-node cluster with NFS-connected storage.

I'd like to try out the nconnect option for NFS mounts, and I understand that I need to change storage.cfg and add "options vers=4.x,nconnect=8" to the mount in question.

I have several other NFS exports mounted.

Is it safe to add that line to this mount while other VMs are running on other NFS mounts?

Here is the block I want to change:

nfs: NFSEXAMPLE
        export /volume1/pm-cluster01
        path /mnt/pve/NFSEXAMPLE
        server XXX.XXX.XXX.XXX
        content images
        prune-backups keep-all=1

to this:

nfs: NFSEXAMPLE
        export /volume1/pm-cluster01
        path /mnt/pve/NFSEXAMPLE
        server XXX.XXX.XXX.XXX
        content images
        options vers=4.1,nconnect=8
        prune-backups keep-all=1

 
Yes, it's safe, but first I would use "vers=4.2" (instead of 4.1) if your NFS server supports it.
Second, there is an NFSv4 "cleverness": any further mounts from the same fileserver after the first one reuse its IP and mount options, so your options get replicated to the other shares from that server; shares from other fileservers are not affected. There is indeed a newer NFSv4 mount option to change that behaviour, but I haven't looked into it yet, as it is fine for me that all shares from one server get the same options. If you need different behaviour, search for "NFS multihome".
Btw, nconnect will only help you on really fast network links (40...400 Gb Ethernet/InfiniBand/Omni-Path), and only if your I/O system can saturate the connection as well!
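
Assuming the server does support 4.2, the only change to the block shown above would be the options line:

        options vers=4.2,nconnect=8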
 
You can also use the command pvesm set <storage> --options <options> to change storage.cfg. Keep in mind that those parameters are only applied when the NFS share is mounted again, so they won't affect the currently mounted share, as far as I'm aware.
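
For the storage in question, the call would presumably look like this (assuming you want NFS 4.2 with eight connections, as suggested above):

pvesm set NFSEXAMPLE --options vers=4.2,nconnect=8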

Just to be sure, which NFS version are you currently running? You can check with mount -v | grep "type nfs" and look for the "vers" option. By default, NFS probes the version starting at 4.2 and negotiates downwards until the request succeeds.
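
With the share from your storage.cfg mounted, the relevant line of that output should look roughly like this (illustrative only, the exact mount options will differ):

XXX.XXX.XXX.XXX:/volume1/pm-cluster01 on /mnt/pve/NFSEXAMPLE type nfs4 (rw,relatime,vers=4.2,...)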
 
Thanks for your answers. It just feels safer to use the pvesm command :)

The best option would indeed be NFS with multipath, both for redundancy and throughput reasons, but it seems a bit tricky to get working.

This particular SAN is a Dell EMC unit with an NFS export for Proxmox and iSCSI for our VMware environment (yes, we're moving...).
When I run CrystalDiskMark on a VM residing on VMware (connected via iSCSI), I hit about 1300-1400 MB/s read, while on Proxmox over the NFS export I get around 450 MB/s in the same test. So the disks and network can move data just fine; I just can't seem to take advantage of it from Proxmox with the standard tools in the GUI.
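
In network terms those read speeds work out to roughly (simple byte-to-bit conversion, ignoring protocol overhead):

1400 MB/s x 8 = 11.2 Gbit/s
450 MB/s x 8 = 3.6 Gbit/s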
 
So you don't even need nconnect, nor does it help, as you don't exceed 2.5 GB/s on your NFS mount; on the other hand, it doesn't hurt either.
So you just implemented the mount options for your next NFS storage today... :cool:
 
@waltar
Are we talking about the same units? The measurements are in megabytes per second, not megabits. Just making sure.
 
Yeah, you need nconnect (or RDMA, which brings other problems) if you have a >=100 Gbit interface, your I/O system is faster than 2.5 GByte/s, and you are then limited by NFS throughput.
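
For scale (again just a unit conversion): 2.5 GByte/s is about 20 Gbit/s on the wire, while a 100 Gbit interface tops out around 12.5 GByte/s, so the ~450 MByte/s measured above is far below the point where nconnect starts to matter.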
 
