NFS option nconnect

aghadjip

Hey Folks,

I wanted to try out the nconnect option for NFS mounts and am having trouble adding this correctly.

Code:
nfs: XXXXX_nfs_image_store
        export /mnt/XXXXXX
        path /mnt/pve/XXXXXXXX
        server 10.0.XXXXX
        content iso,vztmpl,backup,images
        options vers=3,nconnect=16
        prune-backups keep-last=2


I unmounted the share as root and cd'ed into the directory to remount it; however:

Code:
10.0.XXXX:/mnt/XXXXXX /mnt/pve/XXXXX rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.XXXXX,mountvers=3,mountport=841,mountproto=udp,local_lock=none,addr=10.0.XXXX 0 0

/proc/mounts doesn't show the new option. What am I missing?
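For reference, a quick way to list the effective options of all active NFS mounts (a minimal sketch; nconnect should show up in the OPTIONS column once it is actually applied):

Code:
# List all active NFS mounts with their effective options
findmnt -t nfs,nfs4 -o TARGET,SOURCE,OPTIONS
# Or check /proc/mounts directly
grep nfs /proc/mounts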
 
Hey,

could you post the output of pveversion -v? After unmounting, make sure the share is actually unmounted with mount | grep nfs.
 
Code:
root@manifold:/mnt/pve# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-4 (running version: 7.1-4/ca457116)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.4: 6.4-7
pve-kernel-5.13.19-1-pve: 5.13.19-2
pve-kernel-5.4.143-1-pve: 5.4.143-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.14-1
proxmox-backup-file-restore: 2.0.14-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.4-2
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-1
pve-qemu-kvm: 6.1.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-3
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

Edit: Never mind, it seems to be enabled now that I went to grab some output... weird.

Code:
nfs rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,nconnect=16,timeo=600,retrans=2,sec=sys,mountaddr=1
 
Hi @aghadjip,

Do you or anyone else have any insight on how to make this work?

I'm actually trying to do the exact same thing, but the nconnect option does not show up for me.

storage.cfg:
Code:
nfs: Data-2
        export /xxx/yyy
        path /mnt/pve/yyy
        server X.X.X.X
        content images
        options vers=4.1,nconnect=8
        prune-backups keep-all=1

findmnt:
Code:
...
/mnt/pve/YYY    X.X.X.X:/XXX/YYY      nfs4       rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=Z.Z.Z.Z,local_lock=none,addr=X.X.X.X
...

pveversion -v:
Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.13.19-1-pve: 5.13.19-3
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

EDIT: added findmnt output
 
Hello, nconnect is available for NFS versions 4.1 and 4.2 on kernel 5.3 or higher. You may edit /etc/pve/storage.cfg and update the options line as follows:
options vers=4.1,fsc,noatime,nodiratime,nconnect=16
or
options vers=4.2,fsc,noatime,nodiratime,nconnect=16
depending on which 4.x version your NFS server supports. Also take into consideration that nconnect=16 is rather high and may saturate the network link; consider starting with lower values such as 2 or 4 and working up to 8.

You need to unmount and re-mount after applying the changes. You can check /proc/mounts to verify that the new settings took effect.
Note: the mount protocol must be set to TCP.
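Putting the steps together, a minimal sketch of the whole cycle, assuming the storage ID Data-2 from the config earlier in the thread (pvesm set writes the options into /etc/pve/storage.cfg; editing the file by hand works just as well):

Code:
# Update the mount options of the storage (pick the version/nconnect that fits)
pvesm set Data-2 --options vers=4.1,nconnect=4
# Unmount the share; pvestatd remounts it automatically within a few seconds
umount /mnt/pve/Data-2
# Verify that the new options are active
grep nconnect /proc/mounts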
 
Is it only useful if you have a multi-NIC system where those NICs are not bonded together? With one server and one port, with or without the nconnect option, I'd guess we only ever get the maximum speed of the single NIC present...
 
I'm curious when this should be enabled, as well.

I have a 2x10 Gbps LACP bond set up for Proxmox to use to talk to my NAS. From PVE's perspective, that's a single network connection (VLAN over a bond). Will increasing nconnect from the default result in PVE itself trying to open more TCP connections to the NAS when it needs to do more than one NFS thing simultaneously, or does it require more than one connection to the NAS to have an impact?

EDIT: The docs and the wiki just refer to the NFS man page for info about NFS options; I was curious if anyone here had more practical experience with nconnect and PVE.

EDIT 2: I did find this: https://medium.com/@emilypotyraj/use-nconnect-to-effortlessly-increase-nfs-performance-4ceb46c64089

It's from about 4 years ago, but it discusses saturating a single 10 Gbps NIC on the NAS with a single NFS TCP connection, which makes me think this might only be useful where you have more than one NIC? It's a bit unclear.
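One way to settle the first part is to watch the client side: nconnect controls how many TCP connections the NFS client opens for a single mount, so the effect is visible even over one NIC or bond. A minimal sketch for counting the established connections to the NFS port (NAS_IP is a hypothetical placeholder for the NAS address):

Code:
# Hypothetical NAS address; substitute your own
NAS_IP=192.0.2.10
# Count established TCP connections to the NFS port (2049);
# with nconnect=8 this should report 8 sockets
ss -tn dst "$NAS_IP" | grep -c ':2049'

Note that an LACP bond typically balances traffic per TCP flow, so multiple connections are also what lets a single mount spread across bond members; whether that translates into more throughput depends on where the bottleneck actually is.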
 