I'm a little disappointed that no one from the team or the developers has commented on the problem yet.
Usually, that means there's nothing concrete to say yet.
Are you accessing NFS from a VM, or datacentre storage?
Worst of all is the deadlock: there is no way to kill the NFS driver process ("kill -9" or the like). Only a reboot helps...
Is there any way to avoid the deadlock? Soft mounting, a timeout, or something else?
prox:~# uname -r
6.14.11-4-pve
mount -t nfs 1**.***.**.***:/mnt/tank/hdd_import /mnt/hdd_import/
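As an aside on the soft-mount question above: mounting with soft and a finite timeout is the usual way to avoid an unkillable hang, at the price of I/O errors being returned to applications while the server is unreachable (so it is safest for read-mostly data). A sketch with illustrative timeout values, reusing the masked address and paths from above:
mount -t nfs -o soft,nfsvers=3,timeo=150,retrans=3 1**.***.**.***:/mnt/tank/hdd_import /mnt/hdd_import/
# timeo is in tenths of a second; after retrans retries a request fails with an I/O error instead of blocking forever.
# Note that the intr mount option has been a no-op since kernel 2.6.25, so it offers no escape from a hard hang.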
Thanks @tlobo for sharing.
I tested it on myself. Unfortunately, there was no improvement.
------------------------------------------------------------------------------------------------
My NFS-Server
Raspberry Pi 3 Model B
1TB USB-C SSD
Raspbian 12
nfs-kernel-server 1:2.6.2-4+deb12u1
vi /etc/nfs.conf
Code:
#
# This is a general configuration for the
# NFS daemons and tools
#
[general]
pipefs-directory=/run/rpc_pipefs
#
[nfsrahead]
# nfs=15000
# nfs4=16000
#
[exports]
# rootdir=/export
#
[exportfs]
# debug=0
#
[gssd]
# verbosity=0
# rpc-verbosity=0
# use-memcache=0
# use-machine-creds=1
# use-gss-proxy=0
# avoid-dns=1
# limit-to-legacy-enctypes=0
# context-timeout=0
# rpc-timeout=5
# keytab-file=/etc/krb5.keytab
# cred-cache-directory=
# preferred-realm=
# set-home=1
# upcall-timeout=30
# cancel-timed-out-upcalls=0
#
[lockd]
port=32803
udp-port=32769
# port=0
# udp-port=0
#
[exportd]
# debug="all|auth|call|general|parse"
# manage-gids=n
# state-directory-path=/var/lib/nfs
# threads=1
# cache-use-ipaddr=n
# ttl=1800
[mountd]
manage-gids=y
# debug="all|auth|call|general|parse"
manage-gids=y
# descriptors=0
# port=0
# threads=1
# reverse-lookup=n
# state-directory-path=/var/lib/nfs
# ha-callout=
# cache-use-ipaddr=n
# ttl=1800
#
[nfsdcld]
# debug=0
# storagedir=/var/lib/nfs/nfsdcld
#
[nfsdcltrack]
# debug=0
# storagedir=/var/lib/nfs/nfsdcltrack
#
[nfsd]
vers3=y
vers4=n
vers4.0=n
vers4.1=n
vers4.2=n
# debug=0
# threads=8
# host=
# port=0
# grace-time=90
# lease-time=90
# udp=n
# tcp=y
# vers3=y
# vers4=y
# vers4.0=y
# vers4.1=y
# vers4.2=y
# rdma=n
# rdma-port=20049
[statd]
port=32765
outgoing-port=32766
# debug=0
# port=0
# outgoing-port=0
# name=
# state-directory-path=/var/lib/nfs/statd
# ha-callout=
# no-notify=0
#
[sm-notify]
# debug=0
# force=0
# retry-time=900
# outgoing-port=
# outgoing-addr=
# lift-grace=y
#
[svcgssd]
# principal=
vi /etc/default/nfs-kernel-server
Code:
# Number of servers to start up
RPCNFSDCOUNT=8
# Runtime priority of server (see nice(1))
RPCNFSDPRIORITY=0
# Options for rpc.mountd.
# If you have a port-based firewall, you might want to set up
# a fixed port here using the --port option. For more information,
# see rpc.mountd(8) or http://wiki.debian.org/SecuringNFS
# To disable NFSv4 on the server, specify '--no-nfs-version 4' here
#RPCMOUNTDOPTS="--manage-gids"
RPCNFSDOPTS="--no-nfs-version 4 --nfs-version 3 --threads=32"
RPCMOUNTDOPTS="--manage-gids --port 32767"
# Do you want to start the svcgssd daemon? It is only required for Kerberos
# exports. Valid alternatives are "yes" and "no"; the default is "no".
NEED_SVCGSSD=""
# Options for rpc.svcgssd.
RPCSVCGSSDOPTS=""
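The comment about a port-based firewall is the reason for pinning mountd, statd and lockd to fixed ports; if such a firewall sits in front of the NFS server, the inbound rules would look roughly like this (iptables syntax, ports taken from the two files above):
# 111 = rpcbind, 2049 = nfsd, 32765 = statd, 32767 = mountd, 32803/32769 = lockd (tcp/udp)
iptables -A INPUT -p tcp -m multiport --dports 111,2049,32765,32767,32803 -j ACCEPT
iptables -A INPUT -p udp -m multiport --dports 111,2049,32765,32767,32769 -j ACCEPT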
reboot
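To confirm after the reboot that the v3-only and fixed-port settings were actually picked up, something like this on the server should do:
cat /proc/fs/nfsd/versions   # expect +3 with all v4 entries disabled
rpcinfo -p localhost         # nfs, mountd, nlockmgr and status should show up on the pinned ports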
------------------------------------------------------------------------------------------------
Proxmox Node
VE 9.0.17
Linux proxmox 6.17.2-1-pve #1 SMP PREEMPT_DYNAMIC PMX 6.17.2-1 (2025-10-21T11:55Z) x86_64 GNU/Linux
vi /etc/pve/storage.cfg
Code:
nfs: backup
        export /mnt/backup/proxmox
        path /mnt/pve/backup
        server 192.168.XXX.XXX
        content backup
        options noatime,vers=3,hard,intr,timeo=600,retrans=5
        prune-backups keep-all=1
umount /mnt/pve/backup
systemctl restart pvedaemon
umount /mnt/pve/backup
pvesm status
pvesm status
pvesm status
mount
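To check from the node that the vers=3, timeo and retrans options from storage.cfg really ended up on the mount, the effective options can be read back, for example:
nfsstat -m
findmnt -t nfs,nfs4 -o TARGET,SOURCE,OPTIONS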
------------------------------------------------------------------------------------------------
Then I started the backup to the NFS share, but unfortunately nothing changed: the web interface barely responds and is sometimes unavailable, some VMs are unavailable, and so on.
What a shame! I had high hopes.
------------------------------------------------------------------------------------------------
EDIT:
I then updated to the current Proxmox VE 9.1.1 with kernel 6.17.2-2-pve, but unfortunately without success.
You have manage-gids=y twice in [mountd]:
manage-gids=y
# debug="all|auth|call|general|parse"
manage-gids=y
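In other words, one of the two entries is redundant; the section only needs:
[mountd]
manage-gids=y
# debug="all|auth|call|general|parse"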
Since it's a Raspberry Pi, try lowering the backup workers to 4 or 8.
@tlobo Thanks! Unfortunately, that doesn't change anything.
Actually, I haven't had any problems with external NFS servers. The problem was with shares mounted on the node itself or within a virtual machine, and this configuration solved those problems; it's still stable.
For a Raspberry Pi, I would start with a conservative configuration. In /etc/nfs.conf, I would add the option threads=8 in the [nfsd] section. Then, in /etc/default/nfs-kernel-server, I would also change the threads to 8, leaving it like this:
RPCNFSDOPTS="--no-nfs-version 4 --nfs-version 3 --threads=8"
I would try with small buffers on the mount:
options noatime,vers=3,hard,intr,timeo=600,retrans=5,rsize=8192,wsize=8192
And for backups, I would start by setting the number of workers to 1. Then, if this works, I would gradually increase the number to see how much performance I could get.
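The worker count mentioned here is not an NFS option but a vzdump setting; assuming a PVE release that supports the performance knob, it could be pinned globally in /etc/vzdump.conf (or per backup job), for example:
Code:
# /etc/vzdump.conf
performance: max-workers=1
# optionally also cap backup bandwidth in KiB/s so the link to the Pi is not saturated
# bwlimit: 30000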