TrueNAS NFS server not responding/retry issues in PVE cluster

srotig

New Member
Mar 6, 2023
I'm having issues mounting an NFS share on a PVE cluster.
The initial mount works, but after some time I get connection issues and multiple retries.

My PVE cluster consists of two nodes, both running the same version, with IP addresses 172.16.10.2 and 172.16.10.6.
The TrueNAS NFS service is configured on 172.16.20.3, and I access it via IP, not via a domain name.

I'm able to mount the NFS share on other clients without any issue.

PVE versions:
Code:
proxmox-ve: 7.3-1 (running kernel: 5.15.85-1-pve)
pve-manager: 7.3-6 (running version: 7.3-6/723bb6ec)
pve-kernel-helper: 7.3-5
pve-kernel-5.15: 7.3-2
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-2
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.5
pve-cluster: 7.3-2
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-3
pve-ha-manager: 3.5.1
pve-i18n: 2.8-3
pve-qemu-kvm: 7.2.0-5
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

Code:
rpcinfo -p 172.16.20.3
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  37857  status
    100005    3   udp  60795  mountd
    100024    1   tcp  36025  status
    100005    3   tcp  55845  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049
    100003    3   udp   2049  nfs
    100227    3   udp   2049
    100021    1   udp  58758  nlockmgr
    100021    3   udp  58758  nlockmgr
    100021    4   udp  58758  nlockmgr
    100021    1   tcp  35399  nlockmgr
    100021    3   tcp  35399  nlockmgr
    100021    4   tcp  35399  nlockmgr

Code:
showmount -e 172.16.20.3
Export list for 172.16.20.3:
/mnt/storagepoolssd/test-speed   172.16.20.0/24,172.16.10.0/24
/mnt/storagepool/securestorage   172.16.50.3/32,172.16.20.0/24
/mnt/storagepool/proxmox-storage 172.16.10.2,172.16.10.6

I mapped it both via the GUI and via storage.cfg; a sample storage.cfg entry is below:
Code:
nfs: test-speed
        path /mnt/pve/backup
        server 172.16.20.3
        export /mnt/storagepoolssd/test-speed
        options vers=3,soft
        content iso,vztmpl

I tried modifying the options parameters with NFS v4 and other attributes, but the outcome is always the same:

Code:
Mar  6 17:01:07 pve pvestatd[1181]: unable to activate storage 'backup' - directory '/mnt/pve/backup' does not exist or is unreachable
Mar  6 17:01:16 pve pvestatd[1181]: got timeout
Mar  6 17:01:16 pve pvestatd[1181]: unable to activate storage 'backup' - directory '/mnt/pve/backup' does not exist or is unreachable
Mar  6 17:01:25 pve pvestatd[1181]: got timeout
Mar  6 17:01:25 pve pvestatd[1181]: unable to activate storage 'backup' - directory '/mnt/pve/backup' does not exist or is unreachable
Mar  6 17:01:36 pve pvestatd[1181]: got timeout
Mar  6 17:01:36 pve pvestatd[1181]: unable to activate storage 'backup' - directory '/mnt/pve/backup' does not exist or is unreachable
Mar  6 17:01:45 pve pvestatd[1181]: got timeout
Mar  6 17:01:45 pve pvestatd[1181]: unable to activate storage 'backup' - directory '/mnt/pve/backup' does not exist or is unreachable
[...]
Mar  6 17:24:28 pve kernel: [ 4230.410991] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:24:33 pve kernel: [ 4235.530165] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:24:38 pve kernel: [ 4240.650104] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:24:44 pve kernel: [ 4245.770071] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:24:49 pve kernel: [ 4250.890089] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:24:54 pve kernel: [ 4256.010351] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:24:59 pve kernel: [ 4261.130289] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:26:03 pve kernel: [ 4324.879658] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:26:08 pve kernel: [ 4329.994727] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:26:13 pve kernel: [ 4335.114681] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:26:18 pve kernel: [ 4340.234822] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:26:23 pve kernel: [ 4345.354947] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:26:28 pve kernel: [ 4350.475137] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:26:33 pve kernel: [ 4355.595057] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:28:10 pve kernel: [ 4451.852338] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:28:15 pve kernel: [ 4456.971910] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:28:20 pve kernel: [ 4462.091911] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:28:25 pve kernel: [ 4467.211986] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:28:30 pve kernel: [ 4472.332195] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:28:35 pve kernel: [ 4477.451952] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Mar  6 17:28:40 pve kernel: [ 4482.572038] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93

These logs are from /var/log/syslog. Can you please help me fix this issue?
 
Hi, it's a bit weird that the syslog refers to NFSv4, while according to the storage configuration, the storage is mounted as NFSv3.
Could you post the output of nfsstat -m?

Mar 6 17:28:40 pve kernel: [ 4482.572038] NFS: state manager: check lease failed on NFSv4 server 172.16.20.3 with error 93
Errno 93 refers to EPROTONOSUPPORT, which hints at some kind of NFS version mismatch. Could you try mounting with vers=4.1?
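A quick way to test this outside of PVE would be a manual mount with the version forced explicitly, for example (just a sketch -- /mnt/nfs-test is a throwaway directory, the export path is taken from your showmount output):
Code:
mkdir -p /mnt/nfs-test
# try a v4.1 mount first; if it fails with "Protocol not supported", retry with -o vers=3
mount -t nfs -o vers=4.1 -v 172.16.20.3:/mnt/storagepoolssd/test-speed /mnt/nfs-test
umount /mnt/nfs-test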
 
The output of nfsstat -m is a little bit confusing as it shows NFS shares that I tried to mount previously.
Code:
root@pve:~# nfsstat -m
/mnt/pve/backup from 172.16.20.4:/mnt/volume1/proxmox/pve-backups
 Flags: rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.16.10.6,local_lock=none,addr=172.16.20.4

/mnt/pve/iso from 172.16.20.4:/mnt/volume1/proxmox/pve-ISO
 Flags: rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.16.10.6,local_lock=none,addr=172.16.20.4

/mnt/pve/test-speed from 172.16.20.3:/mnt/storagepoolssd/test-speed
 Flags: rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.16.10.6,local_lock=none,addr=172.16.20.3

/mnt/pve/test-speed from 192.168.55.3:/mnt/HD/HD_a2/bckp-esxi
 Flags: rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.55.3,mountvers=3,mountport=54720,mountproto=udp,local_lock=none,addr=192.168.55.3

/mnt/pve/storage-test from 172.16.20.3:/mnt/storagepoolssd/test-speed
 Flags: rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.20.3,mountvers=3,mountport=60795,mountproto=udp,local_lock=none,addr=172.16.20.3

/mnt/pve/test from 172.16.20.3:/mnt/storagepoolssd/test-speed
 Flags: rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.16.10.6,local_lock=none,addr=172.16.20.3

/mnt/pve/backup from 172.16.20.3:/mnt/storagepoolssd/test-speed
 Flags: rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.16.10.6,local_lock=none,addr=172.16.20.3

The last entry in the above output matches the current configuration of the NFS share in my storage.cfg:
Code:
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

nfs: test-speed
        export /mnt/storagepoolssd/test-speed
        path /mnt/pve/backup
        server 172.16.20.3
        content vztmpl,snippets,images,iso,backup,rootdir
        options soft,vers=4.1

I tried modifying the version as you suggested, but the issue persists.
Moreover, I now also see a grey question mark on my PVE2 node, even though the VMs and containers are all running correctly:
[screenshot: grey question mark on the PVE2 node]

Below is the syslog output:
Code:
Mar 13 15:28:39 pve pvestatd[1181]: unable to activate storage 'test-speed' - directory '/mnt/pve/backup' does not exist or is unreachable
Mar 13 15:28:49 pve pvestatd[1181]: got timeout
Mar 13 15:28:49 pve pvestatd[1181]: unable to activate storage 'test-speed' - directory '/mnt/pve/backup' does not exist or is unreachable
Mar 13 15:28:59 pve pvestatd[1181]: got timeout
Mar 13 15:28:59 pve pvestatd[1181]: unable to activate storage 'test-speed' - directory '/mnt/pve/backup' does not exist or is unreachable
Mar 13 15:29:09 pve pvestatd[1181]: got timeout
Mar 13 15:29:09 pve pvestatd[1181]: unable to activate storage 'test-speed' - directory '/mnt/pve/backup' does not exist or is unreachable
Mar 13 15:29:19 pve pvestatd[1181]: got timeout
Mar 13 15:29:19 pve pvestatd[1181]: unable to activate storage 'test-speed' - directory '/mnt/pve/backup' does not exist or is unreachable
Mar 13 15:29:29 pve pmxcfs[1058]: [status] notice: received log
 
Thanks for the information!
The output of nfsstat -m is a little bit confusing as it shows NFS shares that I tried to mount previously.
Yes, /mnt/pve/backup and /mnt/pve/test-speed even occur multiple times. To clean this up, I'd suggest unmounting everything -- either by simply rebooting, or by running umount /path on all mount points until it prints "umount /path: not mounted". You can do this even for Proxmox-managed mounts, as PVE will re-mount them automatically after a few seconds.
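As a rough sketch (the mount point list below is an example -- take the real list from nfsstat -m or mount | grep nfs), the repeated unmounting could look like this:
Code:
# unmount each stale mount point until umount reports "not mounted"
# note: a busy mount will keep this looping -- handle those separately (e.g. with umount -f)
for mp in /mnt/pve/backup /mnt/pve/test-speed /mnt/pve/iso /mnt/pve/test /mnt/pve/storage-test; do
    until umount "$mp" 2>&1 | grep -q 'not mounted'; do
        sleep 1
    done
done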

Regarding the check lease failed on NFSv4 server ... with error 93 errors, I managed to reproduce them with the following sequence:

1) Setup NFSv3+4 server
2) Mount an NFSv4 share in Proxmox
3) Disable NFSv4 in NFS server (leaving only NFSv3), reboot
4) After some time, the Proxmox syslog fills with check lease failed messages, because the mount from (2) is still active, but the server does not speak NFSv4 anymore.

Hence, I'd suspect there is something wrong with NFSv4, maybe even on the NFSv4 server side. Would it be an option for you to use NFSv3 instead? If yes, you could try setting your Proxmox-managed NFS storage to version 3 and unmounting everything, as described above.
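For the version change, editing the options line of the storage entry in /etc/pve/storage.cfg is enough; assuming the storage is still called test-speed, something like the following should be equivalent (sketch):
Code:
# set the NFS mount options of the existing storage to version 3
pvesm set test-speed --options vers=3,soft
# or, directly in /etc/pve/storage.cfg, change the line to:
#   options vers=3,soft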
 
Initially, when I tried to unmount, I got a "device is busy" error for some shares:
Code:
root@pve:~# umount -f -vvv /mnt/pve/iso
/mnt/pve/iso: nfs4 mount point detected
/mnt/pve/iso: umounted
root@pve:~# umount -f -vvv /mnt/pve/backup
/mnt/pve/backup: nfs4 mount point detected
/mnt/pve/backup: umounted
root@pve:~# umount -f -vvv /mnt/pve/test-speed
/mnt/pve/test-speed: nfs mount point detected
/mnt/pve/test-speed: umounted
root@pve:~# umount -f -vvv /mnt/pve/storage-test
/mnt/pve/storage-test: nfs mount point detected
umount.nfs: /mnt/pve/storage-test: device is busy
/mnt/pve/storage-test: umount failed
root@pve:~# umount -f -vvv /mnt/pve/test
/mnt/pve/test: nfs4 mount point detected
/mnt/pve/test: umounted
root@pve:~# umount -f -vvv /mnt/pve/backup
/mnt/pve/backup: nfs4 mount point detected
umount.nfs4: /mnt/pve/backup: device is busy
/mnt/pve/backup: umount failed

I commented out the NFS storage definition in /etc/pve/storage.cfg, as shown below:
Code:
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

#nfs: test-speed
#       export /mnt/storagepoolssd/test-speed
#       path /mnt/pve/backup
#       server 172.16.20.3
#       content vztmpl,snippets,images,iso,backup,rootdir
#       options soft,vers=4.1

For those cases I logged on to pve2, ran umount -f -vvv, and was able to unmount.

Now when I run nfsstat -m on both the PVE and PVE2 instances, I get no output.
Then I rebooted both PVE and PVE2 instances.

Once the two PVE instances were back up, I no longer saw the grey question mark issue.

I then tried to map the NFS share again, setting the version to v3 instead of v4.
Please find some screenshots of TrueNAS NFS configuration below:
[screenshots: TrueNAS NFS service configuration]

Then I mapped the NFS share again using the GUI and set the version to v3:
[screenshot: PVE NFS storage dialog with version 3 selected]

However, the issue persists; I still get timeout logs:
Code:
Mar 14 14:19:54 pve pvestatd[1177]: got timeout
Mar 14 14:19:54 pve pvestatd[1177]: unable to activate storage 'test-speed' - directory '/mnt/pve/test-speed' does not exist or is unreachable
Mar 14 14:20:04 pve pvestatd[1177]: got timeout
Mar 14 14:20:04 pve pvestatd[1177]: unable to activate storage 'test-speed' - directory '/mnt/pve/test-speed' does not exist or is unreachable
Mar 14 14:20:14 pve pvestatd[1177]: got timeout
Mar 14 14:20:14 pve pvestatd[1177]: unable to activate storage 'test-speed' - directory '/mnt/pve/test-speed' does not exist or is unreachable
Mar 14 14:20:25 pve pvestatd[1177]: got timeout
Mar 14 14:20:25 pve pvestatd[1177]: unable to activate storage 'test-speed' - directory '/mnt/pve/test-speed' does not exist or is unreachable
Mar 14 14:20:34 pve pvestatd[1177]: got timeout
Mar 14 14:20:34 pve pvestatd[1177]: unable to activate storage 'test-speed' - directory '/mnt/pve/test-speed' does not exist or is unreachable
Mar 14 14:20:44 pve pvestatd[1177]: got timeout
Mar 14 14:20:44 pve pvestatd[1177]: unable to activate storage 'test-speed' - directory '/mnt/pve/test-speed' does not exist or is unreachable
 
Thanks! So, do I understand correctly that the check lease failed on NFSv4 server ... with error 93 errors are gone now?

In that case, this sounds like a connectivity issue between Proxmox and TrueNAS. Could you post the output of the following commands, issued on the PVE host?
Code:
cat /etc/pve/storage.cfg
showmount -e [IP of your NFS server]
rpcinfo -p [IP of your NFS server]
mount | grep nfs
ls -nl /mnt/pve

Do you have a firewall between PVE and TrueNAS? There is an older thread [1] where the culprit apparently was an IP/routing issue.

Does mounting the share on the PVE host at some location outside /mnt/pve, e.g. /mnt/test (with nfsvers=3), work?
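For example (sketch; /mnt/test is just a scratch directory):
Code:
mkdir -p /mnt/test
mount -t nfs -o nfsvers=3 -v 172.16.20.3:/mnt/storagepoolssd/test-speed /mnt/test
ls -l /mnt/test
umount /mnt/test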

[1] https://forum.proxmox.com/threads/unable-to-activate-storage-smb.108835/post-476832
 
The check lease failed on NFSv4 server ... with error 93 errors are gone; however, the unreachable issue is still there.
Please find below the requested outputs:
Code:
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

nfs: test-speed
       export /mnt/storagepoolssd/test-speed
       path /mnt/pve/backup
       server 172.16.20.3
       content vztmpl,snippets,images,iso,backup,rootdir
       options soft,vers=3

Code:
root@pve:~# showmount -e 172.16.20.3
Export list for 172.16.20.3:
/mnt/storagepoolssd/test-speed   172.16.20.0/24,172.16.10.0/24
/mnt/storagepool/securestorage   172.16.50.3/32,172.16.20.0/24
/mnt/storagepool/proxmox-storage 172.16.10.2,172.16.10.6

Code:
root@pve:~# rpcinfo -p 172.16.20.3
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    3   udp  34919  mountd
    100024    1   udp  50037  status
    100005    3   tcp  38251  mountd
    100024    1   tcp  38205  status
    100003    3   tcp   2049  nfs
    100227    3   tcp   2049
    100003    3   udp   2049  nfs
    100227    3   udp   2049
    100021    1   udp  58130  nlockmgr
    100021    3   udp  58130  nlockmgr
    100021    4   udp  58130  nlockmgr
    100021    1   tcp  33405  nlockmgr
    100021    3   tcp  33405  nlockmgr
    100021    4   tcp  33405  nlockmgr

Code:
root@pve:~# mount | grep nfs
172.16.20.3:/mnt/storagepoolssd/test-speed on /mnt/pve/backup type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.20.3,mountvers=3,mountport=34919,mountproto=udp,local_lock=none,addr=172.16.20.3)

Code:
root@pve:~# ls -nl /mnt/pve
total 37
drwxrwxr-x 7 1002 1001    7 Mar  6 17:11 backup
drwxr-xr-x 2    0    0 4096 Mar  2 18:17 backup2
drwxr-xr-x 2    0    0 4096 Mar  6 16:24 iso
drwxr-xr-x 2    0    0 4096 Mar  2 15:47 storage
drwxr-xr-x 2    0    0 4096 Mar  2 15:34 storage.lab.gitors.com
drwxr-xr-x 2    0    0 4096 Mar  6 17:49 storage-test
drwxr-xr-x 2    0    0 4096 Mar  7 16:32 test
drwxr-xr-x 2    0    0 4096 Mar  6 17:11 test-speed

The unreachable issue still exists:

Code:
root@pve:~# tail -f /var/log/syslog
Mar 14 17:20:34 pve pvestatd[1177]: got timeout
Mar 14 17:20:34 pve pvestatd[1177]: unable to activate storage 'test-speed' - directory '/mnt/pve/backup' does not exist or is unreachable
Mar 14 17:20:34 pve pvedaemon[1234]: got timeout
Mar 14 17:20:34 pve pvedaemon[1234]: unable to activate storage 'test-speed' - directory '/mnt/pve/backup' does not exist or is unreachable
Mar 14 17:20:42 pve pvedaemon[1235]: got timeout
Mar 14 17:20:42 pve pvedaemon[1235]: unable to activate storage 'test-speed' - directory '/mnt/pve/backup' does not exist or is unreachable
Mar 14 17:20:44 pve pvestatd[1177]: got timeout
Mar 14 17:20:44 pve pvestatd[1177]: unable to activate storage 'test-speed' - directory '/mnt/pve/backup' does not exist or is unreachable
Mar 14 17:20:50 pve pvedaemon[1235]: got timeout
Mar 14 17:20:50 pve pvedaemon[1235]: unable to activate storage 'test-speed' - directory '/mnt/pve/backup' does not exist or is unreachable

I've tried modifying storage.cfg as shown below to mount it at a different path, /media/test-speed:
Code:
nfs: test-speed
       export /mnt/storagepoolssd/test-speed
       path /media/test-speed
       server 172.16.20.3
       content vztmpl,snippets,images,iso,backup,rootdir
       options soft,vers=3

Code:
Mar 14 17:25:16 pve pvedaemon[1236]: got timeout
Mar 14 17:25:24 pve pvestatd[1177]: got timeout
Mar 14 17:25:24 pve pvedaemon[1235]: got timeout
Mar 14 17:25:32 pve pvedaemon[1236]: got timeout
Mar 14 17:25:34 pve pvestatd[1177]: got timeout
Mar 14 17:26:27 pve pvedaemon[1236]: got timeout
Mar 14 17:26:27 pve pvedaemon[1236]: unable to activate storage 'test-speed' - directory '/media/test-speed' does not exist or is unreachable

I've tried to mount the same NFS share on a VM that is on the same network as the two PVE instances, and I did not encounter any issues on the VM.

Code:
root@lab-vm1-mgmt:~# mount -t nfs 172.16.20.3:/mnt/storagepoolssd/test-speed /mnt/test-speed -v
mount.nfs: timeout set for Tue Mar 14 16:35:07 2023
mount.nfs: trying text-based options 'vers=4.2,addr=172.16.20.3,clientaddr=172.16.10.5'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'vers=4.1,addr=172.16.20.3,clientaddr=172.16.10.5'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'vers=4.0,addr=172.16.20.3,clientaddr=172.16.10.5'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'addr=172.16.20.3'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 172.16.20.3 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 172.16.20.3 prog 100005 vers 3 prot UDP port 34919

root@lab-vm1-mgmt:/mnt/test-speed# ls -R
.:
dump  images  private  snippets  template

./dump:

./images:
510

./images/510:
vm-510-disk-0.qcow2

./private:

./snippets:

./template:
cache  iso

./template/cache:

./template/iso:
debian-live-11.6.0-amd64-gnome.iso  ubuntu-22.04.2-desktop-amd64.iso.tmp.48554
 
Thanks for checking!

These outputs:
Code:
root@pve:~# mount | grep nfs
172.16.20.3:/mnt/storagepoolssd/test-speed on /mnt/pve/backup type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.20.3,mountvers=3,mountport=34919,mountproto=udp,local_lock=none,addr=172.16.20.3)

Code:
root@pve:~# ls -nl /mnt/pve
total 37
drwxrwxr-x 7 1002 1001    7 Mar  6 17:11 backup
suggest that the mount (on the filesystem level) actually succeeded. However, these errors:
Code:
Mar 14 17:20:34 pve pvestatd[1177]: got timeout
Mar 14 17:20:34 pve pvestatd[1177]: unable to activate storage 'test-speed' - directory '/mnt/pve/backup' does not exist or is unreachable
are printed because it takes PVE more than 2 seconds to check whether /mnt/pve/backup is a directory.

So, is it possible that the storage is just very slow? To investigate this, could you post the outputs of the following commands?
Code:
ls -l /mnt/pve/backup
# this is the check performed by PVE:
time perl -e 'print (-d "/mnt/pve/backup");print "\n"'
time stat /mnt/pve/backup
 
Please find the requested output below:

Code:
root@pve:/# time perl -e 'print (-d "/mnt/pve/backup");print "\n"'
1

real    1m36.999s
user    0m0.002s
sys     0m0.000s

Code:
root@pve:/# time stat /mnt/pve/backup
  File: /mnt/pve/backup
  Size: 7               Blocks: 17         IO Block: 1048576 directory
Device: 2bh/43d Inode: 34          Links: 7
Access: (0775/drwxrwxr-x)  Uid: ( 1002/ UNKNOWN)   Gid: ( 1001/ UNKNOWN)
Access: 2023-03-06 17:08:51.847769989 +0100
Modify: 2023-03-06 17:11:05.898193286 +0100
Change: 2023-03-06 17:11:05.898193286 +0100
 Birth: -

real    0m0.002s
user    0m0.000s
sys     0m0.002s
root@pve:/#

The strange thing is that we don't see this low performance, packet loss, or timeouts on other VMs on the same network.
 
Thanks for checking! As these two commands are supposed to function similarly, it's extremely weird that the running times are so vastly different (>90secs vs <1s). Is this reproducible? To find out, could you try running each command five times and double-check that (1) perl ... always takes more than a couple of seconds and (2) stat ... always takes less than a second?
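For example, a quick loop like this (sketch) would do:
Code:
# run the PVE-style directory check and a plain stat five times each
for i in 1 2 3 4 5; do time perl -e 'print (-d "/mnt/pve/backup");print "\n"'; done
for i in 1 2 3 4 5; do time stat /mnt/pve/backup > /dev/null; done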

If the time discrepancy is reproducible, you could try to find out where the extra time is spent using strace, which monitors the syscalls performed by the command. You first need to install it (apt-get install strace), and then run and post the output of:
Code:
strace -C -r -- perl -e 'print (-d "/mnt/pve/backup");print "\n"'

Also, you mention that mounting the share on other machines works fine. Could you check the running time of perl -e 'print (-d "/path/to/your/mountpoint");print "\n"' there (filling out the correct path of course)?
 
Thanks for your support so far, fweber.

Please find below the output of the first command after running it several times in a row.
It seems that, after the initial delay observed in the first run, the subsequent runs are quite fast:

Code:
time perl -e 'print (-d "/mnt/pve/backup");print "\n"'
1

real    2m21.054s
user    0m0.002s
sys     0m0.000s

time perl -e 'print (-d "/mnt/pve/backup");print "\n"'
1

real    0m0.002s
user    0m0.002s
sys     0m0.000s

time perl -e 'print (-d "/mnt/pve/backup");print "\n"'
1

real    0m0.002s
user    0m0.002s
sys     0m0.000s

time perl -e 'print (-d "/mnt/pve/backup");print "\n"'
1

real    0m0.002s
user    0m0.000s
sys     0m0.002s

time perl -e 'print (-d "/mnt/pve/backup");print "\n"'
1

real    0m0.002s
user    0m0.002s
sys     0m0.000s

Here is the output of the second command; same behaviour, I would say:

Code:
time stat /mnt/pve/backup
  File: /mnt/pve/backup
  Size: 8               Blocks: 17         IO Block: 1048576 directory
Device: 4eh/78d Inode: 34          Links: 8
Access: (0775/drwxrwxr-x)  Uid: ( 1002/ UNKNOWN)   Gid: ( 1001/ UNKNOWN)
Access: 2023-03-06 17:08:51.847769989 +0100
Modify: 2023-03-22 12:57:41.180349241 +0100
Change: 2023-03-22 12:57:41.180349241 +0100
 Birth: -

real    1m31.647s
user    0m0.002s
sys     0m0.000s

time stat /mnt/pve/backup
  File: /mnt/pve/backup
  Size: 8               Blocks: 17         IO Block: 1048576 directory
Device: 4eh/78d Inode: 34          Links: 8
Access: (0775/drwxrwxr-x)  Uid: ( 1002/ UNKNOWN)   Gid: ( 1001/ UNKNOWN)
Access: 2023-03-06 17:08:51.847769989 +0100
Modify: 2023-03-22 12:57:41.180349241 +0100
Change: 2023-03-22 12:57:41.180349241 +0100
 Birth: -

real    0m0.002s
user    0m0.002s
sys     0m0.000s

time stat /mnt/pve/backup
  File: /mnt/pve/backup
  Size: 8               Blocks: 17         IO Block: 1048576 directory
Device: 4eh/78d Inode: 34          Links: 8
Access: (0775/drwxrwxr-x)  Uid: ( 1002/ UNKNOWN)   Gid: ( 1001/ UNKNOWN)
Access: 2023-03-06 17:08:51.847769989 +0100
Modify: 2023-03-22 12:57:41.180349241 +0100
Change: 2023-03-22 12:57:41.180349241 +0100
 Birth: -

real    0m0.002s
user    0m0.002s
sys     0m0.000s
 
Hi, thanks for checking, this makes more sense: Apparently, the first access is always very slow, and subsequent accesses are fast (probably due to some NFS client-side caching).

Unfortunately, I don't know what the cause of the slow accesses could be -- it does look very much like a network issue. Have you done any network benchmarks?
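For example (sketch; assuming iperf3 is available on both the PVE host and the TrueNAS box), throughput and packet loss between the two could be checked like this:
Code:
# on TrueNAS: start an iperf3 server
iperf3 -s
# on the PVE host: run a 30-second throughput test against it
iperf3 -c 172.16.20.3 -t 30
# also check for packet loss (summary is printed at the end)
ping -c 100 172.16.20.3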

Just out of curiosity -- are you running udisksd on the host? It is not installed by default, but might have been installed afterwards as a dependency or so. You can check using ps aux | grep udisks. I'm asking because udisks does cause problems occasionally [1].

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4629
 
