Unable to mount NFS storage

shantur

New Member
Dec 9, 2020
Hi all,

I am trying to mount an existing NFS server running in Docker, using the container https://hub.docker.com/r/itsthenetwork/nfs-server-alpine, but it fails every time.

When I try to mount the NFS share manually from the shell, it works fine:

Code:
root@pve-macpro-16:~# mount -t nfs -o vers=4 10.187.20.140:/vsphere /NFSVMware

But it doesn't work with the storage configuration:

Code:
nfs: NFSVMware
        export /vsphere
        path /mnt/pve/NFSVMware
        server 10.187.20.140
        content iso,images
        options vers=4

I have read through forum threads with similar issues, but none of the solutions worked for me. I read somewhere that PVE needs `showmount` to work. I think the server only supports NFSv4, which doesn't work with rpcbind and showmount.
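
A quick way to confirm that (just a sketch, using the server address from above): showmount asks rpcbind for the MOUNT service, which an NFSv4-only server does not register, so the query is expected to fail.

Code:
# showmount relies on rpcbind/mountd, so this should fail against an NFSv4-only server
showmount -e 10.187.20.140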

Any workaround for this?

Thanks

Installation Details:

Code:
root@pve-macpro-16:~# pveversion  -v
proxmox-ve: 6.3-1 (running kernel: 5.4.73-1-pve)
pve-manager: 6.3-2 (running version: 6.3-2/22f57405)
pve-kernel-5.4: 6.3-1
pve-kernel-helper: 6.3-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph: 15.2.6-pve1
ceph-fuse: 15.2.6-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-6
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.3-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-1
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-1
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
 
What message do you get with pvesm status? And is there something in the logs?
 
@Alwin: Thanks for getting back to me.

pvesm status

Code:
root@pve-macpro-14:~# pvesm status
storage 'NFSVMware' is not online
Name             Type     Status           Total            Used       Available        %
Images         cephfs     active      1171828736        35426304      1136402432    3.02%
NFSVMware         nfs   inactive               0               0               0    0.00%
local             dir     active        98559220         3063432        90446240    3.11%
vm_pool           rbd     active      1166471584        30067232      1136404352    2.58%

syslog

Code:
Dec 15 14:52:09 pve-macpro-14 pvestatd[1512]: storage 'NFSVMware' is not online
Dec 15 14:52:18 pve-macpro-14 pvestatd[1512]: storage 'NFSVMware' is not online
 
Hm... not much. :)
Code:
rpcinfo -u <server> nfs 3
rpcinfo -t <server> nfs 4
And what do the commands show?
 
@Alwin

Code:
root@pve-macpro-16:~# rpcinfo -u 10.187.20.140 nfs 4
10.187.20.140: RPC: Unable to receive

root@pve-macpro-16:~# rpcinfo -u 10.187.20.140 nfs 3
10.187.20.140: RPC: Unable to receive

root@pve-macpro-16:~# rpcinfo -t 10.187.20.140 nfs 4
10.187.20.140: RPC: Remote system error - No route to host

root@pve-macpro-16:~# rpcinfo -t 10.187.20.140 nfs 3
10.187.20.140: RPC: Remote system error - No route to host

As I mentioned, rpcbind isn't needed for NFSv4+, so there is no rpc service running in the NFS server container.
 
As I mentioned, rpcbind isn't needed for NFSv4+, so there is no rpc service running in the NFS server container.
The RFC states otherwise.
https://tools.ietf.org/html/rfc7530#section-1.4

Code:
~:# rpcinfo -t vm19180 nfs 4
program 100003 version 4 ready and waiting
And this is a Debian VM with an NFSv4-only server. Can port 2049 be reached, at least?

EDIT: though, you are correct, showmount doesn't work with NFSv4-only servers.
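
For completeness, a quick way to check whether port 2049 is reachable from the PVE node (a sketch, assuming netcat is installed):

Code:
# -z: only probe the port without sending data, -v: verbose output
nc -zv 10.187.20.140 2049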
 
Thanks for looking into it. If you want to reproduce it, you can easily spin up the Docker container I mentioned. I am able to mount the NFS share without any issues on ESXi and from the Linux command line. I think the showmount issue is what stops Proxmox.
 
Yes, I have tested it.

Code:
root@pve-macpro-16:~# mount -t nfs -o vers=4 10.187.20.140:/vsphere /NFSVMware

The command works perfectly.

Not sure if your patch will fix my issue.

Code:
root@pve-macpro-16:~# rpcinfo -t 10.187.20.140 nfs 4
10.187.20.140: RPC: Remote system error - No route to host
 
No, not really. Since rpcbind is not running, a simple connection check can't be done. You will need to mount the share yourself and configure a directory storage on top of it (use the is_mountpoint option), or run rpcbind in the container.
https://pve.proxmox.com/pve-docs/pvesm.1.html
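
For anyone following along, a rough sketch of that workaround using the paths and server address from this thread (remove or rename the non-working nfs storage entry first, and adjust to your setup):

Code:
# /etc/fstab entry so the share is mounted at boot
10.187.20.140:/vsphere /mnt/pve/NFSVMware nfs4 defaults,_netdev 0 0

# mount it now and put a directory storage on top of the mount point
mkdir -p /mnt/pve/NFSVMware
mount /mnt/pve/NFSVMware
pvesm add dir NFSVMware --path /mnt/pve/NFSVMware --content iso,images --is_mountpoint yes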
Thanks for replying.

If I do this, will it still be considered shared storage by Proxmox?

Also, just to point out that running an NFSv4-only service without rpcbind is a perfectly valid use case:

https://www.suse.com/support/kb/doc/?id=000019530
 
Also, just to point out that running an NFSv4-only service without rpcbind is a perfectly valid use case
I don't question that; besides, the document doesn't talk about a use case. ;)

But it doesn't make it easier either, since the connection check can't be done reliably.
https://www.suse.com/support/kb/doc/?id=000019530 said:
2. Without rpc.mountd servicing v3/v2 calls, any machine attempting to do "showmount -e" (or similar calls) against this NFS Server (to get a list of exports) will fail. Various applications, including some setups of autofs (automount), rely on such queries to discover available nfs shares.
 
If I do this, will it still be considered shared storage by Proxmox?
To add to that: you will also need to set the shared flag on the storage in storage.cfg.
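
The resulting entry in /etc/pve/storage.cfg would then look roughly like this (a sketch based on the values from this thread):

Code:
dir: NFSVMware
        path /mnt/pve/NFSVMware
        content iso,images
        is_mountpoint yes
        shared 1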
 
@Alwin:

I was trying to debug my NFS server with rpcinfo and came across this in the rpcinfo manpage:

Code:
-t      Make an RPC call to procedure 0 of prognum on the specified host using TCP, and report whether a response was received. This option is made obsolete by the -T option as shown in the third synopsis.

Maybe you would want to update your patch to

Code:
$cmd = ['/usr/sbin/rpcinfo', '-l', '-T', 'tcp', $server, 'nfs', '4']
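
For reference, on the shell that corresponds to the following call (with the server address from earlier in the thread):

Code:
rpcinfo -l -T tcp 10.187.20.140 nfs 4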
 
My patch will not be added, since it doesn't yield a real benefit for the current implementation. There is no reliable way to find out which NFS version is running, besides trying to mount. And in that case a lot of handling has to be done for stuck or interrupted mount requests, which goes way further than a simple connection check.

So for now it stands: either NFSv3 is running as well, or the NFSv4 share needs to be mounted manually.
 
This is an old post, but I have encountered the same problem.
Solution:
Open these ports in the firewall on the NFS server host:
111 (portmapper / rpcbind)
2049 (nfs)

I also followed this post on how to set up a fixed port for mountd.

So I have set it to 33333 and opened that port in my firewall as well:
33333 (mountd)

Run
Code:
rpcinfo -p

Now it works.
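
In case it helps others, a rough sketch of that setup, assuming a Debian/Ubuntu NFS server using nfs-kernel-server and iptables (file names and options differ on other distributions):

Code:
# /etc/default/nfs-kernel-server: pin rpc.mountd to a fixed port
RPCMOUNTDOPTS="--manage-gids --port 33333"

# restart the NFS server so the new mountd port takes effect
systemctl restart nfs-kernel-server

# open the required ports on the NFS host
iptables -A INPUT -p tcp -m multiport --dports 111,2049,33333 -j ACCEPT
iptables -A INPUT -p udp -m multiport --dports 111,2049,33333 -j ACCEPT

# verify the registered ports
rpcinfo -p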
 