NFSv4-only servers for storage cannot be configured from the interface

Hi,

I switched my storage server from NFS 2/3/4 to NFSv4 only, and now no exports are advertised via rpcbind (which, as far as I understand, is only used by NFS 2 and 3), so my Proxmox servers can no longer access my share. I tried removing my server under the storage section and re-adding it, but it fails.
I tried:
Code:
pvesm nfsscan 10.10.10.199
clnt_create: RPC: Program not registered
command '/sbin/showmount --no-headers --exports 10.10.10.199' failed: exit code 1

And here I believe the problem is that showmount only works with NFS 2/3, not with 4, because it relies on rpcbind, which NFSv4 does not use.
One solution would be to skip showmount when the user selects NFSv4 in the storage dialog (see the attached screenshot) and instead just run the mount command right away.
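(For what it's worth, NFSv4 hangs all exports off a single pseudo-filesystem root, so a client can enumerate shares without showmount by mounting that root; a quick sketch, assuming the server sets fsid=0 on its export root:)
Code:
# mount the NFSv4 pseudo-root and list the exported subtrees
mount -t nfs4 10.10.10.199:/ /mnt/tmp
ls /mnt/tmp
umount /mnt/tmp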

Is there also a dirty temporary fix to add an NFS share manually with some commands? (I could also try to replace /sbin/showmount with /bin/true or some crap like that.)
 
The idea in this case would be to mount via fstab or systemd, whichever you feel more comfortable with, and then create a directory storage on the mount point. Once you have the directory storage defined, open /etc/pve/storage.cfg and add the following lines to the directory storage:
Code:
is_mountpoint 1
mkdir 0
These tell PVE that it should not create the directory if it does not exist, and that it should wait until something is mounted there.
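(If you go the systemd route instead of fstab, a minimal mount unit would look roughly like this; the export path /export is just a placeholder, and the unit file name must match the mount path:)
Code:
# /etc/systemd/system/mnt-nfs4.mount
[Unit]
Description=NFSv4 share for a PVE directory storage

[Mount]
What=10.10.10.199:/export
Where=/mnt/nfs4
Type=nfs4
Options=rw,hard

[Install]
WantedBy=multi-user.target
Then enable it with systemctl enable --now mnt-nfs4.mount.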
 
How does one set the mountable flag in storage.cfg?


Something like this:

Code:
dir: local
    path /var/lib/vz
    content snippets,backup,rootdir,vztmpl,images,iso
    maxfiles 0
    shared 1
    is_mountpoint 1
    mkdir 0

And in /etc/fstab you have the following entry:

Code:
<ip of nfs>:<shared directory> /var/lib/vz nfs rw,hard 0 0
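(A filled-in example, using the server IP from the first post and a hypothetical export path, with the protocol version pinned since the server is NFSv4-only:)
Code:
10.10.10.199:/export/pve /var/lib/vz nfs vers=4.2,rw,hard 0 0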
 
  • Like
Reactions: RobFantini
I figured out it was the is_mountpoint flag, and after trying a little I understood it works on the "dir" type storage, not the NFS one.
Added the mount to fstab and now it works, thanks!

Anyway, I think my problem will become more common in the years to come, since RPC has a few security problems and distros might start defaulting to NFSv4 only, which is pretty mature now (apart from ACLs...). So I think this might need to be addressed in the future, thanks!

P.S. Looking forward to trying the new Proxmox backup thing as soon as I have some spare time :)
 
Just a note that the problem still seems to be here. I had to downgrade to NFSv3 to get NFS working.
 
I'm in the very same boat: exporting an NFSv4 share from an Ubuntu server, and it looks like the same issue still exists even in 2025, with v8.4.5.
Has anyone managed to create an NFSv4 storage successfully?
 
I have NFSv4 working properly; however, note that this is a problem with the user/protocol, not Proxmox.

So, various things off the top of my head (a config sketch follows below):
NFSv4 uses the username@domain syntax, not UIDs, unless you disable ID mapping on the server side. That is where many setups go into the mist: both sides need to have the same idmapd configuration when it comes to the domain.

Authentication happens over Kerberos unless the export is explicitly set to trust UIDs, so both sides need to be joined and able to verify their Kerberos identities, or the server needs to disable Kerberos authentication.

Root access (which Proxmox needs) is typically squashed or mapped to a disabled user on the server. Again, allow root to mount.
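(A minimal sketch of those two server-side pieces; the domain example.lan, the export path, and the client subnet are placeholders:)
Code:
# /etc/idmapd.conf -- the [General] Domain must match on client and server
[General]
Domain = example.lan

# /etc/exports -- sec=sys trusts client UIDs, no_root_squash lets root in
/srv/export 10.1.80.0/24(rw,sec=sys,no_root_squash)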
 
I have NFSv4 working properly; however, note that this is a problem with the user/protocol, not Proxmox.
Thanks @guruevi for your reply! It doesn't look like that's the problem here, though.

10.1.20.62 == NFS4 Server
10.1.80.10 == Proxmox (client)

This is what I have on the server-side:
Code:
# exportfs -v|grep pox -A2
/nfs_exports/poxVMs 10.1.80.10(sync,wdelay,nohide,no_subtree_check,sec=sys,rw,insecure,no_root_squash,no_all_squash)
/nfs_exports        <world>(sync,wdelay,hide,crossmnt,no_subtree_check,fsid=0,sec=sys,ro,secure,no_root_squash,no_all_squash)

If I try manually from the CLI, it mounts the share without any issue (note that the export path is relative to the fsid=0 pseudo-root above):
Code:
root@pve:~# mount -t nfs4 10.1.20.62:/poxVMs /mnt/pve/remote-nfs
root@pve:~# mount -l|grep nfs
10.1.20.62:/poxVMs on /mnt/pve/remote-nfs type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.1.80.10,local_lock=none,addr=10.1.20.62)
root@pve:~#
root@pve:~# touch /mnt/pve/remote-nfs/testFile.txt
root@pve:~# ls -l /mnt/pve/remote-nfs
total 0
-rw-r--r-- 1 root root 0 Aug  2 00:32 testFile.txt

Now the issue: in the GUI, it cannot list the shares at all:
[screenshot: the Add NFS storage dialog fails to list any exports]

It looks like it internally tries to use showmount --exports to list the shares, and that fails, since NFSv4 doesn't use the MOUNT RPC protocol at all.
Code:
root@pve:~# pvesm scan nfs 10.1.20.62
clnt_create: RPC: Program not registered
command '/sbin/showmount --no-headers --exports 10.1.20.62' failed: exit code 1

How did you get around that issue? Are you sure you're not running NFSv3 as well, alongside v4?

-S
 
Okay, made some progress.
If I add this section to /etc/pve/storage.cfg:
Code:
nfs: remote-nfs
    export /poxVMs
    path /mnt/pve/remote-nfs
    server 10.1.20.62
    content backup,vztmpl
    options soft,vers=4.2

then it actually works and shows up under Storage as well:
[screenshot: the NFS storage listed in the GUI]

Also managed to send some VM backups there too. I'm pretty sure it's a bug in the GUI, unless I'm missing something here.
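(For anyone replicating this: with the section in place, the storage state can also be checked from the CLI; remote-nfs is the storage ID from the snippet above:)
Code:
pvesm status --storage remote-nfs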

-S
 
Yes, I do use NFSv4 to mount. In my implementation (Isilon), all NFS mounts show up on showmount regardless of protocol or permission; mountd is on a different port, using RPC. So you could theoretically implement the same on other file system providers by having mountd advertise your NFSv4 mounts (according to the docs, it's just a bunch of text in /var/lib/nfs/rmtab).

You should be able to just fill in the field; the drop-down is only a suggestion. Note that showmount was never guaranteed to work properly or to be correct (see its docs).

So this works ("test" is an obviously invalid server name). If you do not tick 'Enable', it won't even test whether the server is online; if 'Enable' is ticked, it does a test.
[screenshot: the Add NFS dialog accepting a manually typed server name]

I agree that the Linux implementation and 'standard' for NFSv4 do not "require" or "use" an RPC info service like v3 did. This is not a bug.
 
So in my implementation (Isilon), all NFS mounts show up on showmount regardless of protocol or permission,
That's not what I see here when the server is running on Ubuntu/Debian. I tried with three different servers and got the very same result.

This is when NFSv3 is running alongside v4:
Code:
root@dvcwse57:~# cat /proc/fs/nfsd/versions
-2 +3 +4 +4.1 +4.2
root@dvcwse57:~# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  33335  status
    100024    1   tcp  33335  status
    100005    3   udp  33334  mountd
    100005    3   tcp  33334  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049
    100021    1   udp  33333  nlockmgr
    100021    3   udp  33333  nlockmgr
    100021    4   udp  33333  nlockmgr
    100021    1   tcp  33333  nlockmgr
    100021    3   tcp  33333  nlockmgr
    100021    4   tcp  33333  nlockmgr

Compare that to the output when running NFSv4 only:
Code:
root@dvcwse57:~# cat /proc/fs/nfsd/versions
-2 -3 +4 +4.1 +4.2
root@dvcwse57:~# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  33335  status
    100024    1   tcp  33335  status
    100003    4   tcp   2049  nfs

With no mountd daemon running, there is no way showmount can work on the client side, and it looks like that is exactly what Proxmox tries to do to get the list of shares for the GUI. It should not be doing that for NFSv4-only servers.

As far as I can see, if you haven't masked rpcbind.service and rpcbind.socket, and didn't set RPCNFSDOPTS="-N 2 -N 3", then NFSv3 is still running on the system (see the sketch below).
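(For reference, a sketch of the NFSv4-only server setup on Debian/Ubuntu; the file location follows the nfs-kernel-server package, and the RPCMOUNTDOPTS line is my assumption for also silencing mountd:)
Code:
# /etc/default/nfs-kernel-server
RPCNFSDOPTS="-N 2 -N 3"
RPCMOUNTDOPTS="--manage-gids -N 2 -N 3"

# then disable the v3 plumbing and restart
systemctl mask rpcbind.service rpcbind.socket
systemctl restart nfs-server
cat /proc/fs/nfsd/versions   # expect: -2 -3 +4 +4.1 +4.2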
 
Yes, I'm not running a Linux server; Isilon is proprietary, based on FreeBSD. There is a mountd-like API running that simply sends back "all" mounts, regardless of what the client may have access to.
 
PVE mounts NFS at v4.2 by default, but it still needs v3 enabled on the NFS server to enumerate the exported shares for the storage status in the GUI, that's all.
This. NFSv4 works great out of the box. I use it for everything. I specify NFS 4.2 when I add NFS shares in the GUI.

But I still need NFSv3 enabled on the server so PVE can use the RPC calls to enumerate available shares in the GUI.

Fixing this in a sensible way is a non-trivial issue from the PVE side.
 
But I still need NFSv3 enabled on the server so PVE can use the RPC calls to enumerate available shares in the GUI.
It should be mentioned somewhere, IMO, especially for newcomers like me. I spent half a day rebuilding the NFS server a number of times, thinking something was wrong with my v4 configuration, until I noticed that pvesm nfsscan actually uses showmount under the hood.
 
Agreed. I spent about three days adjusting TrueNAS and Proxmox and verifying firewall settings before I realized it was working as intended.

The docs on using NFS storage would be a good place to add this information. A feature request should be opened on the Proxmox Bugzilla site.
 