Mount no longer works in Proxmox 6 - NFS - Synology

root@proxmox:~# mount -vvvvt nfs 192.168.x.x:/volume1/nfs /mnt/pve/NFS2
mount.nfs: timeout set for Tue Mar 24 15:41:37 2020
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.x.x,clientaddr=192.168.x.x'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'vers=4.1,addr=192.168.x.x,clientaddr=192.168.x.x'

It doesn't look like it's trying NFS v3, and NFS version 4.1 is enabled on the NAS.
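In case it helps to rule out the version negotiation, you could try forcing v3 explicitly; a quick sketch, using the same export and mount point as above:

Bash:
# Force NFSv3 instead of letting mount.nfs negotiate downwards from 4.2
mount -vvvv -t nfs -o vers=3 192.168.x.x:/volume1/nfs /mnt/pve/NFS2

# If that also fails, check whether rpcbind and mountd are reachable at all
rpcinfo -p 192.168.x.x
showmount -e 192.168.x.x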
 
Hi guys,
I'm in the same boat, but I believe my conditions are a bit different.
Maybe this can help:
I have the latest Proxmox v6.1-8 and a very old IBM v7000 Unified storage.
The storage works well with several dozen machines, so there's no problem with it.
But it only supports NFSv2 and v3!

I also have a CentOS box with an NFS server on it, and it supports NFSv3 and v4.
So the CentOS share mounts within a second, while the IBM does not.
Trying to manipulate things by adding options to storage.cfg and so on doesn't give the right results.
The manual mount works just fine.

I tried tcpdump and compared it to a Proxmox v5.3-9.
It seems like in version 5 the showmount command tries v2 and then goes up to try v3.
In version 6 it starts with version 4.

But even if I choose version 3 in the GUI or put it manually in storage.cfg, it doesn't help.
The storage always shows a question mark.
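For reference, "putting it in manually" means an NFS entry roughly like this in /etc/pve/storage.cfg (just a sketch: the storage ID and content types here are made up, the server and export are the ones from the mount attempt in the addition below):

Code:
nfs: ibm-nfs
        export /ibm/Prox
        path /mnt/pve/ibm-nfs
        server 192.168.10.10
        content images,backup
        options vers=3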

I also saw another post about version 4.2 versus 4.1.
Maybe that's the direction to look in?

Any ideas?

addition:
mount -tvvvv nfs 192.168.10.10:/ibm/Prox /mnt/pve/Prox
mount: bad usage
Try 'mount --help' for more information.
root@pve2n:~# mount -vvvvt nfs 192.168.10.10:/ibm/Prox /mnt/pve/Prox
mount.nfs: timeout set for Tue Mar 24 18:17:08 2020
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.10.10,clientaddr=192.168.10.10'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'vers=4.1,addr=192.168.10.10,clientaddr=192.168.10.10'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'vers=4.0,addr=192.168.10.10,clientaddr=192.168.10.10'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'addr=192.168.10.10'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.10.10 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.10.10 prog 100005 vers 3 prot UDP port 32767
 
root@proxmox:~# mount -vvvvt nfs 192.168.x.x:/volume1/nfs /mnt/pve/NFS2
Mount on its own is not the issue reported. The showmount doesn't seem to work properly, and with that, usually rpcinfo isn't able to connect either. The mount will try to negotiate the NFS version from 4 -> 3 -> 2; once it finds one the server accepts, the mount proceeds, otherwise it gives up. Just check with mount.
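In other words, the storage status check and the mount itself are separate code paths. A rough way to see which one fails, using the addresses from the post above:

Bash:
# What the status check relies on: list the exports via mountd
showmount --no-headers --exports 192.168.x.x

# Can we reach rpcbind/portmapper at all?
rpcinfo -p 192.168.x.x

# The actual mount, which negotiates the NFS version on its own
mount -v -t nfs 192.168.x.x:/volume1/nfs /mnt/pve/NFS2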

Whatever it is that prohibits the mounting is very circumstantial. And I can't reproduce it.

I tried tcpdump and compared it to a Proxmox v5.3-9.
It seems like in version 5 the showmount command tries v2 and then goes up to try v3.
In version 6 it starts with version 4.
showmount still tries the other protocol versions.
 
Whatever it is that prohibits the mounting is very circumstantial. And I can't reproduce it.
showmount still tries the other protocol versions.
I have a NAS exposed to the web. Would it help you to have access to it for testing purposes? My best guess is that it's a bug in Debian's implementation of rpcbind, but I may be wrong.
 

On the sourceforge link it seems "steved" removed the default functionality of random port opening in July 2018 and made it a kernel option. He intends to remove the functionality entirely later on, once nobody uses it, because people complained about it, but from what I can read we're going to have to recompile the kernel with --enable-rmtcalls if that is the actual source of the problem. I don't know if I'm looking at the correct repository, but I see no such fixes in here:

https://git.proxmox.com/?p=pve-kernel.git;a=log

Assuming this is the cause, I doubt we can get that as default compiled behavior in the pve-kernel unless someone can offer up an environment where the issue can be reproduced. I personally cannot.
 
Actually, here are a number of links proving it's a problem with the Ubuntu kernel after all.
This problem is present in Ubuntu 19.10 or even earlier:
If that is the case, then running a kernel prior to 5.3 would fix the issue. But I don't believe it at the moment.

On the sourceforge link it seems "steved" removed the default functionality of random port opening in July 2018 ... we're going to have to recompile with --enable-rmtcalls if that is the actual source of the problem.
I don't think it is that, since I tried with older server versions (jessie, stretch, DSM 6.2) and it still worked.

But I found a commit that could give a hint. There is an initial UDP packet sent for getport, and that might hang if UDP is blocked. Debian Buster is on libtirpc3 1.1.4.
http://git.linux-nfs.org/?p=steved/libtirpc.git;a=commit;h=5e7b57bc20bd
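If someone wants to check whether that initial UDP getport call is what hangs on their network, something along these lines should show it (a sketch; run the capture and the queries in separate terminals, and replace 192.168.x.x with the NAS address):

Bash:
# Terminal 1: watch portmapper traffic (TCP and UDP port 111) to and from the NAS
tcpdump -ni any port 111

# Terminal 2: ask rpcbind for the mountd program (100005), explicitly over UDP
# and then over TCP, and see which of the two gets an answer
rpcinfo -T udp 192.168.x.x 100005
rpcinfo -T tcp 192.168.x.x 100005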

I have a NAS exposed to the web. Would it help you to have access to it for testing purposes?
Thanks for the offer, but without a clear reproducer it can't be fixed for future versions to come.
 
Good news. I think I may have solved my issue at least.

On my Synology there are two ethernet ports. A fairly long time ago I configured both of them and then forgot about it, but only ever connected a cable to one of them. When doing "strace rpcinfo -p 192.168.0.1" (the interface that currently has a cable connected), it complains about not being able to connect to 192.168.10.2, which is my secondary interface that I never physically connected with a cable.

So as a test I removed the IP configuration for the non-connected port on my Synology and set it to 'dhcp' instead, which is the only valid alternative configuration in the NAS web GUI.

Then all of a sudden my NFS drives popped up in Proxmox again.

I still don't know why rpcinfo looks at the non-connected (but configured) interface as a target, but apparently the Synology NAS reports a seemingly irrelevant IP, and rpcinfo is therefore misled and can't connect.
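For anyone who wants to check for the same thing, tracing only rpcinfo's network syscalls is enough to spot it being pointed at an address it can't reach (a sketch, using the address from my setup):

Bash:
# Show where rpcinfo actually tries to connect; an unreachable second address
# on the NAS shows up here as failing connect() calls
strace -f -e trace=network rpcinfo -p 192.168.0.1 2>&1 | grep connect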
 
This thread is quite old, but I am just curious whether anything definitive has been figured out with Proxmox 6 and NFS.

We are in a similar situation. We have two Proxmox clusters running v4 and v5 with no issues connecting via NFS to a Synology NAS that we use for VM backups. Our Proxmox 6 cluster has been having all kinds of weird issues connecting via NFS to the same Synology NAS.

The Proxmox 6 server can mount the Synology NAS, but trying to pass traffic basically causes the NFS mount to show as offline.
 
I found a workaround for this issue. In my case, I just removed the NFS mounts and set them up again for NFS v3, and they worked like a charm. There must be something with NFS v4 and our NAS that is not compatible or not working correctly.
 
This is still an issue. It happens on a fresh install of Proxmox 6; on 5.4 it doesn't happen.
 
We had the same issue with Proxmox VE 6.2-4 on Debian 10 and were able to implement a workaround for this problem.
After some research, we concluded that the problem is an issue with the "showmount" command.
When executing the command "showmount <IP of NAS>" we always received a "No route to host" error, but we were able to mount the NFS share using "mount -t nfs <ip>:<path> <mountpoint>" just fine.
Furthermore, the issue only exists on Proxmox 6 (Debian 10), not Proxmox 5 (Debian 9). (By the way: the problem is also reproducible on clean Debian 10 installations without Proxmox being involved. It works just fine on Debian 9, though.)

Nevertheless, we worked around the problem by replacing the /sbin/showmount file with a custom bash script that simply echoes the output we expect.
First, we installed a fresh copy of Proxmox 5, where we knew that showmount gives valid output. We ran showmount <ip> on this installation and wrote the output down.
Next, we went back to our Proxmox 6 installation, renamed showmount in /sbin/ to showmount_orig, and created a new file called showmount (simply using nano showmount).
This file should be a simple bash script that echoes the output we expect.
For instance, the file could look like this:

Bash:
#!/bin/bash
echo "Export list for 192.168.90.10"
echo "/volume5/Proxmox6_Cluster_Test 192.168.40.0/24"

After you save the script, just run chmod 777 showmount to make it executable.

After we implemented this very hacky fix, all expected export locations were displayed correctly in the Proxmox web UI, and we successfully added a working NFS Synology share to our cluster.
If this solution is too ugly for you, you can simply expand the bash script a bit by adding logic like "if the first parameter is our NFS IP, echo the valid result; otherwise call the real showmount".
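For completeness, that extended wrapper could look roughly like this (a sketch; the NAS_IP variable and the echoed export line are just the example values from above, so adjust them to the output you noted down on Proxmox 5):

Bash:
#!/bin/bash
# Hacky /sbin/showmount wrapper: fake the export list for our known-good NAS,
# fall back to the original binary for everything else.
# Note: Proxmox calls "showmount --no-headers --exports <server>", so the
# server address is the last argument, not the first.
NAS_IP="192.168.90.10"

if [ "${@: -1}" = "$NAS_IP" ]; then
    echo "Export list for $NAS_IP"
    echo "/volume5/Proxmox6_Cluster_Test 192.168.40.0/24"
else
    exec /sbin/showmount_orig "$@"
fi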

I hope this helps anyone :)

Sincerely,
André Schärpf
SophisTex GmbH
 
We had the same issue with Proxmox VE 6.2-4 on Debian 10 ... we worked around the problem by replacing the /sbin/showmount file with a custom bash script that simply echoes the output we expect.

I just reproduced this on a fresh 6.0 box and applied this exact fix. Worked like a charm for me. I think we finally have a root cause to this issue, boys.
 
We had the same issue with Proxmox VE 6.2-4 on Debian 10 ... we worked around the problem by replacing the /sbin/showmount file with a custom bash script that simply echoes the output we expect.

It's not showmount; it's actually rpcbind 1.2.5-0.3+deb10u1 that is at fault.
On Ubuntu the issue does not appear, even when using an Ubuntu container (which shares the kernel with Proxmox).
I guess the best we can do is wait.

Btw, if you're looking for a solution, there is a simpler approach than the one suggested above. Edit this file:
/usr/share/perl5/PVE/Storage/NFSPlugin.pm

by commenting out the following line in the check_connection function:
Code:
if (my $err = $@) {
    # return 0;  ### this line needs to be commented out
}

then reload the service via systemctl restart pvedaemon
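If you go this route, keep a backup and remember that the file is shipped by the libpve-storage-perl package, so a later update may silently undo the change. A rough sequence (edit the file by hand rather than scripting it, since the surrounding code can differ between versions):

Bash:
# Keep the original plugin around
cp /usr/share/perl5/PVE/Storage/NFSPlugin.pm /usr/share/perl5/PVE/Storage/NFSPlugin.pm.orig

# Comment out the "return 0;" inside check_connection with your editor of choice
nano /usr/share/perl5/PVE/Storage/NFSPlugin.pm

# Pick up the change
systemctl restart pvedaemon pvestatd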
 
I had the same problem with NFS mount, but with a FreeNAS. This workaround worked for me too.
Thanks Vladimir.
 
# return 0; ### this line needs to be commented out

Thank you, Vladimir. It took a lot of googling and searching to find, but your solution worked for an NFS share from Ubuntu 20.04 to Proxmox.
 
Hi,

I've got the same problem with NFS (Synology DS218). I have three Proxmox hosts:
HOME - 6.2-4, NVM - 6.2-11, SSD - 5.4-13
On SSD it works fine. On NVM, NFS doesn't work, but on HOME it works fine. It's weird because I didn't change any configuration.
My workaround for this is to copy the NFS definition on the SSD Proxmox from
/etc/pve/storage.cfg
and paste it into the same file on NVM.

Works perfectly :)
 
I was able to "hack" it to work. It seems that (at least in our case) the mount works, but what fails is the detection of whether the storage is online. I was able to bypass this by editing the file /usr/share/perl5/PVE/Storage/NFSPlugin.pm and replacing:

Code:
 my $cmd = ['/sbin/showmount', '--no-headers', '--exports', $server];

by

Code:
my $cmd = ['/usr/sbin/rpcinfo', $server];

And then restarting the pvestatd and pvedaemon services.

It seems that the problem occurs when the NFS share is behind NAT. The commands showmount and rpcinfo -p fail because they try to connect to the share's local address, not its public address. Using plain rpcinfo solves this problem for us.
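Before patching, a quick way to confirm you are in the same situation is to run both probes by hand against the NAS and compare (a sketch; replace <server> with the NAS address from your storage.cfg):

Bash:
# What the stock plugin runs for the online check; behind NAT this can hang or
# fail because mountd reports the server's internal address
/sbin/showmount --no-headers --exports <server>

# The replacement probe from this post; it only needs to reach rpcbind itself
/usr/sbin/rpcinfo <server>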
 