Mount no longer works in Proxmox 6 - NFS - Synology

iamnotarobot

New Member
Mar 22, 2020
root@proxmox:~# mount -vvvvt nfs 192.168.x.x:/volume1/nfs /mnt/pve/NFS2
mount.nfs: timeout set for Tue Mar 24 15:41:37 2020
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.x.x,clientaddr=192.168.x.x'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'vers=4.1,addr=192.168.x.x,clientaddr=192.168.x.x'

It doesn't look like it ever tries NFS v3, even though NFS version 4.1 is enabled on the NAS.
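In case it is useful, explicitly forcing v3 (just a quick test, assuming the export itself is fine) at least shows whether the v3 path works at all:

mount -vvv -t nfs -o vers=3 192.168.x.x:/volume1/nfs /mnt/pve/NFS2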
 

Ozz

Member
Nov 29, 2017
Hi guys,
I'm in the same boat but I believe my conditions are a bit different.
Maybe this can help:
I have the latest Proxmox v6.1-8 and a very old IBM v7000 Unified storage.
The storage works well with several dozen machines, so there's no problem with it.
But it only supports NFSv2 and v3!

I also have a CentOS box with an NFS server on it, and it supports NFSv3 and v4.
The CentOS export mounts within a second, while the IBM does not.
Manipulating storage.cfg by adding options and so on doesn't give the right results.
The manual mount works just fine.

I tried tcpdump and compared it to a Proxmox v5.3-9.
It seems like in version 5 the showmount command tries v2 and then goes up to v3.
In version 6 it starts with version 4.

But even if I choose version 3 in the GUI or put it manually in storage.cfg, it doesn't help.
The storage is always with a question mark.

I also saw another post about version 4.2 versus 4.1.
Maybe this is the direction?

Any ideas?
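For reference, the kind of storage.cfg entry I have been trying looks roughly like this (the content types are just what I use, the export and server values are from my setup):

nfs: Prox
        export /ibm/Prox
        path /mnt/pve/Prox
        server 192.168.10.10
        content backup,iso
        options vers=3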

addition:
mount -tvvvv nfs 192.168.10.10:/ibm/Prox /mnt/pve/Prox
mount: bad usage
Try 'mount --help' for more information.
root@pve2n:~# mount -vvvvt nfs 192.168.10.10:/ibm/Prox /mnt/pve/Prox
mount.nfs: timeout set for Tue Mar 24 18:17:08 2020
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.10.10,clientaddr=192.168.10.10'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'vers=4.1,addr=192.168.10.10,clientaddr=192.168.10.10'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'vers=4.0,addr=192.168.10.10,clientaddr=192.168.10.10'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'addr=192.168.10.10'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.10.10 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.10.10 prog 100005 vers 3 prot UDP port 32767
 

Alwin

Proxmox Staff Member
Staff member
Aug 1, 2017
root@proxmox:~# mount -vvvvt nfs 192.168.x.x:/volume1/nfs /mnt/pve/NFS2
Mount on its own is not the issue reported. The showmount doesn't seem to work properly, and with that, rpcinfo usually isn't able to connect either. The mount will try to negotiate the NFS version from 4 -> 3 -> 2 until it gives up. Once it finds one, the mount proceeds. Just check with mount.

Whatever it is that prohibits the mounting is very circumstantial, and I can't reproduce it.

I tried tcpdump and compared it to a Proxmox v5.3-9.
It seems like in version 5 the showmount command tries v2 and then goes up to v3.
In version 6 it starts with version 4.
showmount still tries the other protocol versions.
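A quick way to see what the NAS answers on the RPC side (substitute your NAS address) is:

rpcinfo -p 192.168.x.x
showmount -e 192.168.x.x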
 

Vladimir Bulgaru

Active Member
Jun 1, 2019
Mount on its own is not the issue reported. The showmount doesn't seem to work properly, and with that, rpcinfo usually isn't able to connect either. The mount will try to negotiate the NFS version from 4 -> 3 -> 2 until it gives up. Once it finds one, the mount proceeds. Just check with mount.

Whatever it is that prohibits the mounting is very circumstantial, and I can't reproduce it.


showmount still tries the other protocol versions.
I have a NAS exposed to the web. Would it help you to have access to it for testing purposes? My best guess is that it's a bug in Debian's implementation of rpcbind, but I may be wrong.
 

iamnotarobot

New Member
Mar 22, 2020
On the SourceForge link it seems "steved" removed the default behaviour of opening random ports (the remote-call feature) in July 2018 and made it an opt-in build option. He intends to remove the functionality entirely later on, once nobody uses it, because people complained about it. From what I can read, we would have to rebuild rpcbind with --enable-rmtcalls if that is the actual source of the problem. I don't know if I'm looking at the correct repository, but I see no such fixes here:

https://git.proxmox.com/?p=pve-kernel.git;a=log

Assuming this is the cause, I doubt we can get that compiled in by default unless someone can offer up an environment where the issue can be reproduced. I personally cannot.
 

Alwin

Proxmox Staff Member
Staff member
Aug 1, 2017
3,867
361
88
Actually, here are a number of links proving it's a problem with the Ubuntu kernel after all.
This problem is present in Ubuntu 19.10 or even earlier:
If that is the case, then running a kernel prior to 5.3 would fix the issue. But I don't believe it at the moment.

On the SourceForge link it seems "steved" removed the default behaviour of opening random ports (the remote-call feature) in July 2018 and made it an opt-in build option. He intends to remove the functionality entirely later on, once nobody uses it, because people complained about it. From what I can read, we would have to rebuild rpcbind with --enable-rmtcalls if that is the actual source of the problem. I don't know if I'm looking at the correct repository, but I see no such fixes here:
I don't think it is that, since I tried with older server versions (jessie, stretch, DSM 6.2) and it still worked.

But I found a commit that could give a hint. There is an initial UDP packet sent for getport, and that might hang if UDP is blocked. Debian Buster is on libtirpc3 1.1.4.
http://git.linux-nfs.org/?p=steved/libtirpc.git;a=commit;h=5e7b57bc20bd
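If someone wants to check whether that initial UDP getport call is the part that hangs, a capture of portmapper traffic on the PVE node should show it; vmbr0 is just an example interface name, adjust to your setup:

tcpdump -ni vmbr0 udp and host 192.168.10.10 and port 111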

I have a NAS exposed to the web. Would it help you to have access to it for testing purposes? My best guess is that it's a bug in Debian's implementation of rpcbind, but I may be wrong.
Thanks for the offer, but without a clear reproducer it can't be fixed for future versions.
 

iamnotarobot

New Member
Mar 22, 2020
Good news. I think I may have solved my issue at least.

On my Synology there are two ethernet ports. A fairly long time ago I configured both of them and forgot about it, but only one of them ever got a cable. When running 'strace rpcinfo -p 192.168.0.1', which is the interface that currently has a cable connected, it complains about not being able to connect to 192.168.10.2, my secondary interface, which I never physically connected with a cable.

So as a test I removed the IP configuration for the non-connected port on my Synology and set it to 'DHCP' instead, which is the only valid alternative configuration in the NAS web GUI.

Then all of a sudden my NFS drives popped up in Proxmox again.

I still don't know why rpcinfo ends up targeting the configured but non-connected interface, but apparently the Synology NAS advertises that seemingly irrelevant IP, and rpcinfo is therefore misled and can't connect.
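For anyone hitting the same thing, this is roughly the check that exposed it for me; the network-syscall filter keeps the strace output readable (addresses are from my setup):

strace -f -e trace=network rpcinfo -p 192.168.0.1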
 

ejmerkel

Active Member
Sep 20, 2012
This thread is quite old, but I am curious whether anything definitive has been figured out with Proxmox 6 and NFS.

We are in a similar situation. We have two Proxmox clusters running v4 and v5 with no issues connecting via NFS to a Synology NAS that we use for VM backups. Our Proxmox 6 cluster has been having all kinds of weird issues connecting via NFS to the same Synology NAS.

The Proxmox 6 server can mount the Synology NAS, but trying to pass traffic basically causes the NFS mount to show as offline.
 

ejmerkel

Active Member
Sep 20, 2012
I found a workaround for this issue. In my case, I just removed the NFS mounts and set them up again with NFS v3, and they worked like a charm. There must be something about NFS v4 and our NAS that is not compatible or not working correctly.
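For what it's worth, pinning an existing NFS storage to v3 without removing it should also be possible via the options property; the storage name here is a placeholder, and I would double-check the exact syntax with 'pvesm help set':

pvesm set synology-nfs --options vers=3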
 
