[SOLVED] NFS mount won't "online"

Rob4224
Sep 5, 2017
When attempting to set up NFS, I am unable to get mounts to load via the GUI or through editing /etc/pve/storage.cfg
Code:
nfs: VMDKs
        export /mnt/tank/VMDK
        path /mnt/pve/VMDKs
        server 192.168.0.3
        content images,vztmpl,rootdir,backup
        maxfiles 5
        options vers=4,tcp
I AM able to get the NFS mounts to load via mount:
Code:
mount -t nfs -o vers=4,tcp 192.168.0.3:/mnt/tank/VMDKs /mnt/pve/VMDKs
mount -t nfs -o vers=4,tcp 192.168.0.3:/mnt/tank/Storage/OS_Disks /mnt/pve/OS_Disks

Log is puking out the following every few seconds:
Sep 05 02:46:27 pmox1 pvestatd[1771]: storage 'VMDKs' is not online

I'm sure I am missing something really simple and stupid but alas, I'm missing it. Can anyone point me the right direction, even if it's just search terms?

Edit: vixed my typo. I also fixed it, but spelling things correctly is totally overrated.
 
What does
Code:
# pvesm nfsscan
show? Did you follow the wiki?
 
Pabernethy:
I have followed the wiki. The current version of storage.cfg was generated by the GUI, but previously I had edited storage.cfg directly, following the wiki.
Code:
# pvesm nfsscan 192.168.0.3
/mnt/tank/VMDK    (everyone)
/mnt/tank/Storage (everyone)
Thanks for the quick reply!

Edit: I'm also idling in IRC if that is convenient for you to chat there.
 
The only thing I see is that you mount manually with vers=4 but configure the storage with vers=3. You probably want to use version 4.
 
I tried changing the options on storage.cfg a couple times earlier tonight.
tcp, soft, vers=4, vers=3

Also tried tinkering with removing the content and maxfiles lines (because I don't know any better). No joy thus far. It's just odd that the mount command works but storage.cfg doesn't... I'm not sure what to make of it.
 
Please post the output of
Code:
# time pvesm nfsscan 192.168.0.3
 
Here ya go:

Code:
# time pvesm nfsscan 192.168.0.3
/mnt/tank/VMDK    (everyone)
/mnt/tank/Storage (everyone)

real    0m10.337s
user    0m0.704s
sys     0m0.144s
 
Ok, you're getting a timeout. To check whether the storage is online, we run showmount with a timeout of 2 seconds. So the next question is: why is that NFS server so slow to answer? It may be DNS related. Could you add an entry for the client machine to /etc/hosts on the NFS server?
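As an illustration, such an entry on the NFS server might look like the fragment below; the hostname pmox1 comes from the log lines above, while the client IP and the domain suffix are assumed placeholder values:

```
# /etc/hosts on the NFS server (192.168.0.3)
# 192.168.0.2 is an assumed example address for the Proxmox client
192.168.0.2    pmox1.localdomain    pmox1
```

With the client's name resolvable locally, the server no longer stalls on a reverse DNS lookup, so showmount should answer well within the 2-second limit.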
 
That was it! I completely forgot about NFSv4 getting cranky when DNS isn't playing nice. Thanks for sitting with me through this. You were a great help!

And a screenshot just to make you shudder a bit :p
 

Attachments

  • Screen Shot 2017-09-05 at 4.59.45 AM.png
Great. Please don't forget to mark the thread as 'solved' so others with a similar problem may easily find a possible solution.

That's enough storage for a few games, or whatever you're hoarding there ;)
 
Hi,

I have the same problem.

Proxmox VE 5.2-6

Content of /etc/pve/storage.cfg
[...]
nfs: NFS
        export /mnt/usb/nfs
        path /mnt/pve/nfs
        server 79.189.***.***
        content backup
        maxfiles 8
        options vers=4,soft

I'm able to mount NFS filesystem from command line:

time mount -v -o nfsvers=4,soft -t nfs 79.189.***.***:/mnt/usb/nfs /mnt
mount.nfs: timeout set for Fri Jul 27 06:33:31 2018
mount.nfs: trying text-based options 'soft,vers=4,addr=79.189.***.***,clientaddr=37.187.***.***'

real 0m0.956s
user 0m0.001s
sys 0m0.004s

mount
[...]
79.189.***.***:/mnt/usb/nfs on /mnt type nfs4 (rw,relatime,vers=4.0,rsize=262144,wsize=262144,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=37.187.***.***,local_lock=none,addr=79.189.***.***)

Unfortunately, I am not able to mount this storage from the Proxmox GUI.
I get the error: storage 'NFS' is not online (500)

I added some debugging on the client side:
Code:
rpcdebug -m nfs -s proc
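For reference, a sketch of toggling that client-side debugging (rpcdebug ships with nfs-common/nfs-utils and needs root; the messages land in the kernel log):

```shell
# Enable NFS procedure-level debug output in the kernel log
rpcdebug -m nfs -s proc

# ...reproduce the mount attempt, then switch the flag off again
rpcdebug -m nfs -c proc
```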

When mounting from the command line, these messages appear in /var/log/messages:

Jul 27 06:38:57 pkiui kernel: [1761662.300253] NFS call setclientid auth=UNIX, 'Linux NFSv4.0 37.187.***.***/79.189.***.*** tcp'
Jul 27 06:38:57 pkiui kernel: [1761662.329036] NFS reply setclientid: 0
Jul 27 06:38:57 pkiui kernel: [1761662.329067] NFS call setclientid_confirm auth=UNIX, (client ID af305b5b13000000)
Jul 27 06:38:57 pkiui kernel: [1761662.360257] NFS reply setclientid_confirm: 0
Jul 27 06:38:58 pkiui kernel: [1761662.628145] NFS call lookup mnt
Jul 27 06:38:58 pkiui kernel: [1761662.656516] NFS reply lookup: 0
Jul 27 06:38:58 pkiui kernel: [1761662.656550] NFS call lookup mnt
Jul 27 06:38:58 pkiui kernel: [1761662.684655] NFS reply lookup: 0
Jul 27 06:38:58 pkiui kernel: [1761662.860051] NFS call lookup usb
Jul 27 06:38:58 pkiui kernel: [1761662.888463] NFS reply lookup: 0
Jul 27 06:38:58 pkiui kernel: [1761662.888496] NFS call lookup usb
Jul 27 06:38:58 pkiui kernel: [1761662.916911] NFS reply lookup: 0
Jul 27 06:38:58 pkiui kernel: [1761663.092358] NFS call lookup nfs
Jul 27 06:38:58 pkiui kernel: [1761663.120813] NFS reply lookup: 0

Unfortunately, no such messages appear in /var/log/messages when mounting from the Proxmox GUI.

May I ask for some suggestions?

Thank you in advance

Peter
 
Hi,

I have no access to that firewall now, but as far as I can remember this was related to
LOCKD_TCPPORT
LOCKD_UDPPORT
STATD_PORT
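For anyone hitting the same thing: these variables typically live in /etc/sysconfig/nfs on RHEL-family NFS servers (the path and exact names vary by distro), and the port numbers below are arbitrary examples, not values from this thread:

```
# /etc/sysconfig/nfs -- pin the normally dynamic RPC services to fixed ports
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
STATD_PORT=662
# Then allow these ports, plus 111 (rpcbind) and 2049 (nfsd), through the firewall.
```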

Peter
 
I truly appreciate the work that has been done on Proxmox, as it remains one of the best hypervisors for homelabs. I mean homelabs, not production environments. Given the work that has already been invested, it is very hard to verify every single feature before a new version is released, which is also a reason why I believe Proxmox is more popular in homelabs. I am also having issues with NFS shares and have been trying to resolve them for a couple of weeks. Hopefully a patch or update will be released.
 
Hi,

Sorry for bumping an old thread, but I have the same issue, with some exceptions.

Freshly installed Proxmox 6.2-4 (with all latest updates), a 4-node cluster connected through a 10G switch, and a Synology NAS in the same broadcast domain and IP network (no firewall, no ACLs on the switch side).

Before the cluster re-install (it was PVE 6.0-1) I was able to connect to the NFS share.
After the re-install I can't create an NFS datastore via the GUI (error 500) or the CLI:
Code:
pvesm add nfs ISO2 --path /mnt/pve/ISO --server 10.xxx.xxx.xxx --options vers=3,nolock,tcp --export /volume1/ISO --content iso,vztmpl
create storage failed: error during cfs-locked 'file-storage_cfg' operation: storage 'ISO2' is not online
I also can't scan the NFS share (I use the IP of the NAS, not its FQDN):
Code:
pvesm nfsscan <nas-ip>
rpc mount export: RPC: Timed out

Telnet from all nodes to the NAS on port 111 works. All nodes are able to mount the NFS share with the "mount" command:
Code:
mount -t nfs -O uid=1000,iocharset=utf-8 <nas-ip>:/volume1/ISO2 /home

I have already checked:
- connectivity from all nodes to the NAS (ping, telnet, ping with MTU 9000); response time less than 0.2 ms
- NFS folder permissions for the node IPs and the node subnet (even created an allow-all rule)
- direct connections from other *nix VMs/servers with the "mount" command work

I really can't understand what is going on. Everything worked on the same hardware before the re-install. The only difference in the switch configuration is VLANs: there were none before, and now all ports are tagged, but that has also been checked.
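Since the online check and pvesm nfsscan go through showmount (and thus rpcbind/mountd) rather than the NFS port itself, it may be worth confirming those RPC services answer from a node; <nas-ip> is the same placeholder used above:

```shell
# List the RPC programs registered on the NAS; mountd (100005) must appear
rpcinfo -p <nas-ip>

# Ask for the export list the same way pvesm does, with a short timeout
timeout 5 showmount -e <nas-ip>
```

If rpcinfo times out while a plain mount works, the mount protocol probe is being blocked or ignored, which matches the "rpc mount export: RPC: Timed out" symptom.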

Any idea?
 
Hi,

Sorry for bumping an old thread, but I have the same issue, with some exceptions.

Freshly installed Proxmox 6.2-4 (with all latest updates), a 4-node cluster connected through a 10G switch, and a Synology NAS in the same broadcast domain and IP network (no firewall, no ACLs on the switch side).
<snip>

Any idea?

Try the thread that seems to match your issue and exceptions: mount-no-longer-works-in-proxmox-6-nfs-synology
 
Hi,

Sorry for bumping an old thread, but I have the same issue, with some exceptions.
<snip>

Any idea?

Are you using the same name for the NFS storage as before?
 
