NFS Share of Synology

idijoost

New Member
Feb 6, 2023
Hello everyone,

I recently troubleshot a problem with an NFS mount on Proxmox VE. My Proxmox was a freshly installed server. I then tried to mount a Synology NFS share in Proxmox VE. Proxmox couldn't load the export, and when I filled in the export manually, Proxmox gave me an error saying that the host is offline.

I then made a post on Reddit about this issue, and a lot of people tried to help and gave their opinions on the problem. I eventually posted pictures there.

I noticed the following:
  1. Proxmox can ping the Synology NAS (even though, when trying to mount, Proxmox says the server is offline (error code 500)).
  2. I can mount the NFS share in a VM on the Proxmox host.
  3. I can mount the NFS share on another Linux PC in the network.
  4. No firewalls are blocking the connection.
  5. Tried fiddling with the NFS versions - no luck there.
  6. The showmount -e command (and another that I can't remember) causes the shell to freeze with no output until I CTRL+C out of it.
Eventually I decided to mount the NFS share in the CLI (shell) of Proxmox. And guess what: it mounted without any issues. Now I could call it a day and keep it the way it is. The only thing I really can't wrap my head around is why this is not working in the GUI. Is this a bug or something?
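For anyone wanting to reproduce the CLI workaround: a minimal sketch, assuming an example NAS address of 192.168.1.50 and an example export path /volume1/proxmox (substitute your own values):

```shell
# Manual NFS mount from the Proxmox shell (IP and export are examples):
mkdir -p /mnt/pve/synology
mount -t nfs 192.168.1.50:/volume1/proxmox /mnt/pve/synology

# Or register it as storage from the CLI, skipping the GUI's export scan:
pvesm add nfs synology --server 192.168.1.50 --export /volume1/proxmox \
    --path /mnt/pve/synology --content backup,images
```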

- To rule out that my install was broken, I reinstalled Proxmox on a node and tried the NFS mount from there. Same issue (host is offline).
- The firewalls in the pictures and such were all set to open for testing purposes.
- I run version 7.3-3.

I hope to hear from you soon.
 
Historically, NFS consisted of many services that worked together - https://web.mit.edu/rhel-doc/3/rhel-rg-en-3/ch-nfs.html
NFSv4 no longer requires rpcbind; some NAS vendors let users disable it, or have dropped it unconditionally.
Until recently, PVE relied on an RPC reply to determine the health of the NFS service. As the conversation in that thread indicates, the PVE developers have amended the health check to be more flexible.
The change has not been released yet.

Whether this is what you are experiencing can be confirmed by running the commands mentioned on the above page.
To be pedantic, the code Functions As Designed, so it's not a "bug". But it certainly needed an adjustment for current common configurations.
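A quick way to check, sketched here with an example NAS address of 192.168.1.50 (substitute your own): if both RPC-based commands hang or fail while a plain NFSv4 mount succeeds, you are most likely hitting the health-check behaviour described above.

```shell
# Both of these go through rpcbind, which some NAS firmwares disable:
rpcinfo -p 192.168.1.50       # queries the portmapper directly
showmount -e 192.168.1.50     # lists exports via the MOUNT RPC service

# A plain NFSv4 mount needs neither, which is why the CLI mount can
# still work while the GUI's health check reports the host as offline:
mount -t nfs -o vers=4.1 192.168.1.50:/volume1/proxmox /mnt/test
```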


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you very much. So basically (because I can mount it through the CLI) I am not going crazy, and this will probably be fixed in the future?

EDIT: any clue on when it is going to be released?
 
That's a question for the developers. They may decide to answer it here, or you can post the question to Bugzilla.
You can also apply the code change as a patch manually on your system and see if that resolves your issue.
https://www.howtogeek.com/415442/how-to-apply-a-patch-to-a-file-and-create-patches-in-linux/
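To illustrate how such a patch would be applied, here is a toy, self-contained demonstration of patch(1); the file name and the diff are invented for illustration only - the real PVE fix would be applied the same way against the module it actually targets.

```shell
# Work in a throwaway directory with a two-line dummy file:
cd "$(mktemp -d)"
printf 'line one\nline two\n' > NFSPlugin.pm

# A unified diff that changes the second line (entirely made up):
cat > fix.patch <<'EOF'
--- NFSPlugin.pm
+++ NFSPlugin.pm
@@ -1,2 +1,2 @@
 line one
-line two
+line two (patched)
EOF

patch -p0 < fix.patch        # applies the diff to NFSPlugin.pm
grep patched NFSPlugin.pm    # confirm the change landed
```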


This is still an issue with Proxmox 8.3. So basically, NFS backups to Synology NAS systems are off the table, which, from what I can tell, seems to be a pretty big deal. I would have thought it would've been fixed by now, since this seems to be a trivial issue with "rpcinfo" and "showmount" not reporting from Synology units. One very simple solution would be to add a new "Synology NFS" storage type that simply skips the "showmount" check.

For anyone having this issue, here is a "hacked" solution (the only one that worked for me): https://forum.proxmox.com/threads/mount-no-longer-works-in-proxmox-6-nfs-synology.56503/page-6

FTA (Credit to @Daywalker):

Nevertheless, we fixed the problem by replacing the /sbin/showmount file with a custom bash script that simply echoes the output we expect. First, we installed a fresh version of Proxmox 5, where we knew that showmount gives valid output. We ran showmount <ip> on this installation and wrote the output down. Next, we went back to our Proxmox 6 installation, renamed showmount in /sbin/ to showmount_orig, and created a new file called showmount (simply using nano showmount). This file should be a simple bash script which echoes the output we expect, i.e. the file could look like this:

Bash:
#!/bin/bash
echo "Export list for 192.168.90.10"
echo "/volume5/Proxmox6_Cluster_Test 192.168.40.0/24"

After you save the script, just run chmod 777 showmount to make it executable.

After we implemented this very hacky fix, all expected export locations were displayed correctly in the Proxmox-Web-UI and we successfully added a working NFS-Synology-Share to our Cluster.

If this solution is too ugly for you, you can simply expand the bash script a bit by adding logic like "if the first parameter is our NFS-IP, echo the valid result; otherwise trigger the real showmount".
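That expanded variant could look like the sketch below. It is shown as a function so the logic is easy to test in isolation; in practice the body would live in /sbin/showmount, with the real binary renamed to showmount_orig. The IP and export line are the example values from the quoted post - substitute your own.

```shell
#!/bin/bash
# Wrapper around showmount: fake the export list for our NAS (which never
# answers RPC queries), and hand everything else to the real binary.
fake_showmount() {
    local nas_ip="192.168.90.10"
    for arg in "$@"; do
        if [ "$arg" = "$nas_ip" ]; then
            # Our NAS: echo the canned output the Proxmox GUI expects.
            echo "Export list for $nas_ip"
            echo "/volume5/Proxmox6_Cluster_Test 192.168.40.0/24"
            return 0
        fi
    done
    # Any other host: pass through to the renamed original.
    exec /sbin/showmount_orig "$@"
}

fake_showmount 192.168.90.10
```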
 