NFS Share won't mount

Donovan Hoare

Good morning all

I have a Synology NAS where I have bonded the 4 Gigabit ports.
The IP address of the bonded interface is 192.168.14.231.
This mounts perfectly and I can use the storage.

However, I created a new cluster with Ceph that has a 10 GbE backend.
So I put a 10 GbE card into the NAS and configured it as
10.0.45.231.

I then updated the NFS permissions to include 10.0.45.0/24.
So on the cluster I already have the share mounted (via the original IP):


Code:
From storage.cfg
nfs: SYN_12Bay
        export /volume1/Proxmox
        path /mnt/pve/SYN_12Bay
        server 192.168.14.231
        content snippets,rootdir,iso,vztmpl,backup,images
        prune-backups keep-all=1
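
Regarding the NFS permissions change, on the NAS side that should just end up as an extra client entry on the export, roughly like the sketch below (illustrative only; DSM generates the actual export options from the NFS permissions UI, so they may differ):

Code:
# /etc/exports on the NAS after adding the 10G subnet (illustrative sketch)
/volume1/Proxmox 192.168.14.0/24(rw,sync,no_root_squash) 10.0.45.0/24(rw,sync,no_root_squash)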

Now I'm trying to add the same NAS again using the other IP, so it uses the faster interface.
However, it's not showing the mount points.
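
As far as I understand, the GUI fills that export list by scanning the server, and the same scan can be run from the CLI (pvesm scan nfs on current Proxmox, pvesm nfsscan on older versions):

Code:
# CLI equivalent of the export scan the GUI performs when adding NFS storage
pvesm scan nfs 10.0.45.231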

However, when I mount it manually from the host, it does work using both version 3 and version 4:
Code:
mount -t nfs -o vers=4 10.0.45.231:/volume1/Proxmox /mnt/temp/

df -h | grep temp
10.0.45.231:/volume1/Proxmox      37T   23T   15T  62% /mnt/temp

mount -t nfs -o vers=3 10.0.45.231:/volume1/Proxmox /mnt/temp/

df -h | grep temp
10.0.45.231:/volume1/Proxmox      37T   23T   15T  62% /mnt/temp

Other test commands, as suggested on other websites:
Code:
rpcinfo -u 10.0.45.231 nfs 4
rpcinfo: RPC: Unable to receive; errno = No route to host
program 100003 version 4 is not available

rpcinfo -u 10.0.45.231 nfs 3
rpcinfo: RPC: Unable to receive; errno = No route to host
program 100003 version 3 is not available

rpcinfo -t 10.0.45.231 nfs 3
10.0.45.231: RPC: Remote system error - No route to host

rpcinfo -t 10.0.45.231 nfs 4
10.0.45.231: RPC: Remote system error - No route to host

showmount -e 10.0.45.231
rpc mount export: RPC: Unable to receive; errno = No route to host

ping 10.0.45.231
PING 10.0.45.231 (10.0.45.231) 56(84) bytes of data.
64 bytes from 10.0.45.231: icmp_seq=1 ttl=64 time=1.06 ms
64 bytes from 10.0.45.231: icmp_seq=2 ttl=64 time=0.283 ms
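
Ping works, yet every RPC call returns "No route to host", which as far as I know usually means something is actively rejecting the packets (e.g. a firewall answering with an ICMP unreachable) rather than an actual routing problem. A quick port check from the host, assuming nc is installed:

Code:
# Check whether rpcbind (111) and NFS (2049) answer on the new interface
nc -vz 10.0.45.231 111
nc -vz 10.0.45.231 2049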

Any help would be appreciated
 
path /mnt/pve/SYN_12Bay
Maybe try a different storage name & resulting mountpoint for the NFS storage. So try SYN_12Bay_10g with path /mnt/pve/SYN_12Bay_10g.
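
From the CLI that would be something along these lines (untested sketch; adjust the content types to match your existing entry):

Code:
# Add the same export under a new storage ID via the 10G IP (illustrative)
pvesm add nfs SYN_12Bay_10g --server 10.0.45.231 --export /volume1/Proxmox --content images,rootdir,iso,vztmpl,backup,snippets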

My thinking is that the cluster has somehow already used the storage name/mountpoint & cannot reset it presently cluster-wide.

Have you already tried removing the old NFS storage from the cluster & rebooting all nodes, & then adding it with the new, faster IP?
 
Proxmox uses "rpcinfo" to probe NFS for health/response. As you discovered, the NAS is not responding to RPC on the new IP. This may be a matter of restarting services on the NAS. I presume "rpcinfo" is responding on the 192.x IP?

When you mount the export directly, no rpcinfo-like requests are made.
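
A quick way to confirm the difference is to run the same probes against the working interface (same commands as above, just the 1G IP):

Code:
# If these answer while the 10G IP does not, RPC is being filtered or
# bound per-interface on the NAS
rpcinfo -t 192.168.14.231 nfs 3
showmount -e 192.168.14.231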

