NFS: storage 'bkp' is not online (500)

aasami

Hello all,
I know this issue has been covered several times on this forum, but my symptoms are a bit different.
I've been using NFS storage for a few months, but now the storage shows as offline in the Proxmox GUI,
although on two of the three nodes it is mounted and accessible from the command line.

Here is some more info. It looks similar on all three nodes, except that node 3 doesn't have the NFS share mounted like the other two.

Code:
[node1~]# ping -c1 nfs
PING nfs (10.1.0.105) 56(84) bytes of data.
64 bytes from nfs (10.1.0.105): icmp_seq=1 ttl=64 time=0.369 ms

--- nfs ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms
[node1 ~]#
[node1 ~]# df -h /mnt/pve/bkp
Filesystem      Size  Used Avail Use% Mounted on
nfs:/srv/bkp     465G  312G  154G  67% /mnt/pve/bkp
[node1 ~]#
[node1 ~]# ls /mnt/pve/bkp/
vzdump-qemu-100-2018_07_25-05_30_02.log
vzdump-qemu-100-2018_07_25-05_30_02.vma.lzo
vzdump-qemu-100-2018_07_26-05_30_02.log
vzdump-qemu-100-2018_07_26-05_30_02.vma.lzo
vzdump-qemu-100-2018_07_27-05_30_02.log
vzdump-qemu-100-2018_07_27-05_30_02.vma.lzo
vzdump-qemu-100-2018_07_30-05_30_02.log
vzdump-qemu-100-2018_07_30-05_30_02.vma.lzo
vzdump-qemu-100-2018_07_31-05_30_02.log
vzdump-qemu-100-2018_07_31-05_30_02.vma.lzo
vzdump-qemu-146-2018_05_21-11_39_02.log
vzdump-qemu-146-2018_05_21-11_39_02.vma.lzo
vzdump-qemu-153-2018_07_25-05_30_02.log
vzdump-qemu-153-2018_07_25-05_30_02.vma.lzo
vzdump-qemu-153-2018_07_26-05_30_02.log
vzdump-qemu-153-2018_07_26-05_30_02.vma.lzo
vzdump-qemu-153-2018_07_27-05_30_02.log
vzdump-qemu-153-2018_07_27-05_30_02.vma.lzo
vzdump-qemu-153-2018_07_30-05_30_02.log
vzdump-qemu-153-2018_07_30-05_30_02.vma.lzo
vzdump-qemu-153-2018_07_31-05_30_02.log
vzdump-qemu-153-2018_07_31-05_30_02.vma.lzo
[node1 ~]#
[node1 ~]# grep -A 7 bkp /etc/pve/storage.cfg
nfs: bkp
        export /srv/bkp
        path /mnt/pve/bkp
        server nfs
        content backup
        maxfiles 5
        options vers=4

[node1 ~]#
[node1 ~]# rpcinfo -p nfs
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  36567  status
    100024    1   tcp  36763  status
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049
    100021    1   udp  39868  nlockmgr
    100021    3   udp  39868  nlockmgr
    100021    4   udp  39868  nlockmgr
    100021    1   tcp  36237  nlockmgr
    100021    3   tcp  36237  nlockmgr
    100021    4   tcp  36237  nlockmgr
[node1 ~]#
[node1 ~]# showmount -e nfs
rpc mount export: RPC: Unable to receive; errno = No route to host
[node1 ~]#
[node1 ~]# mount -t nfs
nfs:/srv/bkp on /mnt/pve/bkp type nfs (rw,relatime,vers=3,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.1.0.105,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.1.0.105)
[node1 ~]#

The output from node 2 looks exactly the same, so I won't paste it here.

Node 3 doesn't have 'bkp' mounted:

Code:
[node3 ~]#  ping -c1 nfs
PING nfs (10.1.0.105) 56(84) bytes of data.
64 bytes from nfs (10.1.0.105): icmp_seq=1 ttl=64 time=0.414 ms

--- nfs ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms
[node3 ~]#
[node3 ~]# grep -A 7 bkp /etc/pve/storage.cfg
nfs: bkp
        export /srv/bkp
        path /mnt/pve/bkp
        server nfs
        content backup
        maxfiles 5
        options vers=4

[node3 ~]#
[node3 ~]# rpcinfo -p nfs
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  36567  status
    100024    1   tcp  36763  status
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049
    100021    1   udp  39868  nlockmgr
    100021    3   udp  39868  nlockmgr
    100021    4   udp  39868  nlockmgr
    100021    1   tcp  36237  nlockmgr
    100021    3   tcp  36237  nlockmgr
    100021    4   tcp  36237  nlockmgr
[node3 ~]#
[node3 ~]# showmount -e nfs
rpc mount export: RPC: Unable to receive; errno = No route to host
[node3 ~]#
[node3 ~]# mount -t nfs
[node3 ~]#

Thank you for any help.
 
Hi,

the problem seems to be that your NFS server is not answering the RPC request, but that request is exactly what we use to check whether the storage is online.

I know there are some vendors out there whose storage stops answering RPC when it gets under load, and some even disable it, possibly through an update.
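
You can reproduce that check from the shell to see exactly where it breaks. A minimal sketch (the exact probe Proxmox runs may differ between versions): your rpcinfo -p already works, so rpcbind on port 111 is reachable, and the question is whether mountd itself answers.

Code:
showmount -e nfs              # asks rpcbind for mountd's port, then queries mountd
rpcinfo -T tcp nfs mountd 3   # null-call to mountd v3 (port 20048 in your rpcinfo output)
rpcinfo -T tcp nfs nfs 3      # null-call to the NFS service itself on port 2049

If rpcinfo -p succeeds but the direct calls to mountd fail with the same "No route to host" error, the portmapper port is open while mountd's port is being rejected, which points at a firewall.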
 
I've finally found the cause: it was firewalld. I had added the nfs and rpc-bind services, but there is one more that needs to be added, and that's the mountd service. Since adding it, everything has worked well.

Code:
# allow the NFS-related services through firewalld (runtime change)
firewall-cmd --zone work --add-service=nfs
firewall-cmd --zone work --add-service=mountd
firewall-cmd --zone work --add-service=rpc-bind
# persist the runtime configuration
firewall-cmd --runtime-to-permanent
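
To verify (assuming the same zone and hostnames as above), you can list the services enabled in the zone and re-run the query that used to fail:

Code:
firewall-cmd --zone work --list-services   # should now include nfs, mountd and rpc-bind
showmount -e nfs                           # should list /srv/bkp instead of erroring out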
 
