Hello all,
I know this issue has been covered several times on this forum, but my symptoms are a bit different.
I've been using NFS storage for a few months, but now the storage shows as offline in the Proxmox GUI, although on two of the three nodes it is mounted and accessible when checked from the command line.
Here is some more info. It all looks similar on all three nodes, except that node 3 doesn't have the NFS share mounted like the other two.
Code:
[node1~]# ping -c1 nfs
PING nfs (10.1.0.105) 56(84) bytes of data.
64 bytes from nfs (10.1.0.105): icmp_seq=1 ttl=64 time=0.369 ms
--- nfs ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms
[node1 ~]#
[node1 ~]# df -h /mnt/pve/bkp
Filesystem Size Used Avail Use% Mounted on
nfs:/srv/bkp 465G 312G 154G 67% /mnt/pve/bkp
[node1 ~]#
[node1 ~]# ls /mnt/pve/bkp/
vzdump-qemu-100-2018_07_25-05_30_02.log
vzdump-qemu-100-2018_07_25-05_30_02.vma.lzo
vzdump-qemu-100-2018_07_26-05_30_02.log
vzdump-qemu-100-2018_07_26-05_30_02.vma.lzo
vzdump-qemu-100-2018_07_27-05_30_02.log
vzdump-qemu-100-2018_07_27-05_30_02.vma.lzo
vzdump-qemu-100-2018_07_30-05_30_02.log
vzdump-qemu-100-2018_07_30-05_30_02.vma.lzo
vzdump-qemu-100-2018_07_31-05_30_02.log
vzdump-qemu-100-2018_07_31-05_30_02.vma.lzo
vzdump-qemu-146-2018_05_21-11_39_02.log
vzdump-qemu-146-2018_05_21-11_39_02.vma.lzo
vzdump-qemu-153-2018_07_25-05_30_02.log
vzdump-qemu-153-2018_07_25-05_30_02.vma.lzo
vzdump-qemu-153-2018_07_26-05_30_02.log
vzdump-qemu-153-2018_07_26-05_30_02.vma.lzo
vzdump-qemu-153-2018_07_27-05_30_02.log
vzdump-qemu-153-2018_07_27-05_30_02.vma.lzo
vzdump-qemu-153-2018_07_30-05_30_02.log
vzdump-qemu-153-2018_07_30-05_30_02.vma.lzo
vzdump-qemu-153-2018_07_31-05_30_02.log
vzdump-qemu-153-2018_07_31-05_30_02.vma.lzo
[node1 ~]#
[node1 ~]# grep -A 7 bkp /etc/pve/storage.cfg
nfs: bkp
export /srv/bkp
path /mnt/pve/bkp
server nfs
content backup
maxfiles 5
options vers=4
[node1 ~]#
[node1 ~]# rpcinfo -p nfs
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 36567 status
100024 1 tcp 36763 status
100005 1 udp 20048 mountd
100005 1 tcp 20048 mountd
100005 2 udp 20048 mountd
100005 2 tcp 20048 mountd
100005 3 udp 20048 mountd
100005 3 tcp 20048 mountd
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 3 udp 2049
100021 1 udp 39868 nlockmgr
100021 3 udp 39868 nlockmgr
100021 4 udp 39868 nlockmgr
100021 1 tcp 36237 nlockmgr
100021 3 tcp 36237 nlockmgr
100021 4 tcp 36237 nlockmgr
[node1 ~]#
[node1 ~]# showmount -e nfs
rpc mount export: RPC: Unable to receive; errno = No route to host
[node1 ~]#
[node1 ~]# mount -t nfs
nfs:/srv/bkp on /mnt/pve/bkp type nfs (rw,relatime,vers=3,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.1.0.105,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.1.0.105)
[node1 ~]#
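One thing I noticed in the output above: storage.cfg has `options vers=4`, but the actual mount on node 1 negotiated vers=3. For reference, this is how I pulled the negotiated version out of the mount options (just a sketch operating on the line pasted above, trimmed for readability):

```shell
# Extract the negotiated NFS version from the mount options.
# The sample line is copied (shortened) from the `mount -t nfs` output above.
line='nfs:/srv/bkp on /mnt/pve/bkp type nfs (rw,relatime,vers=3,rsize=262144,wsize=262144,proto=tcp)'

# First "vers=" match is the protocol version ("mountvers" would match later).
vers=$(printf '%s\n' "$line" | grep -o 'vers=[0-9.]*' | head -n 1 | cut -d= -f2)
echo "$vers"   # → 3
```

So the client fell back to (or was mounted with) NFSv3 despite the vers=4 option, in case that matters.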
Output from node 2 looks exactly the same, so I won't paste it here.
Node 3 doesn't have 'bkp' mounted:
Code:
[node3 ~]# ping -c1 nfs
PING nfs (10.1.0.105) 56(84) bytes of data.
64 bytes from nfs (10.1.0.105): icmp_seq=1 ttl=64 time=0.414 ms
--- nfs ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms
[node3 ~]#
[node3 ~]# grep -A 7 bkp /etc/pve/storage.cfg
nfs: bkp
export /srv/bkp
path /mnt/pve/bkp
server nfs
content backup
maxfiles 5
options vers=4
[node3 ~]#
[node3 ~]# rpcinfo -p nfs
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 36567 status
100024 1 tcp 36763 status
100005 1 udp 20048 mountd
100005 1 tcp 20048 mountd
100005 2 udp 20048 mountd
100005 2 tcp 20048 mountd
100005 3 udp 20048 mountd
100005 3 tcp 20048 mountd
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 3 udp 2049
100021 1 udp 39868 nlockmgr
100021 3 udp 39868 nlockmgr
100021 4 udp 39868 nlockmgr
100021 1 tcp 36237 nlockmgr
100021 3 tcp 36237 nlockmgr
100021 4 tcp 36237 nlockmgr
[node3 ~]#
[node3 ~]# showmount -e nfs
rpc mount export: RPC: Unable to receive; errno = No route to host
[node3 ~]#
[node3 ~]# mount -t nfs
[node3 ~]#
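The showmount failure ("No route to host") on all nodes, even while portmapper replies, made me wonder whether the mountd port is blocked somewhere. For reference, this is how I picked mountd's TCP port out of the rpcinfo output (a sketch operating on a slice of the sample pasted above):

```shell
# Find mountd's TCP port in `rpcinfo -p` output.
# The sample is a slice of the rpcinfo output pasted above.
rpcinfo_sample='100005 3 udp 20048 mountd
100005 3 tcp 20048 mountd
100003 3 tcp 2049 nfs'

# Column 3 is the protocol, column 4 the port, column 5 the service name.
port=$(printf '%s\n' "$rpcinfo_sample" | awk '$3 == "tcp" && $5 == "mountd" { print $4; exit }')
echo "$port"   # → 20048
```

So mountd should be on 20048/tcp; I haven't yet verified whether that port is actually reachable from the nodes.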
Thank you for any help.