Hi!
I have a small Proxmox cluster with 2 nodes running 6.3. Over the weekend I installed the current updates that the web GUI offered.
I rebooted node 1, but after booting back up the host no longer connects to the NFS share on my Synology NAS... why is that??
Node 2 was not rebooted, and there the NFS share is still "online". Will it also go offline when I reboot that host?
node 1 (pve1) = 192.168.1.3
node 2 (pve3) = 192.168.1.6
I googled a lot, but I cannot find any solution:
Code:
root@pve1:~# showmount -e 192.168.1.13
rpc mount export: RPC: Unable to receive; errno = No route to host
Code:
root@pve3:~# showmount -e 192.168.1.13
Export list for 192.168.1.13:
/volume1/vmbackup 192.166.1.3,192.168.1.6
However, I can ping the NFS server from both nodes, but "rpcinfo -p 192.168.1.13" only works on node 2:
Code:
root@pve1:~# rpcinfo -p 192.168.1.13
192.168.1.13: RPC: Remote system error - No route to host
Code:
root@pve3:~# rpcinfo -p 192.168.1.13
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100021    1   udp  54715  nlockmgr
    100021    3   udp  54715  nlockmgr
    100021    4   udp  54715  nlockmgr
    100021    1   tcp  46220  nlockmgr
    100021    3   tcp  46220  nlockmgr
    100021    4   tcp  46220  nlockmgr
    100024    1   udp  60464  status
    100024    1   tcp  35473  status
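Since ping works but both showmount and rpcinfo fail with "No route to host", my guess is that something is rejecting the connection to the portmapper on the way from pve1. A plain TCP check like this (just netcat, nothing Proxmox-specific, port numbers taken from the rpcinfo output above) should show whether the ports are reachable from pve1 at all:
Code:
root@pve1:~# nc -vz -w 3 192.168.1.13 111
root@pve1:~# nc -vz -w 3 192.168.1.13 2049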
What's the problem here? I cannot find any difference in the network configuration of the two nodes, and all hosts are in the same subnet on the same switch.
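For reference, this is roughly what I compared on both nodes (same commands on pve1 and pve3, output omitted since I could not spot any difference):
Code:
root@pve1:~# ip addr
root@pve1:~# ip route
root@pve1:~# cat /etc/network/interfaces
root@pve1:~# pve-firewall status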