[SOLVED] NFS mount read-only on one node?

Nov 30, 2020
Hello
I have a 3-node cluster and added NFS storage to store some backups. The third node was recently added and does not have a valid subscription yet.
But I do not understand why the NFS storage (mounted as rw) is read-only on the new node. It works perfectly on the two other nodes. Do I have to add a subscription now and update the node?
 
Hi,
no, this should not depend on the subscription status. Is the output of findmnt for the NFS storage the same on all nodes (except for the IP addresses, of course)? Is the NFS export configured to allow write access from the new node's IP?
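For example, something like the following (a rough sketch; the storage ID is a placeholder, Proxmox VE mounts NFS storages under /mnt/pve/<storage-id>):
Bash:
# on each node: show the source, filesystem type and mount options of the storage mountpoint
findmnt /mnt/pve/<storage-id>
# on the NFS server: list the active exports with the effective per-client options
exportfs -v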
 
Thanks for your reply.
Here is the output on the new node:
Bash:
/mnt/pve/backup-1  xxx.xxx.xx.xx:/datastore/disk1/share    nfs4    rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=yyy.yyy.y.y,local_lock=none,addr=xxx.xxx.xx.xx
Old node:
Bash:
/mnt/pve/backup-1  xxx.xxx.xx.xx:/datastore/disk1/share  nfs4  rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=zz.zz.zz.zzz,local_lock=none,addr=xxx.xxx.xx.xx
And the exports file (the new node is ccc.ccc.c.c):
Bash:
/datastore/disk1/share    aa.aaa.aaa.a(rw,sync,no_subtree_check) bb.bb.b.bbb(rw,sync,no_subtree_check) ccc.ccc.c.c (rw,sync,no_subtree_check)
 
In the exports file, there seems to be a space after the IP of the last host (maybe it just slipped in when masking out the IP)? Has the NFS server been restarted to load the new configuration?
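If that space is really there, it changes the meaning of the line: in /etc/exports the options in parentheses have to follow the host without a space, otherwise they apply to the default (anonymous) host and ccc.ccc.c.c gets exported with the default options, which are read-only. A corrected line would look roughly like this (IPs masked as in your post):
Bash:
/datastore/disk1/share    aa.aaa.aaa.a(rw,sync,no_subtree_check) bb.bb.b.bbb(rw,sync,no_subtree_check) ccc.ccc.c.c(rw,sync,no_subtree_check)
After editing the file, reload the exports on the NFS server with exportfs -ra (or restart the NFS service) and check again from the new node.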