NFS Share suddenly stopped being recognized by PVE

Jun 1, 2022
I had an NFS share to my QNAP all set up and working fine. However, all of a sudden I can't see any of its details in PVE. It keeps giving me the error:

Code:
mkdir /mnt/pve/nas_storage/private: Read-only file system at /usr/share/perl5/PVE/Storage/Plugin.pm line 1322. (500)

I don't know what I did or how to fix it and I'm hoping someone can help out. Strangely, the container I am currently using this share for (Plex) still has the share mounted and I can access it no problem from the container. I just can't access it from PVE for any backups or ISOs or anything.

Can someone help me get this fixed? Thank you in advance!
 
Could be:
https://pve.proxmox.com/wiki/Roadmap
Code:
Known Issues

QNAP NFS shares could not be mounted with NFS version 4.1 on kernel pve-kernel-5.15.30-2-pve - the issue has been mitigated in kernels pve-kernel-5.15.35-2 and above.
If your QNAP NFS share cannot be mounted, upgrade the kernel and reboot.
As an alternative mitigation you can explicitly set the NFS version to 4 (not 4.1 or auto).
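
If it does turn out to be this issue, the NFS version can be pinned through the storage's mount options. A minimal sketch, reusing the storage name, server, and export that show up later in this thread - adjust everything to your actual setup:

Code:
# /etc/pve/storage.cfg - force NFSv4 (not 4.1/auto) for the share
nfs: nas_storage
        server 192.168.1.51
        export /Media
        path /mnt/pve/nas_storage
        content backup,iso
        options vers=4

The same thing can be set from the CLI with "pvesm set nas_storage --options vers=4".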

Or it's possible the mount really went r/o due to some condition. When you run "df", do you see it mounted at /mnt/pve/[storagename]?
Can you "cd" there and run "touch testfile; rm testfile"?
What does "mount|egrep [storagename]" show?


 
This might be the issue. I did try to set the NFS version to 4 explicitly, but that didn't seem to make a difference.

I seem to be on 5.15.35-1, but I'm not sure how to get to 35-2. When I try to install, I get:

Code:
root@prox:~# uname -r
5.15.35-1-pve
root@prox:~# apt install pve-kernel-5.15
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
pve-kernel-5.15 is already the newest version (7.2-3).
pve-kernel-5.15 set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
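
For reference, this is roughly how I've been checking what's installed and trying to pull in a newer kernel (newer ABIs may simply not be in the configured repository yet):

Code:
# list the installed pve-kernel packages and their versions
dpkg -l 'pve-kernel-5.15*'
# refresh package lists and pull in any newer kernel ABI
apt update && apt full-upgrade
# a reboot is needed to actually run the new kernel
reboot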

Hmmm, it does seem to be r/o from PVE... not sure how it got that way. When I try to touch a file from the PVE console, I get the following:

Code:
root@prox:/mnt/pve/nas_storage# touch testfile
touch: cannot touch 'testfile': Read-only file system

However, if I run the same thing from the console in the Plex LXC, I can create and delete the testfile.


The command you suggested produces:

Code:
root@prox:/mnt/pve/nas_storage# mount | egrep nas_storage
192.168.1.51:/Media on /mnt/pve/nas_storage type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.90,local_lock=none,addr=192.168.1.51)
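
Interestingly, the mount itself is reported as rw and vers=4.2, which would point at the server side rather than the local mount options. The effective options can also be double-checked with nfsstat (part of nfs-common):

Code:
# show the effective mount options for every NFS mount on this host
nfsstat -m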
 
You can examine your logs to find out when the mount went R/O: journalctl -b . That may help you understand why.
Generally, there could be many reasons; Google "nfs read only" or similar.
As for the container: you are most likely mounting the share there directly rather than via a bind-mount from the host, which is why it still works.
You can try to force a remount: systemctl try-reload-or-restart pvedaemon pveproxy pvestatd
Or it may require a reboot.
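
To make that concrete, something along these lines (the grep pattern and the container mount point are just illustrative):

Code:
# find out when and why the share was flipped to read-only
journalctl -b | egrep -i 'nas_storage|read-only|nfs'

# a bind-mount from the host into an LXC would look like this in /etc/pve/lxc/<vmid>.conf
# (if the NFS share is instead mounted directly inside the container, no such line exists)
mp0: /mnt/pve/nas_storage,mp=/mnt/media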


 
I'm an idiot. I checked the NFS host permissions on the QNAP again and my Proxmox node wasn't added as R/W. I have been moving stuff around, so IPs must have gotten mixed up. Fixed now. Thanks for your patience.
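
In case anyone else ends up here: you can sanity-check which clients the NAS actually allows straight from the PVE host (showmount is in the nfs-common package), e.g.:

Code:
# list the exports and the hosts/networks allowed to use them
showmount -e 192.168.1.51
# and confirm the node's current address matches what the NAS export expects
ip -4 addr show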
 
