NFS storage with status unknown

m3a2r1

I've added NFS storage to PVE, but in the server view it's shown with a question mark and status "unknown". When I click on it, it shows me everything correctly: VM disks, CT volumes, ISO images. So it shouldn't show status unknown, but it does. Why?
 
I've noticed that it changes randomly: when I add the same NFS share a few times and restart PVE, after the restart I have some disabled shares which I can still read from and write to.
 
That's odd. Which version of PVE are you running?
Can you give a little more information about your setup?
 
PVE is the newest, 6.3-3. I've tested it on 2 nodes and it's the same situation; I think the problem is with FreeNAS. Other NFS shares work fine.
 
Could you please copy & paste the output of the following commands?
Code:
cat /etc/pve/storage.cfg
pvesm status
mount | grep nfs
 

The storage with the exclamation mark is wb-storage-nfs.
The IP of wb-storage-nfs is 172.16.1.88, so freenas = wb-storage-nfs (same server). Only one of the two shows the exclamation mark, and I can read from and write to both.

Code:
root@proxmox:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: VMs
        path /mnt/pve/VMs
        content backup,snippets,rootdir,vztmpl,images,iso
        is_mountpoint 1
        nodes proxmox

lvm: raid
        vgname pve
        content rootdir,images
        shared 0

lvm: NAS
        vgname freenas
        content rootdir,images
        shared 0

nfs: freenas
        export /mnt/NAS/NFS
        path /mnt/pve/freenas
        server 172.16.1.88
        content iso,images,vztmpl,snippets,rootdir,backup
        options vers=3
        prune-backups keep-last=1

nfs: STOR2
        export /mnt/NAS/NFS
        path /mnt/pve/STOR2
        server 172.16.1.115
        content images,iso,backup,snippets,rootdir
        prune-backups keep-last=5

nfs: wb-storage-nfs
        export /mnt/NAS/NFS
        path /mnt/pve/wb-storage-nfs
        server wb-storage
        content vztmpl,backup,snippets,rootdir,iso,images
        options vers=3
        prune-backups keep-all=1

Code:
root@proxmox:~# pvesm status
Name                  Type     Status           Total            Used       Available        %
NAS                    lvm     active     36335235072     36335235072               0  100.00%
STOR2                  nfs     active     18930489728      5728724224     13201765504   30.26%
VMs                    dir     active     10402030284       500182436      9377543520    4.81%
freenas                nfs     active     32215149184      4406821248     27808327936   13.68%
local                  dir     active        15158232         5036040         9332484   33.22%
local-lvm          lvmthin     active        29356032               0        29356032    0.00%
raid                   lvm     active        62386176        54652928         7733248   87.60%
wb-storage-nfs         nfs   inactive               0               0               0    0.00%

The last line in the mount output below is a share I already deleted; Proxmox didn't unmount it.
Code:
root@proxmox:~# mount | grep nfs
172.16.1.115:/mnt/NAS/NFS on /mnt/pve/STOR2 type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.1.115,mountvers=3,mountport=630,mountproto=udp,local_lock=none,addr=172.16.1.115)
172.16.1.88:/mnt/NAS/NFS on /mnt/pve/freenas type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.1.88,mountvers=3,mountport=626,mountproto=udp,local_lock=none,addr=172.16.1.88)
172.16.1.88:/mnt/NAS/NFS on /mnt/pve/wb-storage-nfs type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.1.88,mountvers=3,mountport=626,mountproto=udp,local_lock=none,addr=172.16.1.88)
172.16.1.88:/mnt/NAS/NFS on /mnt/pve/wb-storage type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.1.88,mountvers=3,mountport=626,mountproto=udp,local_lock=none,addr=172.16.1.88)
 
First of all, you can just unmount the deleted share /mnt/pve/wb-storage manually.
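
For example (a minimal sketch, path taken from the mount output above), cleaning up that leftover mount looks like this:
Code:
# unmount the stale mount left over from the deleted storage entry
umount /mnt/pve/wb-storage
# if it reports "target is busy", check what still uses it first:
# fuser -vm /mnt/pve/wb-storage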

What happens if you change the server property of wb-storage-nfs from wb-storage to the respective IP address?
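
For example, assuming the mapping mentioned above (wb-storage = 172.16.1.88), the entry in /etc/pve/storage.cfg would then read:
Code:
nfs: wb-storage-nfs
        export /mnt/NAS/NFS
        path /mnt/pve/wb-storage-nfs
        server 172.16.1.88
        content vztmpl,backup,snippets,rootdir,iso,images
        options vers=3
        prune-backups keep-all=1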
 
First of all, you can just unmount the deleted share /mnt/pve/wb-storage manually.
I know; NFS shares are mounted under /mnt/pve.

What happens if you change the server property of wb-storage-nfs from wb-storage to the respective IP address?
The storage then changes its status to active. That IP is in /etc/hosts, so the hostname should work too.
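
For completeness, the relevant pieces look roughly like this (hostname and IP taken from the posts above; the exact /etc/hosts line is an assumption):
Code:
# /etc/hosts on the PVE node
172.16.1.88    wb-storage
# check how the node actually resolves the name used in storage.cfg
getent hosts wb-storage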
 
I'm testing on another node. I've added the same share three times using the DNS name with the local domain (not the IP), and it works.
When I add a fourth share using the DNS name without the local domain, it shows status unknown. When I add the domain back, it works again.
I don't understand it at all.
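
If it helps, whether the short name resolves usually depends on the node's search domain, which can be checked like this (the share name below is just a placeholder):
Code:
# short name vs. fully qualified name, as the PVE node sees them
getent hosts mynas
getent hosts mynas.mydomain.local
# the search domain applied to short names comes from here
cat /etc/resolv.conf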
 
Hello!

Same here. I added an NFS share and the status is unknown.

cat /etc/pve/storage.cfg

Code:
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

zfspool: VMs
        pool r10-1TB/VMs
        content rootdir,images
        mountpoint /r10-1TB/VMs

dir: ISOs
        path /r0-500GB/ISOs
        content iso
        prune-backups keep-all=1
        shared 0

nfs: verified-ISOs
        export /volume1/proxmox/ISOs/verified/
        path /mnt/pve/verified-ISOs
        server 172.25.20.18
        content iso
        prune-backups keep-all=1

nfs: unverified-ISOs
        export /volume1/proxmox/ISOs/unverified/
        path /mnt/pve/unverified-ISOs
        server 172.25.20.18
        content iso
        prune-backups keep-all=1

pvesm status

Code:
Name                   Type     Status           Total            Used       Available        %
ISOs                    dir     active       471334272             128       471334144    0.00%
VMs                 zfspool     active      1885338868             192      1885338676    0.00%
local                   dir     active       202142336         1496192       200646144    0.74%
local-zfs           zfspool     active       200646336              96       200646240    0.00%
unverified-ISOs         nfs   inactive               0               0               0    0.00%
verified-ISOs           nfs   inactive               0               0               0    0.00%


mount | grep nfs

Code:
172.25.20.18:/volume1/proxmox/ISOs/unverified on /mnt/pve/unverified-ISOs type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.25.20.201,local_lock=none,addr=172.25.20.18)
172.25.20.18:/volume1/proxmox/ISOs/verified on /mnt/pve/verified-ISOs type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.25.20.201,local_lock=none,addr=172.25.20.18)

How can I make it active? This is a fresh install. I even tried the local domain name, but that doesn't help either. Before this install it just worked. I can read from and write to the share.
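
In case it helps with diagnosis, these checks (run from the PVE node, IP taken from the config above) show whether the NFS service is reachable at all; I'm not sure which probe PVE itself uses for the status:
Code:
# is the NFS service reachable from the node?
rpcinfo -p 172.25.20.18        # lists registered RPC services (mountd, nfs)
showmount -e 172.25.20.18      # lists exports; may fail on NFSv4-only servers
# status of a single storage entry
pvesm status --storage verified-ISOs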

Thanks for any help!
 