NFS storage mount shows up as inactive on some members

Tobbe

Member
Oct 4, 2021
Hello.

I have a PVE cluster with 3 nodes; one of them has an NFS export that is set up as shared storage for backups in the cluster.
On the node with the export it shows up correctly (normal icon) and I can see disk usage over time and so on.

On all the other members of the cluster the storage also gets mounted, but the icon shows up with a question mark and "pvesm status" reports this storage as "inactive".
The NFS share gets mounted correctly at boot on each node and can be used just fine.
So why do 2 of my 3 nodes show the NFS storage as inactive when it is clearly active and usable?
 
At the very least you need to provide:
1) contents of /etc/pve/storage.cfg
2) output of "pvesm status" from each node
3) journalctl output during the probes, or at any relevant time, on the nodes where the storage is not active
4) an explanation of what exactly is meant by "mounts on boot and works fine" - are you not using PVE NFS storage and just defining the location as directory storage?

You should include as much relevant information as possible so others can help.


 
Relevant entry from storage.cfg:
Code:
nfs: backup
    export /mnt/data/backup/proxmox
    path /mnt/pve/backup
    server 192.168.xx.yy
    content backup
    prune-backups keep-all=1

From the node with the NFS export:
Code:
# pvesm status
Name             Type     Status           Total            Used       Available        %
backup            nfs     active     12296099840      8263827456      4032256000   67.21%

From another node in the same cluster:
Code:
# pvesm status
Name             Type     Status           Total            Used       Available        %
backup            nfs   inactive               0               0               0    0.00%
This node still, despite showing up as inactive, gets the storage mounted during boot by Proxmox itself, and backups to this "inactive" storage work.
The web interface can see and browse the storage on each node just fine, but only the node marked as active can see the disk usage.

And no, as you can see above, it is set up as NFS storage, not as a directory.
 
Unfortunately you omitted N3 logs, which are generally one of the most important parts of troubleshooting the system.
Based on everything you have presented so far, my guess is that a firewall on your NFS host is blocking at least one NFS RPC service that PVE uses to health-check the NFS storage beyond the initial mount. That would explain why the first host works: local traffic is not affected by the firewall.
This is just a guess based on the available information.
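
If you want to check that by hand, the commands below approximate the kind of availability probe PVE performs for NFS storage (the exact probe depends on the PVE release and NFS protocol version, so treat this as a sketch). Run them from a node that reports the storage as inactive, against the server from storage.cfg:
Code:
# Query the export list via the mountd RPC service (must answer promptly):
/sbin/showmount --no-headers --exports 192.168.xx.yy

# Check that the nfs RPC program responds over TCP (NFSv4-style check):
rpcinfo -T tcp 192.168.xx.yy nfs 4

If either command hangs or fails only on the remote nodes, something is filtering RPC traffic between those nodes and the NFS server.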


 
As I said, the NFS export gets correctly mounted by Proxmox itself on all members of the cluster and works, including doing backups to it via Proxmox on all nodes.
There is no firewalling done between them, and I do not mount this myself via fstab or similar.

I'm not sure what you mean by N3.

I've gone through the system journal, and the only relevant thing is occasional output from pvestatd saying the storage is not online.
Code:
pvestatd[2110]: storage 'backup' is not online
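
(For reference, these messages come from the pvestatd unit, so they can be pulled from the systemd journal directly; a minimal example:)
Code:
journalctl -u pvestatd | grep 'not online'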


And the problem I have is mainly cosmetic, not functional in any way.
As I said, the storage shows as inactive in pvesm, but I can still see the storage under each node, including its contents; the disk utilization graphs, however, are not working on any node where pvesm shows it as inactive.
 
Your NFS server is not responding to pvestatd on the remote nodes in the allotted time, or at all.
You can take a network trace to troubleshoot (see the capture example after the commands below).
You may be able to get some more data from:
a) pvestatd stop && pvestatd start --debug 1
b) pvestatd stop && perl -MCarp::Always /usr/bin/pvestatd start --debug 1
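
For the network trace, something along these lines (the interface name is just an example) captures the relevant RPC and NFS traffic while pvestatd probes the storage:
Code:
# rpcbind (111) and nfs (2049) traffic between this node and the NFS server;
# on NFSv3 the mountd service may use an additional dynamic port:
tcpdump -i eth0 -w /tmp/nfs-probe.pcap 'host 192.168.xx.yy and (port 111 or port 2049)'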


 
I think I've found the cause, but it is still odd.

As you can see above, the path on the NFS server itself is /mnt/data/backup/proxmox.
The exported folder on this server is actually /mnt/data.
So the NFS storage mount points to a subdirectory where I keep my backups.

So far so good.
BUT I also mount the root of the export, /mnt/data, directly onto another folder for other purposes.
So there are two mounts of the same export: one for backup purposes for use via Proxmox, and another of the same NFS export but onto a different folder (mounted via fstab).

This is what gets Proxmox confused and makes the storage show up with a question mark.
This also explains why it works on the server that hosts the NFS export itself, since there is no extra mount there.
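
For illustration, the mount table on an affected node would then look roughly like this (the second mount point name is hypothetical, as it wasn't given; abbreviated output):
Code:
# findmnt -t nfs,nfs4   (abbreviated)
TARGET            SOURCE
/mnt/data-other   192.168.xx.yy:/mnt/data
/mnt/pve/backup   192.168.xx.yy:/mnt/data/backup/proxmox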

So then my question is this:
Why can I not mount the same NFS export twice (to different subfolders) without getting Proxmox confused?
 
I'm not sure why there would be an issue, but in my experience with mounting NFS (and most shared file systems, e.g. AFP and SMB), mounting a parent directory and a child directory separately with the same user account often causes problems, even on Windows Server or a simple workstation. My advice is to move your mounts to the same "tier" of the file system hierarchy, or under a completely separate parent:

/a/b/c
/a/d/e

Or

/a/b/c
/d/e/f
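
Applied to the paths in this thread, that could mean mounting only the subtree you actually need via fstab instead of the export root, so neither mount is a parent of the other (the subdirectory and mount point names are hypothetical):
Code:
# /etc/fstab - mount a specific subtree instead of the export root /mnt/data,
# so it no longer overlaps the PVE-managed mount at /mnt/pve/backup:
192.168.xx.yy:/mnt/data/other  /mnt/data-other  nfs  defaults  0 0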

Cheers,

Tmanok
 
I have the same issue here. In my case it was related to the PVE node ending up using the wrong IP address for accessing the NFS server. I'm using a dedicated subnet for NFS traffic to the server, but for an unknown reason the other IP address was being used. Downgrading to NFSv4.1 resolved the issue.

See my post in the thread https://forum.proxmox.com/threads/nfs-mounts-using-wrong-source-ip-interface.70754/.
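
If someone wants to try the same workaround, the NFS version can be pinned in storage.cfg via the options field (passed through to mount); reusing the configuration from earlier in this thread:
Code:
nfs: backup
    export /mnt/data/backup/proxmox
    path /mnt/pve/backup
    server 192.168.xx.yy
    content backup
    options vers=4.1
    prune-backups keep-all=1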
 
