NFS storage not working, but mounting from the shell works

abkrim

On one of my Proxmox systems, NFS storage does not work natively.

If I mount the shares via /etc/fstab instead, they work perfectly:

Code:
cat /etc/fstab
# <file system>    <mount point>    <type>    <options>    <dump>    <pass>
/dev/sda1    /    ext4    errors=remount-ro    0    1
/dev/sda2    swap    swap    defaults    0    0
proc            /proc   proc    defaults        0       0
sysfs           /sys    sysfs   defaults        0       0
mynfsserver:/srv/storage/backup/templates /mnt/pve/templates nfs bg,hard,timeo=1200,rsize=1048576,wsize=1048576 0 0
mynfsserver:/srv/storage/backup/pro16 /mnt/pve/backupremote nfs bg,hard,timeo=1200,rsize=1048576,wsize=1048576 0 0

root@pro01:~# df -h
Filesystem                                         Size  Used Avail Use% Mounted on
udev                                                10M     0   10M   0% /dev
tmpfs                                               26G  770M   25G   3% /run
/dev/sda1                                           20G  2.8G   16G  15% /
tmpfs                                               63G   40M   63G   1% /dev/shm
tmpfs                                              5.0M     0  5.0M   0% /run/lock
tmpfs                                               63G     0   63G   0% /sys/fs/cgroup
tmpfs                                               13G     0   13G   0% /run/user/0
/dev/fuse                                           30M   28K   30M   1% /etc/pve
mynfsserver:/srv/storage/backup/templates   30T   19T  9.4T  67% /mnt/pve/templates
mynfsserver:/srv/storage/backup/pro16      30T   19T  9.4T  67% /mnt/pve/backupremote

In my storage.cfg:
Code:
cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content rootdir,images,vztmpl,iso
    maxfiles 0

nfs: backupremote
    export /srv/storage/backup/pro16
    path /mnt/pve/backupremote
    server mynfsserver
    content backup
    maxfiles 1
    options vers=3

nfs: templates
    export /srv/storage/backup/templates
    path /mnt/pve/templates
    server mynfsserver
    content images,vztmpl,iso
    maxfiles 0
    nodes pro01
    options vers=3

lvm: lvm
    vgname lvm
    content rootdir,images
    shared 1

If I try pvesm...
Code:
 pvesm nfsscan IP_or_FDQN_of_NFS_SERVER
clnt_create: RPC: Port mapper failure - Unable to receive: errno 111 (Connection refused)
command '/sbin/showmount --no-headers --exports IP_or_FDQN_of_NFS_SERVER' failed: exit code 1

The server's IP is of course open on my firewall.
 
This is most probably because the rpcbind service on your NFS server is not running.
IIRC this service is needed for showing the list of exports.

By setting the exports directly in /etc/fstab you bypass the need for export discovery, but the storage will probably be marked as offline in the status report, since we use the very same check for testing storage availability.

You can check whether rpcbind is active on your NFS server
(111 is the rpcbind port, and -sU performs a UDP port scan):

nmap -p 111 -sU nfs_server

in my testlab it reports

Nmap scan report for ibsd.local (192.168.16.24)
Host is up (0.00014s latency).
PORT STATE SERVICE
111/udp open rpcbind

If the state is closed, check the status of the rpcbind service on the host.
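If nmap is not at hand, rpcinfo can query the portmapper directly from the client. A minimal sketch, where `check_portmapper` is a hypothetical helper (not a standard tool) that scans `rpcinfo -p` output for the portmapper itself:

```shell
# Hypothetical helper: read "rpcinfo -p <host>" output on stdin and
# exit 0 only if the portmapper (RPC program number 100000) is listed.
check_portmapper() {
  awk '$1 == 100000 { found = 1 } END { exit !found }'
}

# Usage against the NFS server (hostname is a placeholder):
#   rpcinfo -p nfs_server | check_portmapper && echo "rpcbind OK"
```

If rpcbind is down on the server, rpcinfo itself will fail with the same "RPC: Port mapper failure" message that pvesm printed above.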
 
Hi.

After I saw that rpcbind.service was stopped, I restarted it:

Code:
root@stor01 ~# systemctl |grep rpc
run-rpc_pipefs.mount                                                                      loaded active mounted   /run/rpc_pipefs
rpcbind.service                                                                           loaded active running   LSB: RPC portmapper replacement
rpcbind.target                                                                            loaded active active    RPC Port Mapper

Now Proxmox mounts the NFS partitions automatically, but it does not see their content and I get an error. See the attached images.

pro01_-_Proxmox_Virtual_Environment2.jpg pro01_-_Proxmox_Virtual_Environment.jpg
 
Hi Abkrim,

Did you see the error message? PVE could not mount the NFS share because you already have it mounted.
So:
* disable the NFS storage in PVE
* unmount the NFS share and remove the fstab entry
* reactivate the storage in PVE
Then you should have your NFS share working :)
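Those three steps can be sketched in shell. This is a minimal sketch, not an official procedure: the storage ID and mount point are the ones from this thread, `remove_fstab_entry` is a hypothetical helper, and `pvesm set --disable` is assumed to toggle the storage without deleting its configuration.

```shell
# Hypothetical helper: delete any fstab line mentioning the given
# mount point, keeping a backup copy of the file first.
remove_fstab_entry() {
  mountpoint=$1
  fstab=${2:-/etc/fstab}
  cp "$fstab" "$fstab.bak"
  sed -i "\\,$mountpoint,d" "$fstab"
}

# The steps themselves (commented out; run them one by one):
#   pvesm set backupremote --disable 1         # 1. disable the storage in PVE
#   umount /mnt/pve/backupremote               # 2. unmount the share...
#   remove_fstab_entry /mnt/pve/backupremote   #    ...and drop its fstab entry
#   pvesm set backupremote --disable 0         # 3. reactivate the storage
```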
 
Hi.

That was the first thing I tried.

But it still does not work.

  • Go to the Proxmox interface and deactivate the remote NFS storages
  • Go to the shell and verify there is no NFS entry left in fstab
    Code:
    root@pro01:~# cat /etc/fstab
    # <file system>    <mount point>    <type>    <options>    <dump>    <pass>
    /dev/sda1    /    ext4    errors=remount-ro    0    1
    /dev/sda2    swap    swap    defaults    0    0
    proc            /proc   proc    defaults        0       0
    sysfs           /sys    sysfs   defaults        0       0

  • Verify that no NFS share is mounted on the system
    Code:
    root@pro01:~# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev             10M     0   10M   0% /dev
    tmpfs            26G  890M   25G   4% /run
    /dev/sda1        20G  2.8G   16G  15% /
    tmpfs            63G   40M   63G   1% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs            63G     0   63G   0% /sys/fs/cgroup
    tmpfs            13G     0   13G   0% /run/user/0
    /dev/fuse        30M   28K   30M   1% /etc/pve

  • Go to the Proxmox interface to "activate" the NFS storages
  • Verify the status of rpcbind
    Code:
    nmap -p 111 -sU MYSTORNFS
    
    Starting Nmap 6.47 ( http://nmap.org ) at 2017-05-29 18:31 CEST
    Nmap scan report for MYSTORNFS (IP)
    Host is up (0.0042s latency).
    PORT    STATE SERVICE
    111/udp open  rpcbind
    
    Nmap done: 1 IP address (1 host up) scanned in 0.55 seconds

  • And the problem persists... Proxmox does not see the content of the mounted disks, and the storages show NOT MOUNTED (see attached screenshots pro01_deactivate_nfs.jpg and pro01_deactivate_nfs2.jpg)
 
> Go to the Proxmox interface to "activate" the NFS storages

After activating the storage, is the mount taking place?

i.e. what is the output of:

pvesm status

pvesm list templates

pvesm nfsscan nfs_server
 
Hi.

Yes, after activating it in Proxmox (clicking activate) and waiting several seconds, I can see the NFS mounted:
Code:
root@pro01:~# df -h
Filesystem                                         Size  Used Avail Use% Mounted on
udev                                                10M     0   10M   0% /dev
tmpfs                                               26G  890M   25G   4% /run
/dev/sda1                                           20G  2.8G   16G  16% /
tmpfs                                               63G   37M   63G   1% /dev/shm
tmpfs                                              5.0M     0  5.0M   0% /run/lock
tmpfs                                               63G     0   63G   0% /sys/fs/cgroup
tmpfs                                               13G     0   13G   0% /run/user/0
/dev/fuse                                           30M   28K   30M   1% /etc/pve
stor01:/srv/storage/backup/pro16       30T   20T  8.5T  71% /mnt/pve/backupremote
stor01:/srv/storage/backup/templates   30T   20T  8.5T  71% /mnt/pve/templates


root@pro01:~# pvesm status
mount error: mount.nfs: /mnt/pve/templates is busy or already mounted
mount error: mount.nfs: /mnt/pve/backupremote is busy or already mounted
backupremote    nfs 0               0               0               0 100.00%
local           dir 1        20026236         2867356        16118548 15.60%
lvm             lvm 1      1539141632      1328939008       210202624 86.84%
templates       nfs 0               0               0               0 100.00%

root@pro01:~# pvesm list templates
mount error: mount.nfs: /mnt/pve/templates is busy or already mounted

root@pro01:~# pvesm nfsscan stor01 |grep 5.135.138.123
/srv/storage/backup/templates    163.172.32.133,5.135.138.123,164.132.167.179,5.39.71.46,37.59.219.197
/srv/storage/backup/pro16        5.135.138.123,37.59.219.197

Best regards
 
Hum, it might be that the NFS mount point is not correctly detected, so the mount is retried and fails.

So please remove the storage from the command line (it will keep the content untouched), for instance:

pvesm remove backupremote

Stop the processes accessing the share (you can list them with fuser -uvm), then unmount the filesystem manually:
umount /mnt/pve/backupremote

Make sure the NFS mount point is no longer listed in /proc/mounts.

Then re-add the storage:
pvesm add nfs backupremote -export your_export_path -server store01 -content backup,rootdir,images

Then check pvesm status again.

If pvesm status still complains with
mount error: mount.nfs: /mnt/pve/templates is busy or already mounted

send the output of awk '$3 ~ /^nfs/ {print $0}' /proc/mounts
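To make the leftover-mount case easy to spot, the same awk filter can be narrowed to the /mnt/pve hierarchy. `list_pve_nfs_mounts` is a hypothetical helper, not a PVE tool: any path it prints while the corresponding storage is disabled is a stale manual mount that would trigger the "busy or already mounted" error.

```shell
# Hypothetical helper: print NFS/NFSv4 mount points under /mnt/pve
# from a mounts table (defaults to /proc/mounts).
list_pve_nfs_mounts() {
  awk '$3 ~ /^nfs/ && $2 ~ /^\/mnt\/pve\// { print $2 }' "${1:-/proc/mounts}"
}

# Usage:
#   list_pve_nfs_mounts    # inspect the live system
```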
 
