Issue NFS between host and VM

Ernie95
Sep 1, 2025
Hi All,

I am trying to mount an NFS folder from an NFS server that runs inside a Proxmox VM.

In the VM I can see the exports:
Code:
nas4free: ~# showmount -e
Exports list on localhost:
/mnt/pool1/proxmox/save/toto       192.168.150.0
pool1/proxmox is a dataset, pool1/proxmox/save is a dataset, and toto is a plain folder (I also tried with a dataset only, with the same error).

On my PVE node, the command
Code:
pvesm scan nfs 192.168.150.21
displays
Code:
clnt_create: RPC: Program not registered
command '/sbin/showmount --no-headers --exports 192.168.150.21' failed: exit code 1
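
"RPC: Program not registered" usually means the server does not register the MOUNT RPC service that showmount relies on; this is typical of an NFSv4-only server, since v4 does not use the separate MOUNT protocol. A quick way to check what the server actually registers (a sketch using the IP from this thread; adjust to your setup):

```shell
# List the RPC programs registered on the NFS server (requires rpcbind there).
# If "nfs" is present but "mountd" is missing, the server is likely NFSv4-only:
# showmount and "pvesm scan nfs" will fail, but an explicit v4 mount may still work.
rpcinfo -p 192.168.150.21

# Probe the NFS service versions directly over TCP:
rpcinfo -t 192.168.150.21 nfs
```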

iptables on the node shows:
Code:
root@pve:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination     
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:nfs
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:sunrpc
ACCEPT     udp  --  anywhere             anywhere             udp dpt:nfs
ACCEPT     udp  --  anywhere             anywhere             udp dpt:sunrpc

I tried to add the storage under Datacenter → Storage:
(screenshot: Capture d’écran_2025-09-06_12-17-39.png)

and I get this error:
(screenshot: Capture d’écran_2025-09-06_12-17-57.png)

I get the same error whether I export a folder or a dataset in the VM.

I also tried editing /etc/pve/storage.cfg directly.
The storage appears, but the Save-NFS item shows a question mark on the node, and selecting it gives the same error (500).

No firewall is active at the moment (neither at datacenter level nor at VM level).

How can I solve this 500 error?
Or what should I look at so that the host and the VM can share a common storage? NFS or another option?

Thanks
 
Thanks. Done, but same issue:

Code:
nas4free1: ~# showmount -e 192.168.150.21
Exports list on 192.168.150.21:
/mnt/pool1/proxmox/save       192.168.150.0
/mnt/pool1/proxmox            192.168.150.0

I cannot export both /mnt/pool1/proxmox/save and /mnt/pool1/proxmox/save/toto, since the option that exports all subfolders is already enabled.
Error :
Code:
create storage failed: mount error: mount.nfs: access denied by server while mounting 192.168.150.21:/mnt/pool1/proxmox/save/toto (500)

If I modify storage.cfg directly:
Code:
nfs: NFS-pve
        export /mnt/pool1/proxmox/save
        path /mnt/pve/Save
        server 192.168.150.21
        content backup
        options soft,vers=4.2
Same 500 error (also with default, 4, or 4.1 as the version option).
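
For reference, the same storage definition can be created from the CLI instead of editing storage.cfg by hand (a sketch using the names from this thread; the storage ID is arbitrary):

```shell
# Define the NFS storage via the Proxmox CLI (equivalent to the storage.cfg entry above).
# pvesm attempts the mount when the storage is created, so the access-denied
# error would surface here directly rather than as a 500 in the GUI.
pvesm add nfs NFS-pve \
    --server 192.168.150.21 \
    --export /mnt/pool1/proxmox/save \
    --path /mnt/pve/Save \
    --content backup \
    --options soft,vers=4.2
```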

I can access the NFS share from other computers without issue.
 
A manual mount fails the same way:
Code:
root@pve:~# mount -t nfs 192.168.150.21:/mnt/pool1/proxmox/save /mnt/pve/Save
mount.nfs: access denied by server while mounting 192.168.150.21:/mnt/pool1/proxmox/save
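
When debugging this kind of failure it can help to force the NFS version explicitly, since v3 and v4 fail for different reasons (v3 goes through the MOUNT protocol, v4 resolves the path against the server's pseudo-root). A sketch, reusing the paths from this thread:

```shell
# Try an explicit NFSv4 mount with verbose output; under v4 the export path is
# resolved relative to the server's NFSv4 root, so a path that works for v3
# can be refused for v4 (and vice versa).
mount -t nfs -o vers=4,soft -v 192.168.150.21:/mnt/pool1/proxmox/save /mnt/pve/Save

# The same path over NFSv3 for comparison:
mount -t nfs -o vers=3,soft -v 192.168.150.21:/mnt/pool1/proxmox/save /mnt/pve/Save
```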
 
So what is FreeBSD worth as a file server if it denies access to its client, which tried to connect as root?
And as you wrote, your NAS is a VM that gets its disks from a passed-through PCI controller; what is that worth if you cannot even move it to another PVE host?
Give the disks to PVE and run an NFS server on it, and your main problem is gone; for the secondary one you would need an external storage enclosure.
 
Thanks, so the passthrough of the full disk to the VM blocks access by the host. Am I right?
I thought that an NFS or Samba share from the VM would allow the sharing.

Same issue with an SMB share: no access.
 
No, passthrough doesn't block the access, because the access happens over another protocol like NFS or SMB.
"...access denied by server..." is definitely an export definition problem.
 
Maybe the netmask is missing in your export definitions (you only showed the showmount -e IP output)?
E.g. /mnt/pool1/proxmox 192.168.150.0/24(options)
 
I will investigate, since I have the same issue with SMB.

Thanks again for your help
I am doing the reverse (NFS export on the Proxmox VE host to a Debian VM, over a private network in this case, otherwise NFS via WireGuard for a remote host), but I'd say that your /etc/exports seems way too simple ...

Here is mine for comparison; it ONLY grants access to the host 172.30.1.2/32:
Code:
# Docker Mirror (Private Network)
/zdata/nfs/docker-mirror    172.30.1.2/32(rw,no_root_squash,nohide,sync,no_subtree_check,fsid=100)

With /etc/fstab on the Guest:
Code:
# Docker Mirror Data Storage
172.30.1.1:/zdata/nfs/docker-mirror        /mnt/docker-mirror            nfs    rw,user=podman,auto,nofail,x-systemd.automount,mountvers=4.0,proto=tcp        0       0

It is important to remember to restart the NFS server and re-export all shares whenever you add new shares or change existing ones, including their options:
Code:
exportfs -a
exportfs -rv
systemctl restart nfs-kernel-server

Is 192.168.150.0 really the IP of your Proxmox Host ?

I usually add the netmask suffix to the /etc/exports line, typically /32 (single host) or /24 for a 255.255.255.0 netmask.
Not sure whether that is a problem in your case, but worth checking out :) .
 
Thanks Silverstone.
My VM with passthrough (HBA and other dedicated disks) runs Xigmanas (based on FreeBSD).
In Xigmanas I set up an NFS server with:
- NFSv4
- several exports: a dataset and a folder, since I had access problems (in order to test both).

This command output comes from the FreeBSD VM running Xigmanas:
Code:
nas4free1: ~# showmount -e 192.168.150.21
Exports list on 192.168.150.21:
/mnt/pool1/proxmox/save       192.168.150.0
/mnt/pool1/proxmox            192.168.150.0

It is 192.168.150.0 to allow access from all machines on this local network. It is a home server, not for enterprise use.

In my FreeBSD VM the exports file is:
Code:
/mnt/pool1/proxmox -alldirs -network 192.168.150.0/24
/mnt/pool1/proxmox/save -alldirs -network 192.168.150.0/24

I also tested the options:
- -mapall="root"
- -maproot="root"
instead of -alldirs.

In FreeBSD, mapall seems to correspond to no_root_squash on Debian.
None of these tests were successful (access denied, error 500).
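
On FreeBSD, -maproot/-mapall are credential-mapping options and are meant to be combined with -alldirs rather than used instead of it. For NFSv4, the exports file additionally needs a V4: line defining the v4 pseudo-root, which clients mount paths relative to. A sketch of what /etc/exports could look like under these assumptions (paths and network taken from this thread; whether Xigmanas exposes the V4: line in its GUI is an assumption to verify):

```shell
# /etc/exports on the FreeBSD/Xigmanas VM (sketch)
# "V4: /" sets the NFSv4 pseudo-root to /, so the client path stays
# /mnt/pool1/proxmox/save for v4 mounts as well.
V4: / -network 192.168.150.0/24
/mnt/pool1/proxmox/save -alldirs -maproot=root -network 192.168.150.0/24

# Then, in a shell on the VM, reload the daemons:
service mountd reload
service nfsd restart
```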

I understand your reverse idea. I started to set up a dataset on my local disk and enabled sharenfs:
Code:
ZFS_Mirror_PVE/share                     sharenfs  on        local
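
Note that sharenfs=on exports with OpenZFS defaults (read-write to everyone, root squashed). Access options can be embedded in the property itself; a sketch, assuming the dataset name from this thread and a Linux/OpenZFS host:

```shell
# Restrict the ZFS-managed NFS export to the local subnet and allow root access;
# on Linux, OpenZFS passes these option strings through to exportfs.
zfs set sharenfs="rw=@192.168.150.0/24,no_root_squash" ZFS_Mirror_PVE/share

# Verify what the kernel NFS server actually exports:
exportfs -v
```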

Then I tested this command with the Proxmox IP address (.20) and with the VM address (.21):
Code:
root@pve:~# pvesm scan nfs 192.168.150.20
/ZFS_Mirror_PVE/share *
root@pve:~# pvesm scan nfs 192.168.150.21
/ZFS_Mirror_PVE/share *

I am surprised to get the same result for both addresses: only the host should expose this NFS share, not the VM.

I think I am misunderstanding part of the Proxmox logic.

Thanks for any advice
 