[SOLVED] NFS issue between host and VM

Ernie95

Member
Sep 1, 2025
Hi All,

I am trying to mount an NFS folder from an NFS server running in a Proxmox VM.

In the VM I can see the exports:
Code:
nas4free: ~# showmount -e
Exports list on localhost:
/mnt/pool1/proxmox/save/toto       192.168.150.0
pool1/proxmox is a dataset
pool1/proxmox/save is a dataset
toto is a plain folder (I also tried with only a dataset, with the same error)

On my PVE node, the command:
Code:
pvesm scan nfs 192.168.150.21
displays:
Code:
clnt_create: RPC: Program not registered
command '/sbin/showmount --no-headers --exports 192.168.150.21' failed: exit code 1
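
Note: "RPC: Program not registered" typically means the server's rpcbind does not advertise the MOUNT program that showmount (and therefore pvesm scan nfs) relies on; the MOUNT protocol only exists for NFSv3, so an NFSv4-only server answers exactly like this. A quick way to list what the server actually advertises, assuming rpcinfo is available on the node:
Code:
rpcinfo -p 192.168.150.21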

iptables on the node shows:
Code:
root@pve:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination     
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:nfs
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:sunrpc
ACCEPT     udp  --  anywhere             anywhere             udp dpt:nfs
ACCEPT     udp  --  anywhere             anywhere             udp dpt:sunrpc

I tried via Datacenter → Storage:
(screenshot: adding the NFS storage, 2025-09-06)

and I get this error:
(screenshot: the resulting error, 2025-09-06)

Same error whether I export a folder or a dataset from the VM.

I also tried editing /etc/pve/storage.cfg directly.
The storage appears, but the Save-NFS item shows a '?' on the node, and if I select it, I get the same error (500).

No firewall is active at the moment (neither at the datacenter level nor at the VM level).

How can I solve this 500 error?
Or what do I need to look at so that the host and the VM can have a common share? NFS or another option?

Thanks
 
Thanks. Done, but same issue:

Code:
nas4free1: ~# showmount -e 192.168.150.21
Exports list on 192.168.150.21:
/mnt/pool1/proxmox/save       192.168.150.0
/mnt/pool1/proxmox                 192.168.150.0

I cannot export both /mnt/pool1/proxmox/save and /mnt/pool1/proxmox/save/toto, since I have the option that displays subfolders enabled.
Error :
Code:
create storage failed: mount error: mount.nfs: access denied by server while mounting 192.168.150.21:/mnt/pool1/proxmox/save/toto (500)

If I modify storage.cfg directly:
Code:
nfs: NFS-pve
        export /mnt/pool1/proxmox/save
        path /mnt/pve/Save
        server 192.168.150.21
        content backup
        options soft,vers=4.2
Same error 500 (also with the default, 4, or 4.1 as the version option).

I can access the NFS share from other computers without issue.
 
I am wondering if it is a mapall or maproot issue. I use mapall=root.
 
My NFS server in the VM runs FreeBSD. I added maproot=root (the equivalent of no_root_squash).

Same issue.
 
And a manual mount gives the same issue:
Code:
root@pve:~# mount -t nfs 192.168.150.21:/mnt/pool1/proxmox/save /mnt/pve/Save
mount.nfs: access denied by server while mounting 192.168.150.21:/mnt/pool1/proxmox/save
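
(It can help to force the NFS version explicitly to see which protocol the server refuses; a quick test using the same paths as above:)
Code:
mount -t nfs -o vers=3 192.168.150.21:/mnt/pool1/proxmox/save /mnt/pve/Save
mount -t nfs -o vers=4.1 192.168.150.21:/mnt/pool1/proxmox/save /mnt/pve/Save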
 
So what is FreeBSD worth as a file server if access is denied to its client, which tried to access it as root?
And as you wrote, your NAS is a VM and gets its disks from a passed-through PCI controller; what is that worth if you cannot even move it to another PVE host?
Give the disks to PVE and set up an NFS server on it, and your main problem is gone, while for the secondary one you would need an external storage enclosure.
 
Thanks, so the passthrough of the full disk to the VM blocks access by the host. Am I right?
I thought that an NFS or Samba share from the VM would allow the sharing.

Same issue with an SMB share: no access.
 
No, passthrough doesn't block the access, since the access happens over another protocol like NFS or SMB.
"...access denied by server..." is definitely an export definition problem.
 
Maybe the netmask is missing in your export definitions (you only showed the showmount -e IP output)?
Like e.g. /mnt/pool1/proxmox 192.168.150.0/24(options)
 
I will investigate, as I have the same issue with SMB.

Thanks again for your help
I am doing the reverse (NFS Export on the Proxmox VE Host to a Debian VM; Private Network in this Case, otherwise NFS via Wireguard if the Host is remote), but I'd say that your /etc/exports seems way too simple ...

Here is mine for Comparison that ONLY grants access to the Host 172.30.1.2/32:
Code:
# Docker Mirror (Private Network)
/zdata/nfs/docker-mirror    172.30.1.2/32(rw,no_root_squash,nohide,sync,no_subtree_check,fsid=100)

With /etc/fstab on the Guest:
Code:
# Docker Mirror Data Storage
172.30.1.1:/zdata/nfs/docker-mirror        /mnt/docker-mirror            nfs    rw,user=podman,auto,nofail,x-systemd.automount,mountvers=4.0,proto=tcp        0       0

It is important to remember to restart the NFS Server & re-export all Shares whenever you add new Shares or change existing ones, including their Options:
Code:
exportfs -a
exportfs -rv
systemctl restart nfs-kernel-server
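
(Side note for this thread: exportfs is Linux-only; on a FreeBSD-based server like XigmaNAS the rough equivalent after editing /etc/exports would be to make mountd re-read the file, e.g.:)
Code:
service mountd reload
service nfsd restart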

Is 192.168.150.0 really the IP of your Proxmox Host? That looks like a network address rather than a single host.

I usually always add the Netmask suffix to the /etc/exports Line, typically /32 (Single Host) or /24 for a 255.255.255.0 Netmask.
Not sure if that is a Problem or not in your Case, but worth checking out :) .
 
Thanks Silverstone.
My VM with passthrough (HBA and other dedicated disks) runs XigmaNAS (based on FreeBSD).
I set up an NFS server in XigmaNAS:
- NFS v4
- several exports: a dataset and a folder, since I had access problems (to test both).

This command output comes from the FreeBSD VM with XigmaNAS:
Code:
nas4free1: ~# showmount -e 192.168.150.21
Exports list on 192.168.150.21:
/mnt/pool1/proxmox/save       192.168.150.0
/mnt/pool1/proxmox                 192.168.150.0

It is 192.168.150.0 to give access to all local machines on this network. It is a home server, not for enterprise purposes.

In my FreeBSD VM, the exports file is:
Code:
/mnt/pool1/proxmox -alldirs -network 192.168.150.0/24
/mnt/pool1/proxmox/save -alldirs -network 192.168.150.0/24

I also tested the options:
- -mapall="root"
- -maproot="root"
instead of -alldirs.

In FreeBSD, maproot=root seems to correspond to Debian's no_root_squash.
None of the tests were successful (access denied, error 500).
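
(One thing worth checking, since the client tries v4.2 first: on stock FreeBSD, NFSv4 mounts require a V4: line in /etc/exports that defines the NFSv4 root; without it, only v3 is served. Using / as the V4 root keeps the mount paths unchanged. A sketch with the paths from this thread:)
Code:
/mnt/pool1/proxmox -alldirs -maproot=root -network 192.168.150.0/24
V4: / -network 192.168.150.0/24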

I understand your reverse idea. I started to set up a dataset on my local disk and enabled sharenfs:
Code:
ZFS_Mirror_PVE/share                     sharenfs  on        local
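
(Note that sharenfs=on exports with default options to everyone; on Linux OpenZFS the property also accepts exportfs-style options, so a more restrictive variant, with hypothetical values, could be:)
Code:
zfs set sharenfs="rw=@192.168.150.0/24,no_root_squash" ZFS_Mirror_PVE/share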

Then I tested this command against the Proxmox IP address (.20) and against the VM address (.21):
Code:
root@pve:~# pvesm scan nfs 192.168.150.20
/ZFS_Mirror_PVE/share *
root@pve:~# pvesm scan nfs 192.168.150.21
/ZFS_Mirror_PVE/share *

I am surprised to get the same result. Only the host should show the NFS share, not the VM.

I think I am misunderstanding some of the Proxmox logic.

Thanks for any advice
 
Hi All,

The verbose output of mount shows port 33112, which is not present in the PVE iptables rules:
Code:
root@pve:~# mount -v -t nfs 192.168.150.21:/mnt/pool1/proxmox/save /mnt/pve/Save
mount.nfs: timeout set for Tue Sep  9 11:50:05 2025
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.150.21,clientaddr=192.168.150.21'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'addr=192.168.150.21'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.150.21 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.150.21 prog 100005 vers 3 prot UDP port 33112
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 192.168.150.21:/mnt/pool1/proxmox/save

Must I add it to iptables?
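
(For context: port 33112 is mountd, which gets a dynamic port from rpcbind each time it starts, so it cannot be firewalled reliably unless it is pinned. On stock FreeBSD that would be done in /etc/rc.conf, with a hypothetical port number:)
Code:
mountd_flags="-p 33333"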

BR
 
Hi All,

My issue is the communication between the host and the VM. The host doesn't see the NFS server of the VM. Or the FreeBSD NFS server (in the VM) cannot communicate with the Debian NFS client (I think not, but it's just an assumption).

Why? I tested setting up an NFS server on a computer (not a VM) on my network, and it works between the host and that computer:

Code:
root@pve:~# pvesm scan nfs 192.168.150.21    (IP of the VM on the host)

root@pve:~# pvesm scan nfs 192.168.150.57
/media/proxmox 192.168.150.20/24
root@pve:~#

I am continuing my research.

If I create a user NFStest (UID 1001) in the host console and create the same user NFStest (UID 1001) on the NFS server in the VM, does the host change the UID inside the VM for host-VM communication?
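
(As far as I know, with NFSv3 and sec=sys the numeric UID travels over the wire unchanged, so only the numbers have to match on both ends; NFSv4 can map user names via idmapd instead. A quick check, assuming the user exists on both sides:)
Code:
id NFStest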

Thanks for any advice
 