After update from 8.4 to 9.0 NFS doesn't work

p-user

Hi Fiona,

I've got a tiny Proxmox cluster: two N100/N150 nodes with 32 GB each, plus an extra box with the same specs (except 16 GB) running PBS.
The PBS box also acts as an extra quorum device, so the two nodes effectively form a three-vote cluster.
For cluster storage I use an NFS server, which worked fine on version 8.4.
After the update the PBS storage came up fine, but the NFS share did not, on either node.

Here's my storage.cfg:
root@pve1:/etc/pve# cat storage.cfg
dir: local
disable
path /var/lib/vz
content iso,vztmpl,backup
shared 0

lvmthin: local-lvm
disable
thinpool data
vgname pve
content images,rootdir

dir: Storage
path /mnt/pve/Storage
content import,snippets,iso,rootdir,vztmpl,images
nodes pve2
prune-backups keep-all=1
shared 0

pbs: Backup_General
datastore PBSStorage
server pbs.pauw.local
content backup
fingerprint ######################################
namespace Backup
prune-backups keep-all=1
username root@pam

pbs: Backup_Mail
datastore PBSStorage
server pbs.pauw.local
content backup
fingerprint ######################################
namespace MailBackup
prune-backups keep-all=1
username root@pam

nfs: Storage_NFS
export /mnt/user/nfs
path /mnt/pve/Storage_NFS
server nfs.pauw.local
content images,iso
nodes pve2,pve1
preallocation metadata
prune-backups keep-all=1

root@pve1:/etc/pve# showmount -e nfs.pauw.local
Export list for nfs.pauw.local:
/mnt/user/nfs 192.168.2.*

Here are the settings:
[screenshot: NFS storage settings in the GUI]
These settings haven't changed.
And here are the last bits of the /var/log/syslog on one of the nodes:
2025-08-06T17:31:40.346419+02:00 pve1 pvedaemon[24999]: mount error: exit code 32
2025-08-06T17:31:43.572887+02:00 pve1 pve-firewall[1307]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
2025-08-06T17:31:49.430577+02:00 pve1 pvestatd[1312]: mount error: exit code 32
[the same pvestatd "mount error: exit code 32" and pve-firewall "status update error" pair repeats roughly every 10 seconds until:]
2025-08-06T17:35:40.104395+02:00 pve1 systemd[1]: Started session-16.scope - Session 16 of User root.
2025-08-06T17:35:43.890755+02:00 pve1 pve-firewall[1307]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
2025-08-06T17:35:49.902984+02:00 pve1 pvestatd[1312]: mount error: exit code 32

I have never configured anything for the firewall.

As a bonus, another problem cropped up. I restored a VM from the PBS to local storage, tried to start it, and got this:

TASK ERROR: KVM virtualisation configured, but not available. Either disable in VM configuration or enable in BIOS.

Which puzzled me, as nothing was changed in the BIOS of the nodes.
 
root@pve1:/etc/pve# showmount -e nfs.pauw.local
Export list for nfs.pauw.local:
/mnt/user/nfs 192.168.2.*
Okay, so listing the shares works at least, that's a start.

2025-08-06T17:31:49.430577+02:00 pve1 pvestatd[1312]: mount error: exit code 32
2025-08-06T17:31:53.585853+02:00 pve1 pve-firewall[1307]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
It's unfortunate that there aren't more details. Could you try mounting the share manually with mount -t nfs nfs.pauw.local:/mnt/user/nfs /mnt/pve/Storage_NFS? What does pvesm status say?
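For example, something along these lines (mount point taken from your storage.cfg above, adjust as needed):

mkdir -p /mnt/pve/Storage_NFS
mount -v -t nfs nfs.pauw.local:/mnt/user/nfs /mnt/pve/Storage_NFS
dmesg | tail -n 20    # the kernel log often contains the actual reason behind "exit code 32"
pvesm status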

I have never configured anything for the firewall.

As a bonus, another problem cropped up. I restored a VM from the PBS to local storage, tried to start it, and got this:

TASK ERROR: KVM virtualisation configured, but not available. Either disable in VM configuration or enable in BIOS.

Which puzzled me, as nothing was changed in the BIOS of the nodes.
This also sounds very strange. Could you share the full journal for the current boot, i.e. journalctl -b > /tmp/boot.txt?
 
Hi Fiona,
thanks for your help. I'm at work right now, but will report this evening, i.e. in about six hours.
Let's hope it is an easy fix.
Regards,
Albert
 
Same for me!! Unable to start any VM.

mount -vvv -t nfs truenas.denis.prive:/mnt/POOL-ZFS01/DS-V /ZZ1
mount.nfs: failed to prepare mount: No such device


lsmod | grep -i nfs
(no output)

pvesm status
storage 'nfs_lv_vm' is not online
mount.nfs: failed to prepare mount: No such device
mount error: exit code 32
mount.nfs: failed to prepare mount: No such device
mount error: exit code 32
Name Type Status Total Used Available %
local dir active 28461108 5653796 21336236 19.86%
lv_vm dir active 1269650848 1017445124 194237168 80.14%
nfs_lv_vm nfs inactive 0 0 0 0.00%
nfs_truenas_backups nfs inactive 0 0 0 0.00%
nfs_x_images_iso nfs inactive 0 0 0 0.00%


journalctl -b | grep -i nfs
Aug 07 09:46:59 r740-1 systemd[1]: Mounting proc-fs-nfsd.mount - NFSD configuration filesystem...
Aug 07 09:46:59 r740-1 systemd[1]: proc-fs-nfsd.mount: Mount process exited, code=exited, status=32/n/a
Aug 07 09:46:59 r740-1 systemd[1]: proc-fs-nfsd.mount: Failed with result 'exit-code'.
Aug 07 09:46:59 r740-1 systemd[1]: Failed to mount proc-fs-nfsd.mount - NFSD configuration filesystem.
Aug 07 09:46:59 r740-1 systemd[1]: Dependency failed for nfs-server.service - NFS server and services.
Aug 07 09:46:59 r740-1 systemd[1]: Dependency failed for nfs-mountd.service - NFS Mount Daemon.
Aug 07 09:46:59 r740-1 systemd[1]: nfs-mountd.service: Job nfs-mountd.service/start failed with result 'dependency'.
Aug 07 09:46:59 r740-1 systemd[1]: Dependency failed for nfs-idmapd.service - NFSv4 ID-name mapping service.
Aug 07 09:46:59 r740-1 systemd[1]: nfs-idmapd.service: Job nfs-idmapd.service/start failed with result 'dependency'.
Aug 07 09:46:59 r740-1 systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
Aug 07 09:46:59 r740-1 systemd[1]: Dependency failed for nfsdcld.service - NFSv4 Client Tracking Daemon.
Aug 07 09:46:59 r740-1 systemd[1]: nfsdcld.service: Job nfsdcld.service/start failed with result 'dependency'.
Aug 07 09:46:59 r740-1 mount[990]: mount: /proc/fs/nfsd: unknown filesystem type 'nfsd'.
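The "unknown filesystem type 'nfsd'" line means the kernel can't load the nfsd module at all. Generic commands to check whether the NFS modules even exist for the running kernel:

ls /lib/modules/$(uname -r)/kernel/fs | grep -i nfs    # are the module directories there?
modprobe -v nfs     # try loading the client module by hand
modprobe -v nfsd    # and the server module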

systemctl restart nfs-kernel-server
journalctl -xe
 A start job for unit rpc-gssd.service has finished with a failure.
░░
░░ The job identifier is 1418 and the job result is dependency.
Aug 07 10:28:28 r740-1 systemd[1]: rpc-gssd.service: Job rpc-gssd.service/start failed with result 'dependency'.
Aug 07 10:28:28 r740-1 systemd[1]: rpc_pipefs.target: Job rpc_pipefs.target/start failed with result 'dependency'.
Aug 07 10:28:28 r740-1 systemd[1]: proc-fs-nfsd.mount: Mount process exited, code=exited, status=32/n/a
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ An n/a= process belonging to unit proc-fs-nfsd.mount has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 32.
Aug 07 10:28:28 r740-1 systemd[1]: proc-fs-nfsd.mount: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit proc-fs-nfsd.mount has entered the 'failed' state with result 'exit-code'.
Aug 07 10:28:28 r740-1 systemd[1]: Failed to mount proc-fs-nfsd.mount - NFSD configuration filesystem.
░░ Subject: A start job for unit proc-fs-nfsd.mount has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit proc-fs-nfsd.mount has finished with a failure.
░░
░░ The job identifier is 1406 and the job result is failed.
Aug 07 10:28:28 r740-1 systemd[1]: Dependency failed for nfs-server.service - NFS server and services.
░░ Subject: A start job for unit nfs-server.service has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit nfs-server.service has finished with a failure.
░░
░░ The job identifier is 1281 and the job result is dependency.
Aug 07 10:28:28 r740-1 systemd[1]: Dependency failed for nfs-mountd.service - NFS Mount Daemon.
░░ Subject: A start job for unit nfs-mountd.service has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit nfs-mountd.service has finished with a failure.
░░
░░ The job identifier is 1407 and the job result is dependency.
Aug 07 10:28:28 r740-1 systemd[1]: nfs-mountd.service: Job nfs-mountd.service/start failed with result 'dependency'.
Aug 07 10:28:28 r740-1 systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
Aug 07 10:28:35 r740-1 pvestatd[1765]: storage 'nfs_lv_vm' is not online
Aug 07 10:28:35 r740-1 pvestatd[1765]: mount error: exit code 32
Aug 07 10:28:35 r740-1 pvestatd[1765]: mount error: exit code 32
Aug 07 10:28:35 r740-1 pve-firewall[1764]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
 
More info:

systemctl restart nfs-server
journalctl -xe
...
The unit run-rpc_pipefs.mount has entered the 'failed' state with result 'exit-code'.
Aug 07 10:48:15 r740-1 systemd[1]: Failed to mount run-rpc_pipefs.mount - RPC Pipe File System.
░░ Subject: A start job for unit run-rpc_pipefs.mount has failed
 
Hi,
@titou10 are you running your NFS server on the same node as the client? Otherwise, you shouldn't need the nfs-server package and service there.
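If you don't actually need it there, disabling it should be safe, e.g.:

systemctl disable --now nfs-server    # stop the NFS server and keep it from starting at boot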
 
Creating a new NFS storage at the console fails with
create storage failed: storage 'xxx' is not online (500)

Editing an NFS storage to disable it fails with:
update storage failed: no such option 'maxfiles' (500)

'maxfiles' cannot be found in any file under /etc/pve...

Totally stuck here
 
Hi,
@titou10 are you running your NFS server on the same node as the client? Otherwise, you shouldn't need the nfs-server package and service there.
Yes. It worked fine with 8.x; I didn't change anything after the v9 upgrade.
But the NFS client is broken too; see my previous post with the attempt to create a new NFS storage pointing to another NFS server.
 
Creating a new NFS storage at the console fails with
create storage failed: storage 'xxx' is not online (500)
If the NFS server is not running, this is no surprise. Please share the full log from the current boot: journalctl -b > /tmp/boot.txt
Editing an NFS storage to disable it fails with:
update storage failed: no such option 'maxfiles' (500)

'maxfiles' cannot be found in any file under /etc/pve...
Regarding the maxfiles parameter: it was deprecated long ago and has been dropped in Proxmox VE 9. The pve8to9 checker script would tell you and this is also noted in the breaking changes. Remove the setting and use prune-backups keep-all=1 if you had set maxfiles to 0, or prune-backups keep-last=N if you had maxfiles set to a non-zero N.
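For illustration, the change in /etc/pve/storage.cfg would look like this (example values, not your actual retention settings):

Before (PVE 8):
maxfiles 3

After (PVE 9):
prune-backups keep-last=3

(and a former maxfiles 0, i.e. "keep everything", becomes prune-backups keep-all=1)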
 
The pve8to9 checker script would tell you and this is also noted in the breaking changes.

Nope. Setting prune-backups keep-all=1 in storage.cfg solved the "edit storage" console issue, but not the creation of a new NFS storage pointing to an external NFS server.

'boot.txt' attached. In there you can find this, which may be related to the problem:
systemd[1]: Starting of proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point unsupported
 

Hi Fiona,
here are my replies:

root@pve2:~# mount -t nfs nfs.pauw.local:/mnt/user/nfs /mnt/pve/Storage_NFS
mount.nfs: failed to prepare mount: No such device

FYI, the last few thousand lines of boot.txt are just repetitions of the firewalld messages.

Regards,
Albert
 


@p-user and @fiona, I solved my own problem.
The only warning I had from pve8to9 was that I should install the "intel-microcode" package:
WARN: The matching CPU microcode package 'intel-microcode' could not be found! Consider installing it to receive the latest security and bug fixes for your CPU.

I installed it and now NFS works!! (and many other things)

To install it, add the "non-free-firmware" component to /etc/apt/sources.list.d/debian.sources:

Types: deb
URIs: https://deb.debian.org/debian/
Suites: trixie
Components: main non-free-firmware
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg

or, if you use the old-style /etc/apt/sources.list:
deb http://deb.debian.org/debian trixie main contrib non-free non-free-firmware
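After an apt update, apt policy intel-microcode should then show a candidate version coming from non-free-firmware:

apt update
apt policy intel-microcode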

Then "apt update && apt install intel-microcode", reboot and youou.it works
Now "lsmod" shows way more modules loaded...

It also solved the KVM problem that shows up when the microcode is not installed:
KVM virtualisation configured, but not available. Either disable in VM configuration or enable in BIOS
...and cpu=host does not work either.

Now this also works
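Quick generic checks that KVM is available again:

lsmod | grep kvm    # kvm and kvm_intel must be loaded
ls -l /dev/kvm      # the device node must exist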

@fiona IMHO the warning about the missing "intel-microcode" package is misleading. It was a blocker in my case, so it should be shown as such, i.e. as a blocker.

lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz
 
That's great, thank you so much. I will try it tomorrow. I had already taken one of the nodes out of the cluster for a clean install, but hadn't gotten round to that bit yet. If this works, I'll have to find a way to add it back into the cluster using the same hostname.
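(From what I've read, rejoining goes roughly like this; I'll double-check the cluster docs first, since the old entry has to be removed before the hostname can be reused:

pvecm delnode <old-node-name>       # on a node still in the cluster
pvecm add <IP-of-a-cluster-node>    # on the freshly reinstalled node

The placeholders are just that, placeholders.)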

FYI, I had no warning about missing intel-microcode in the pve8to9 script. I had a warning that two VMs were running, so I stopped them and pve8to9 came back all green.

If it works, you are a life saver.

Regards Albert
 
If it works, you are a life saver.
yw. Fingers crossed.
Before installing the intel-microcode package, lsmod was showing about 20 modules. Now there are 118, including the nfs, nfsd, netfs, nfsv4... modules that were not there before.
In my case the intel-microcode package is required. I don't see a direct relationship with the NFS features/services/shares not working, except that the required modules were not loaded by the kernel, probably because the motherboard (a Dell PowerEdge R740 in my case) was not detected correctly.
 
We'll see; my specs are more modest. It's a GMKtec G3 Plus with 32 GB memory, a 1 TB NVMe drive, and an Intel N150 processor. Works great though (when it works).