NFS4 idmap problem after upgrade to Proxmox 5


New Member
Jan 23, 2018
Recently we upgraded from Proxmox 4.4 to version 5.1. We are running several Ubuntu 16.04/14.04 and CentOS 6 containers for remote desktops, computations, and data analysis tasks. We have NFS servers for user homes (OmniOS/ZFS) with separate ZFS filesystems that have to be mounted using NFS4, hence we use autofs+LDAP or systemd automounting. Everything worked fine under version 4.4. However, after the upgrade to 5.1, NFS id mapping has stopped working, so all files in home directories are now mapped to "nobody:users". The same problem occurs on all Ubuntu and CentOS 6 containers.
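For context, the systemd flavor of our automounting looks roughly like this pair of units (a sketch based on our mount options; unit file names must match the systemd-escaped mount path):

```ini
# /etc/systemd/system/compass-home-fridrich.mount
[Unit]
Description=NFS4 home for fridrich

[Mount]
What=nfsserv1:/compass/home/fridrich
Where=/compass/home/fridrich
Type=nfs4
Options=vers=4.0,hard,proto=tcp,timeo=50,retrans=2,sec=sys

# /etc/systemd/system/compass-home-fridrich.automount
[Unit]
Description=Automount NFS4 home for fridrich

[Automount]
Where=/compass/home/fridrich

[Install]
WantedBy=multi-user.target
```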

We use a custom AppArmor profile:
root@vmhost6:~# cat /etc/apparmor.d/lxc/lxc-default-with-netmounts
# Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default-with-netmounts flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

# allow standard blockdevtypes.
# The concern here is in-kernel superblock parsers bringing down the
# host with bad data.  However, we continue to disallow proc, sys, securityfs,
# etc to nonstandard locations.
  mount fstype=rpc_pipefs,
  mount fstype=nfs*,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
  mount fstype=autofs,
}

We have also tried the unconfined profile, but it did not help.
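For completeness, the profile is selected per container in /etc/pve/lxc/&lt;CTID&gt;.conf (with lxc 2.1 the key is lxc.apparmor.profile; pre-2.1 configs used lxc.aa_profile):

```ini
# /etc/pve/lxc/<CTID>.conf
lxc.apparmor.profile: lxc-container-default-with-netmounts

# for testing without confinement:
# lxc.apparmor.profile: unconfined
```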

The nfs-idmapd service is running and the domain matches idmapd.conf:
fridrich@soroban-node-02 ~ $ hostname -d
fridrich@soroban-node-02 ~ $ cat /etc/idmapd.conf

[General]
Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs
# set your own domain here, if it differs from FQDN minus hostname
Domain =

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
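Since the syslog message below shows an empty domain, one thing we may still try is pinning the domain explicitly instead of relying on autodetection (the domain value here is a placeholder for our real one):

```ini
[General]
Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs
# placeholder -- use the real NFSv4 domain shared by clients and server
Domain = example.com

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
```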

The mounts look like this:
fridrich@soroban-node-02 ~ $ cat /proc/mounts | grep fri
nfsserv1:/compass/home/fridrich /compass/home/fridrich nfs4 rw,nosuid,nodev,noatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=50,retrans=2,sec=sys,clientaddr=,local_lock=none,addr= 0 0
And on the fileserver:
root@nfsserv1:/root# zfs get sharenfs compass/home/fridrich
NAME                   PROPERTY  VALUE                                 SOURCE
compass/home/fridrich  sharenfs  sec=sys,sec=krb5,sec=krb5i,sec=krb5p  inherited from compass

In /var/log/syslog on the Proxmox host one can see messages like this:
435036:Jan 23 07:15:15 vmhost6 nfsidmap[35780]: nss_getpwnam: name 'fridrich' not found in domain ''
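To illustrate why the empty domain in that message ends up as nobody:users, nsswitch-based idmapping roughly does the following (a simplified sketch for illustration, not the actual libnfsidmap code):

```python
import pwd

NOBODY_UID = 65534  # typical uid for "nobody", used when mapping fails

def name_to_uid(nfs4_owner: str, local_domain: str) -> int:
    """Map an NFSv4 owner string 'user@domain' to a local uid,
    roughly mimicking what nsswitch-based idmapping does."""
    user, _, domain = nfs4_owner.partition("@")
    # If the domain on the wire does not match the client's idmap
    # domain, the name is rejected and falls back to nobody.
    if domain != local_domain:
        return NOBODY_UID
    try:
        return pwd.getpwnam(user).pw_uid
    except KeyError:
        return NOBODY_UID

# With an empty local domain (as in the log above), every owner
# arriving as 'user@some.domain' maps to nobody:
print(name_to_uid("root@tok.ipp.cas.cz", ""))                # 65534
print(name_to_uid("root@tok.ipp.cas.cz", "tok.ipp.cas.cz"))  # 0
```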

dmesg does not show any apparmor problems.

When running rpc.idmapd directly, a missing event_base warning occurs:
root@soroban-node-02:~# rpc.idmapd -fvvvvvvvvvvvvv
rpc.idmapd: libnfsidmap: using domain:
rpc.idmapd: libnfsidmap: Realms list: 'TOK.IPP.CAS.CZ'
rpc.idmapd: libnfsidmap: processing 'Method' list
rpc.idmapd: libnfsidmap: loaded plugin /lib/x86_64-linux-gnu/libnfsidmap/ for method nsswitch

rpc.idmapd: Expiration time is 600 seconds.
rpc.idmapd: Opened /proc/net/rpc/nfs4.nametoid/channel
rpc.idmapd: Opened /proc/net/rpc/nfs4.idtoname/channel
rpc.idmapd: New client: 18
rpc.idmapd: Opened /run/rpc_pipefs/nfs/clnt18/idmap
rpc.idmapd: New client: 24
rpc.idmapd: New client: 3a
rpc.idmapd: New client: 3b
rpc.idmapd: New client: 3c
[warn] event_del: event has no event_base set.

On the Proxmox host, when I manually create a user with the correct uid and manually mount the NFS share, the user sees the files in the mounted folder with correct ownership.

Version information:
root@vmhost6:~# uname -a
Linux vmhost6 4.13.13-5-pve #1 SMP PVE 4.13.13-36 (Mon, 15 Jan 2018 12:36:49 +0100) x86_64 GNU/Linux
root@vmhost6:~# pveversion -v
proxmox-ve: 5.1-36 (running kernel: 4.13.13-5-pve)
pve-manager: 5.1-42 (running version: 5.1-42/724a6cb3)
pve-kernel-4.13.13-5-pve: 4.13.13-36
pve-kernel-4.10.17-5-pve: 4.10.17-25
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-19
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-16
pve-qemu-kvm: 2.9.1-6
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9
NFS versions on the Proxmox server:
root@vmhost6:~# apt search nfs | grep install | grep nfs

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

libnfsidmap2/stable,now 0.25-5.1 amd64 [installed]
nfs-common/stable,now 1:1.3.4-2.1 amd64 [installed]
nfs-kernel-server/stable,now 1:1.3.4-2.1 amd64 [installed]
nfs4-acl-tools/stable,now 0.3.3-3 amd64 [installed]
NFS versions inside the Ubuntu 16.04 LXC:
fridrich@soroban-node-02 ~ $ apt search nfs | grep install | grep nfs

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

libnfsidmap2/xenial,now 0.25-5 amd64 [installed,automatic]
nfs-common/xenial-updates,now 1:1.2.8-9ubuntu12.1 amd64 [installed]
nfs-kernel-server/xenial-updates,now 1:1.2.8-9ubuntu12.1 amd64 [installed]
nfs4-acl-tools/xenial,now 0.3.3-3 amd64 [installed]

I have already tried several (possibly unrelated) things to resolve the problem:
  • switching /sys/module/nfs/parameters/nfs4_disable_idmapping to "N"
  • switching off apparmor
  • switching off pve-firewall
  • booting older kernel pve-kernel-4.10
but nothing has helped. I suspect LXC 2, but I have not found any similar problem on the internet yet...
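For the record, the nfs4_disable_idmapping switch was applied both at runtime and persistently (the modprobe.d filename is arbitrary):

```ini
# runtime:
#   echo N > /sys/module/nfs/parameters/nfs4_disable_idmapping
# persistent, e.g. in /etc/modprobe.d/nfs-idmap.conf:
options nfs nfs4_disable_idmapping=N
```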

Does anybody know what else to try?

Thanks in advance...


