[SOLVED] Problem with NFSv4 ID mapping - probably not a Proxmox issue

deang

Member
Jul 27, 2015
I've posted a question to linuxquestions.org about a problem I'm having mapping IDs with NFSv4 between a physical CentOS 6 NFSv4 server and a CentOS 7 NFSv4 client running in a Proxmox CT. I'm pretty certain this is NOT a Proxmox issue, but figured I'd ask.

My question is: is there any configuration necessary on a Proxmox 6.0-6 host to allow NFSv4 ID mapping to pass through to a CentOS 7 CT acting as an NFSv4 client? The CT is marked as privileged and it successfully mounts the NFS mount points from the physical NFS server, but IDs are being mapped to nfsnobody instead of the real UIDs.

As I mentioned, I don't think this is a Proxmox issue, but maybe I'm missing some NFSv4 option that needs to be set on the Proxmox host?

If you want the details they are here:

https://www.linuxquestions.org/questions/showthread.php?p=6042715
 
I just posted some information that may be useful to you [here].

That information is for unprivileged containers but should work for you as well. The UID/GID handling is different for a privileged container, because the root UID/GID should match the host's. You can also try re-exporting with NFS's no_root_squash option.
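For reference, a minimal sketch of an exports line with no_root_squash (the path here is a placeholder, adjust to your setup); run exportfs -ra on the server afterwards to apply it:

/srv/share client7(rw,no_root_squash,no_subtree_check)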

If that info doesn't help, post your NFS server's /etc/exports line and the NFS mount details from /etc/pve/storage.cfg on PVE.
 
Thanks Republicus. I was missing no_root_squash from my NFS server's exports. I added it, refreshed everything and the problem persists. Here is my /etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

zfspool: local-zfs
        pool prox2pool01/CONTAINERS
        sparse
        content images,rootdir

And here is my CT's config in /etc/pve/lxc/105.conf:

arch: amd64
cores: 2
features: mount=nfs,nesting=1
hostname: client7
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=nnn.mmm.238.1,hwaddr=BE:5B:3E:1D:58:15,ip=nnn.mmm.238.64/24,type=veth
net1: name=eth1,bridge=vmbr1,hwaddr=4E:85:D9:3C:DD:81,ip=10.2.8.3/14,type=veth
onboot: 1
ostype: centos
rootfs: local-zfs:subvol-105-disk-0,size=300G
swap: 512

The exports entry on the NFS server is:

/NFS/html client7(rw,insecure,no_root_squash,no_subtree_check,nohide)
 
I don't see any NFS mounts in storage.cfg.
Are you mounting manually via fstab?
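If it is fstab, a typical NFSv4 entry on the client would look something like this (server name and paths are placeholders, not taken from your setup):

server.example.com:/NFS/html /mnt/html nfs4 defaults,_netdev 0 0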

Can you share the results of:
nfsstat -m

Ensure your server can ping client7 and your container responds.

I'm not sure the features line in your container config does anything here.

What I would do is verify that your PVE node is able to mount and read/write to the mount point first on the PVE host.
Only after you verify that, add a bind mount to your container.
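For example, something along these lines on the PVE host (the server name and mount point here are placeholders, not from your setup):

mkdir -p /mnt/nfs-html
mount -t nfs4 nfsserver:/NFS/html /mnt/nfs-html
touch /mnt/nfs-html/testfile
ls -ln /mnt/nfs-html    # -n shows numeric UIDs/GIDs; they should match the server's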

Edit the container config by removing your features line and replacing it with a bind mount entry, e.g.:
mp0: /path/to/host/mount,mp=/path/inside/lxc
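As an alternative to editing the file by hand, the same bind mount can be added with pct (container ID and paths here are just examples):

pct set 105 -mp0 /mnt/nfs-html,mp=/NFS/html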
 
I see now what the problem is. On my PVE host I didn't have the user "ceramext" in the /etc/passwd file. On my existing PVE host, which is version 3, that wasn't necessary for the mapping to be passed to the CT. But with Proxmox 6, it looks like all of the users being passed to the container have to exist in /etc/passwd on the PVE host for the mapping to be correct.
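For anyone else hitting this, a sketch of the fix, assuming the user's UID/GID on the NFS server is 1001 (check yours with id ceramext on the server), is to create a matching entry on the PVE host:

groupadd -g 1001 ceramext
useradd -u 1001 -g 1001 -M -s /usr/sbin/nologin ceramext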
 
