NFSv4 server in openvz container?

welbers

Dec 31, 2011
Hello,

I tried to get an NFSv4 server running in an OpenVZ container, but it doesn't work.

I enabled all NFS-features for the openvz container:
FEATURES="sysfs:eek:n nfs:eek:n nfsd:eek:n "

What does work is an NFSv3 server in OpenVZ and mounting v3 and v4 shares, but that's not what I need.

I need an NFSv4 server in OpenVZ with Kerberos support.
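
For reference, the kind of export I want to serve from the container would look roughly like this (the path is just an example):

Code:
# /etc/exports inside the container
# fsid=0 marks the NFSv4 pseudo-root, sec=krb5 asks for Kerberos authentication
/srv/nfs4  *(rw,sync,fsid=0,no_subtree_check,sec=krb5)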

I'm getting these error messages in the container when I start the NFSv4 server:
rpc.idmapd[481]: nfsdreopen: Opening '/proc/net/rpc/nfs4.nametoid/channel' failed: errno 2 (No such file or directory)
rpc.idmapd[481]: nfsdreopen: Opening '/proc/net/rpc/nfs4.idtoname/channel' failed: errno 2 (No such file or directory)

Is there another feature I need to enable? Is there any way to get it running? I'm out of ideas and Google didn't really help.
 
In the Readme.pdf of Virtuozzo 4.7 I found the answer: Running NFSv4 servers in Containers is not supported.

This is a known limitation, but a very annoying one. Virtuozzo is a commercial product, yet it can't act as a fileserver over NFSv4, and v4 isn't exactly new. I don't understand it.

As OpenVZ is the community version of Virtuozzo, I assume it suffers from the same limitation.
 

The OpenVZ team will know the answer, so just ask them.
 
Hello,

Sorry to bump this old thread, but I have the same problem.

I'm trying to configure NFSv4 with a container (CT); I use the container as the client.

My configuration (Debian/Lenny):

pve-manager: 1.9-26 (pve-manager/1.9/6567)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 1.9-50
pve-kernel-2.6.32-4-pve: 2.6.32-33
pve-kernel-2.6.32-6-pve: 2.6.32-55+ovzfix-1
qemu-server: 1.1-32
pve-firmware: 1.0-15
libpve-storage-perl: 1.0-19
vncterm: 0.9-2
vzctl: 3.0.29-3pve1
vzdump: 1.2-16
vzprocps: 2.0.11-2
vzquota: 3.0.11-1dso1
pve-qemu-kvm: 0.15.0-2
ksm-control-daemon: 1.0-6

I would like to use idmapd with NFSv4 to keep the user accounts in sync between the NFS client and server.

I enabled the nfs and nfsd features on the CT, but I get these messages in the log when I try to start nfs-common:


Apr 9 09:03:10 rez10 rpc.idmapd[18504]: libnfsidmap: using domain: rezoo.fr
Apr 9 09:03:10 rez10 rpc.idmapd[18504]: libnfsidmap: processing 'Method' list
Apr 9 09:03:10 rez10 rpc.idmapd[18504]: libnfsidmap: loaded plugin /usr/lib/libnfsidmap/nsswitch.so for method nsswitch
Apr 9 09:03:10 rez10 rpc.idmapd[18505]: Expiration time is 600 seconds.
Apr 9 09:03:10 rez10 rpc.idmapd[18505]: nfsdopenone: Opening /proc/net/rpc/nfs4.nametoid/channel failed: errno 2 (No such file or directory)
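
For reference, the idmapd configuration in the CT is nothing special; the relevant part of /etc/idmapd.conf looks roughly like this (the domain is the one shown in the log, the mapping entries are the Debian defaults):

Code:
[General]
Verbosity = 0
Domain = rezoo.fr

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup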



I searched on Google and didn't find a solution. However, I found this => comments.gmane.org/gmane.linux.openvz.user/4640

I'm extracting the most important part:

Quote:
Having said that, only NFS v2 and NFS v3 are supported inside a CT.

We are currently working on making NFS v4 work inside containers, but we are doing it for mainline kernels rather than the RHEL6 kernel. So whenever we port OpenVZ to one of the 3.3 kernels, it will most probably have NFS v4 support. The RHEL7-based OpenVZ kernel will have it, too.

I can successfully mount my share in the CT with NFSv4, but idmapd does not work.
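
The mount itself succeeds with something like this (server name and mount point are placeholders):

Code:
mount -t nfs4 nfsserver.rezoo.fr:/ /mnt/share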

For your information, NFSv4 + idmapd works on the root host...

Could you please help me?
 
Oops... sorry.

Do you think upgrading to the latest 2.X would correct the problem? Is NFSv4 + idmapd implemented in it?
 
It's a big migration just to try it out :) .....

I hope someone has the same problem and can tell me whether it works on Proxmox 2.X.
 
Just a quick thought: since you said that NFS works on the root host, maybe you could run NFS from there and then bind-mount the folders into the CT? There are instructions for binding folders from the host into a CT in the Proxmox or OpenVZ wiki; I did it some time ago.

Of course, I'm not sure this wouldn't do something batshit insane, and if the CTs are going to be accessed by third parties I wouldn't mount host folders inside that CT ;)
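
A rough sketch of what I mean, assuming CT ID 101 and made-up paths; run on the host while the CT is running:

Code:
# export /srv/data from the host's NFS server as usual, or skip NFS entirely
# and just make the host directory visible inside the CT:
mkdir -p /var/lib/vz/root/101/mnt/share
mount --bind /srv/data /var/lib/vz/root/101/mnt/share

The OpenVZ wiki method puts the mount --bind into a /etc/vz/conf/<CTID>.mount script so it is re-applied every time the CT starts.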
 
I installed Proxmox 2.3:

pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-18-pve
proxmox-ve-2.6.32: 2.3-88
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-18
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-6
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-8
ksm-control-daemon: 1.1-1


I have the same problem!!! Help!!!
 
Quote:
We are currently working on making NFS v4 work inside containers, but we are doing it for mainline kernels rather than the RHEL6 kernel. So whenever we port OpenVZ to one of the 3.3 kernels, it will most probably have NFS v4 support. The RHEL7-based OpenVZ kernel will have it, too.

Well, there's your answer. NFSv4 inside containers will only be available once RHEL7 comes out, since RHEL7 means a 3.x (3.8?) OpenVZ kernel.
 
