OpenVZ Container in Napp-IT/OmniOS NFS

Raymond Burns

I cannot create an OpenVZ Container within my NFS Store.
I have created several KVMs within the store without any problems.

My pveversion
Code:
root@proxmox:/mnt/pve/containerNFS/private# pveversion -v
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-22-pve
proxmox-ve-2.6.32: 3.0-107
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-22-pve: 2.6.32-107
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-23
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1

The error output upon creation of the container, using a template downloaded from the template store:
Code:
Creating container private area (/var/lib/vz/template/cache/centos-6-standard_6.3-1_i386.tar.gz)
chmod: changing permissions of `/mnt/pve/proxmoxNFS/private/113.tmp': Operation not permitted
tar: ./usr/lib/libdns.so.81.4.1: Cannot open: Operation not permitted
tar: ./usr/lib/libpth.so.20: Cannot change ownership to uid 0, gid 0: Operation not permitted
tar: ./usr/lib/libtiff.so.3.9.4: Cannot open: Operation not permitted
tar: ./usr/lib/libnsssysinit.so: Cannot open: Operation not permitted
tar: ./usr/lib/libisccfg.so.82.0.1: Cannot open: Operation not permitted
tar: ./usr/lib/libtiffxx.so.3: Cannot change ownership to uid 0, gid 0: Operation not permitted
tar: ./usr/lib/libffi.so.5.0.6: Cannot open: Operation not permitted
The permission errors repeat about a thousand times, so I didn't paste them all.

The NFS share was set up by:
1. Create Pool using mirror
2. Extend pool using mirror
3. Create ZFS Filesystem
4. Under ZFS Filesystem turn "NFS" to "On"

That's all that was used for setup.
I have tried various ACL hacks, but I have reverted all settings back to standard. Again, the NFS share did mount properly in the Proxmox GUI.

Also,
Code:
root@proxmox:/mnt/pve/containerNFS/private# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 0


nfs: proxmoxNFS
        path /mnt/pve/proxmoxNFS
        server 192.168.0.198
        export /vdev1/proxmoxstore
        options vers=3
        content images,iso,vztmpl,rootdir,backup
        maxfiles 3


nfs: containerNFS
        path /mnt/pve/containerNFS
        server 192.168.0.198
        export /vdev1/containers
        options vers=3
        content rootdir
        maxfiles 1
 
First: For performance and stability, your mount options for nfs: containerNFS should be changed from 'options vers=3' to 'options vers=3,tcp'.
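For example, the containerNFS stanza in /etc/pve/storage.cfg (values taken from the config posted above) would become:
Code:
nfs: containerNFS
        path /mnt/pve/containerNFS
        server 192.168.0.198
        export /vdev1/containers
        options vers=3,tcp
        content rootdir
        maxfiles 1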
Second: The default ACL for NFS under OmniOS must be changed. The following assumes no CTs exist on the storage prior to this!

The following is taken from my storage server, where the shared NFS folder is called '/vMotion/nfs'. Remember to use the Illumos tools and not the GNU tools, since the GNU tools do not understand the advanced ACLs used in Illumos. So:

ls: /usr/bin/ls (use /usr/bin/ls -V to display the advanced ACLs)
chmod: /usr/bin/chmod
mkdir: /usr/bin/mkdir

The top folder '/vMotion/nfs':
drwxrwxrwx+ 8 root root 8 Aug 15 01:07 nfs
user:root:rwxpdDaARWcCos:fd-----:allow
everyone@:rwxpdDaARWc--s:fd-----:allow

Folders under '/vMotion/nfs' created by Proxmox:
drwxrwxrwx+ 7 root root 7 Aug 15 01:07 backup
user:root:rwxpdDaARWcCos:fd----I:allow
everyone@:rwxpdDaARWc--s:fd----I:allow
drwxrwxrwx+ 2 root root 37 Aug 15 23:07 dump
user:root:rwxpdDaARWcCos:fd-----:allow
everyone@:rwxpdDaARWc--s:fd-----:allow
drwxrwxrwx+ 3 root root 3 Jul 25 03:54 images
user:root:rwxpdDaARWcCos:fd-----:allow
everyone@:rwxpdDaARWc--s:fd-----:allow
drwxrwxrwx+ 3 root root 3 Aug 15 23:05 private
user:root:rwxpdDaARWcCos:fd-----:allow
everyone@:rwxpdDaARWcCos:fd-----:allow
drwxrwxrwx+ 3 root root 3 Jun 17 01:31 storage
user:root:rwxpdDaARWcCos:fd-----:allow
everyone@:rwxpdDaARWc--s:fd-----:allow
drwxrwxrwx+ 2 root root 2 Aug 15 23:07 vztmp
user:root:rwxpdDaARWcCos:fd-----:allow
everyone@:rwxpdDaARWc--s:fd-----:allow

The following commands should do it:
/usr/bin/chmod -R A=user:root:rwxpdDaARWcCos:fd-----:allow,everyone@:rwxpdDaARWc--s:fd-----:allow /vMotion/nfs

/usr/bin/chmod -R A=user:root:rwxpdDaARWcCos:fd-----:allow,everyone@:rwxpdDaARWcCos:fd-----:allow /vMotion/nfs/private


 
Still getting the error on CT creation:
Code:
Creating container private area (/var/lib/vz/template/cache/centos-6-standard_6.3-1_i386.tar.gz)
chmod: changing permissions of `/mnt/pve/containerNFS/private/115.tmp': Operation not permitted
tar: ./usr/lib/libdns.so.81.4.1: Cannot open: Operation not permitted
tar: ./usr/lib/libpth.so.20: Cannot change ownership to uid 0, gid 0: Operation not permitted

This is my permission setting in OmniOS
Code:
root@omnitest:/vdev1# /usr/bin/ls -V
total 227157508


-rw-r--r--+  1 root     root         781 Aug  9 11:44 Bonnie.log
              user:root:r-----a-R-c--s:------I:allow
                 owner@:rw-p--aARWcCos:-------:allow
                 group@:r-----a-R-c--s:-------:allow
              everyone@:r-----a-R-c--s:-------:allow


drwxrwxrwx+  5 root     root           5 Aug 15 08:26 containers
              user:root:rwxpdDaARWcCos:fd-----:allow
              everyone@:rwxpdDaARWc--s:fd-----:allow


-rw-r--r--+  1 root     root     116285440000 Aug 14 02:51 dd2.tst
              user:root:r-----a-R-c--s:------I:allow
                 owner@:rw-p--aARWcCos:-------:allow
                 group@:r-----a-R-c--s:-------:allow
              everyone@:r-----a-R-c--s:-------:allow


-rw-r--r--+  1 root     root        5214 Aug 14 03:24 iozone1g.log
              user:root:r-----a-R-c--s:------I:allow
                 owner@:rw-p--aARWcCos:-------:allow
                 group@:r-----a-R-c--s:-------:allow
              everyone@:r-----a-R-c--s:-------:allow


drwxrwxrwx+  7 root     root           7 Aug  9 11:59 proxmoxstore
              user:root:rwxpdDaARWcCos:fd-----:allow
              everyone@:rwxpdDaARWcCos:fd-----:allow
                 owner@:rwxpdDaARWcCos:fd-----:allow
           user:proxmox:rwxpdDaARWcCos:fd-----:allow
This is inside my 'containers' folder:
Code:
root@omnitest:/vdev1# /usr/bin/ls -V containers
total 6

drwxrwxrwx+  2 nobody   nobody         2 Aug 15 08:02 dump
              user:root:rwxpdDaARWcCos:fd-----:allow
              everyone@:rwxpdDaARWc--s:fd-----:allow

drwxrwxrwx+  4 nobody   nobody         4 Aug 16 02:51 private
              user:root:rwxpdDaARWcCos:fd-----:allow
              everyone@:rwxpdDaARWcCos:fd-----:allow
 
I see you haven't followed my advice?

The folders are owned by user nobody! This indicates that you are accessing OmniOS through a misconfigured NFSv4 client, or that you are using NFSv4 with the server and client belonging to different domains.
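If this were an NFSv4 id-mapping problem, one quick check is to compare the id-map domains on both ends: on OmniOS the domain is the nfsmapid_domain property shown by `sharectl get nfs`, and on a Debian/Proxmox client it is the Domain line in /etc/idmapd.conf. A minimal sketch for the client side (the helper name is made up):

```shell
# Hypothetical helper: extract the NFSv4 id-map domain from a Debian-style
# /etc/idmapd.conf. If this differs from the server's nfsmapid_domain,
# every remote id maps to nobody.
idmap_domain() {
    conf=${1:-/etc/idmapd.conf}
    awk -F'=' '/^[[:space:]]*Domain[[:space:]]*=/ {
        gsub(/[[:space:]]/, "", $2); print $2
    }' "$conf"
}
```

This only matters for NFSv4 mounts; with vers=3 (as used later in this thread) the nobody mapping comes from root squashing instead.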
 
No, nothing like that. I'm trying to follow your advice. There may be a step that is assumed, but not listed.

I am using NFSv3 because of:
Code:
root@proxmox:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 0


nfs: proxmoxNFS
        path /mnt/pve/proxmoxNFS
        server 192.168.0.198
        export /vdev1/proxmoxstore
        options vers=3,tcp
        content images,iso,vztmpl,rootdir,backup
        maxfiles 3


nfs: containerNFS
        path /mnt/pve/containerNFS
        server 192.168.0.198
        export /vdev1/containers
        options vers=3,tcp
        content rootdir
        maxfiles 1


nfs: containerstnd
        path /mnt/pve/containerstnd
        server 192.168.0.198
        export /vdev1/container2
        options vers=3,tcp
        content images,iso,vztmpl,rootdir,backup
        maxfiles 3
As you can see, the options say vers=3.

In OmniOS, I ran your two listed commands. However, they don't seem to change the ownership; they just add root to the ACL listing underneath.
How do I change the ownership?
I am using Napp-IT on top of OmniOS.
 
Still no luck. I have run the following commands:
Code:
root@omnitest:/vdev1# /usr/bin/chown -R root:root /vdev1/container2

root@omnitest:/vdev1# /usr/bin/chown -R root:root /vdev1/containers


root@omnitest:/vdev1# /usr/bin/chmod -R A=user:root:rwxpdDaARWcCos:fd-----:allow,everyone@:rwxpdDaARWc--s:fd-----:allow /vdev1/container2/private


root@omnitest:/vdev1# /usr/bin/chmod -R A=user:root:rwxpdDaARWcCos:fd-----:allow,everyone@:rwxpdDaARWc--s:fd-----:allow /vdev1/container2


root@omnitest:/vdev1# /usr/bin/chmod -R A=user:root:rwxpdDaARWcCos:fd-----:allow,everyone@:rwxpdDaARWc--s:fd-----:allow /vdev1/containers


root@omnitest:/vdev1# /usr/bin/chmod -R A=user:root:rwxpdDaARWcCos:fd-----:allow,everyone@:rwxpdDaARWc--s:fd-----:allow /vdev1/containers/private
This is the result
Code:
root@omnitest:/vdev1# /usr/bin/ls -V                                            
total 227157511

-rw-r--r--+  1 root     root         781 Aug  9 11:44 Bonnie.log
              user:root:r-----a-R-c--s:------I:allow
                 owner@:rw-p--aARWcCos:-------:allow
                 group@:r-----a-R-c--s:-------:allow
              everyone@:r-----a-R-c--s:-------:allow

drwxrwxrwx+  7 root     root           7 Aug 16 02:58 container2
              user:root:rwxpdDaARWcCos:fd-----:allow
              everyone@:rwxpdDaARWc--s:fd-----:allow

drwxrwxrwx+  5 root     root           5 Aug 15 08:26 containers
              user:root:rwxpdDaARWcCos:fd-----:allow
              everyone@:rwxpdDaARWc--s:fd-----:allow

-rw-r--r--+  1 root     root     116285440000 Aug 14 02:51 dd2.tst
              user:root:r-----a-R-c--s:------I:allow
                 owner@:rw-p--aARWcCos:-------:allow
                 group@:r-----a-R-c--s:-------:allow
              everyone@:r-----a-R-c--s:-------:allow

-rw-r--r--+  1 root     root        5214 Aug 14 03:24 iozone1g.log
              user:root:r-----a-R-c--s:------I:allow
                 owner@:rw-p--aARWcCos:-------:allow
                 group@:r-----a-R-c--s:-------:allow
              everyone@:r-----a-R-c--s:-------:allow

drwxrwxrwx+  7 root     root           7 Aug  9 11:59 proxmoxstore
              user:root:rwxpdDaARWcCos:fd-----:allow
              everyone@:rwxpdDaARWcCos:fd-----:allow
                 owner@:rwxpdDaARWcCos:fd-----:allow
           user:proxmox:rwxpdDaARWcCos:fd-----:allow
Code:
root@omnitest:/vdev1# /usr/bin/ls -V container2
total 12

drwxrwxrwx+  2 root     root           2 Aug 16 02:58 dump
              user:root:rwxpdDaARWcCos:fd-----:allow
              everyone@:rwxpdDaARWc--s:fd-----:allow

drwxrwxrwx+  2 root     root           2 Aug 16 02:58 images
              user:root:rwxpdDaARWcCos:fd-----:allow
              everyone@:rwxpdDaARWc--s:fd-----:allow

drwxrwxrwx+  2 root     root           2 Aug 16 03:57 private
              user:root:rwxpdDaARWcCos:fd-----:allow
              everyone@:rwxpdDaARWc--s:fd-----:allow

drwxrwxrwx+  4 root     root           4 Aug 16 02:58 template
              user:root:rwxpdDaARWcCos:fd-----:allow
              everyone@:rwxpdDaARWc--s:fd-----:allow
Code:
root@omnitest:/vdev1# /usr/bin/ls -V containers
total 6

drwxrwxrwx+  2 root     root           2 Aug 15 08:02 dump
              user:root:rwxpdDaARWcCos:fd-----:allow
              everyone@:rwxpdDaARWc--s:fd-----:allow

drwxrwxrwx+  4 root     root           4 Aug 16 03:54 private
              user:root:rwxpdDaARWcCos:fd-----:allow
              everyone@:rwxpdDaARWc--s:fd-----:allow
I believe I have taken your advice, unless there is some additional way to configure NFSv3 on OmniOS.
 
Also, when I create a new directory on the NFS share from the Proxmox terminal, it is created as nobody:nobody.
Code:
In Proxmox Terminal

root@proxmox:/mnt/pve/containerstnd/private# mkdir test
root@proxmox:/mnt/pve/containerstnd/private# dir
test
Code:
In OmniOS Terminal

root@omnitest:/vdev1/container2/private# /usr/bin/ls -V
total 3
drwxrwxrwx+  2 nobody   nobody         2 Aug 16 04:21 test
              user:root:rwxpdDaARWcCos:fd----I:allow
              everyone@:rwxpdDaARWc--s:fd----I:allow

How do I connect Proxmox as user=root??
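Since this is NFSv3, the nobody:nobody ownership points at root squashing on the server rather than id mapping. Here is a small sketch of a probe that could be run on the Proxmox node (the function name is made up; the default path is the one from this thread):

```shell
# Create a file on the share and compare its uid to our own. If root access
# is intact, the file is owned by uid 0; on a squashed mount it shows up as
# nobody (65534, or 4294967294 over NFSv3).
squash_probe() {
    share=${1:-/mnt/pve/container2/private}
    probe="$share/.squashprobe.$$"
    touch "$probe" || { echo "cannot write to $share"; return 1; }
    uid=$(ls -ln "$probe" | awk '{print $3}')   # third column is the numeric uid
    rm -f "$probe"
    if [ "$uid" = "$(id -u)" ]; then
        echo "not squashed (files created as uid $uid)"
    else
        echo "squashed (files created as uid $uid)"
    fi
}
```

Usage on the node would be `squash_probe /mnt/pve/container2/private`; if it reports "squashed", the fix belongs on the server's sharenfs options, not on the client.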
 
OmniOS
Code:
root@omnitest:/vdev1/container2/private# cat /etc/hosts
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
# Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# Internet host table
#
::1             localhost
127.0.0.1       localhost loghost
127.0.0.1       omnitest
127.0.0.1       omnitest
127.0.0.1       omnitest
127.0.0.1       omnitest
127.0.0.1       omnitest
127.0.0.1       omnitest
127.0.0.1       omnitest
127.0.0.1       omnitest

Proxmox
Code:
root@proxmox:/mnt/pve/containerstnd/private# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.0.200 proxmox.hcp2.net proxmox pvelocalhost


# The following lines are desirable for IPv6 capable hosts


::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

I have been messing around with "no_root_squash".
I have two datasets:
containers
container2

containers is
Code:
root@omnitest:/vdev1/container2/private# zfs get sharenfs /vdev1/containers
NAME              PROPERTY  VALUE     SOURCE
vdev1/containers  sharenfs  on        local

container2 is
Code:
root@omnitest:/vdev1/container2/private# zfs get sharenfs /vdev1/container2     
NAME              PROPERTY  VALUE                   SOURCE
vdev1/container2  sharenfs  rw,root=@192.168.0.200  local

I can write a file in container2 and change the permissions of that file. Example
Code:
root@proxmox:/mnt/pve/containerstnd/private# chown -R root test

Results:
root@omnitest:/vdev1/container2/private# /usr/bin/ls -V
total 9
drwxrwxrwx+  2 root     root           2 Aug 16 04:52 124.tmp
              user:root:rwxpdDaARWcCos:fd----I:allow
              everyone@:rwxpdDaARWc--s:fd----I:allow
drwxrwxrwx+  2 root     nobody         2 Aug 16 04:21 test
              user:root:rwxpdDaARWcCos:fd----I:allow
              everyone@:rwxpdDaARWc--s:fd----I:allow
Any thoughts?
 
Your /etc/hosts file on OmniOS is wrong; it must contain this (where IP_OF_OMNIOS is the server's public IP):

127.0.0.1 localhost loghost
IP_OF_OMNIOS omnitest

Also, the permissions on your share are wrong: you have forgotten to configure 'no_root_squash'.
Your configuration: vdev1/container2 sharenfs rw,root=@192.168.0.200 local

To have no_root_squash enabled, the following needs to be done:
vdev1/container2 sharenfs rw=@192.168.0.200,root=@192.168.0.200 local

From inside napp-it you should see the following when choosing the ACL extension, for all folders under 'containers':
Screenshot.png

The private folder should have this
Screenshot-1.png
 
Just seen this as well in the ../log/syslog
Code:
Aug 16 10:40:14 proxmox pvedaemon[2721]: <root@pam> starting task UPID:proxmox:00000AFE:000020E3:520E47DE:qmstart:100:root@pam:
Aug 16 10:40:38 proxmox pveproxy[2741]: WARNING: proxy detected vanished client connection
Aug 16 10:40:46 proxmox pvedaemon[2716]: WARNING: mount error: mount.nfs: /mnt/pve/container2 is busy or already mounted
Aug 16 10:40:52 proxmox kernel: device tap100i0 entered promiscuous mode
Aug 16 10:40:52 proxmox kernel: vmbr0: port 2(tap100i0) entering forwarding state
Aug 16 10:40:52 proxmox pvedaemon[2721]: <root@pam> end task UPID:proxmox:00000AFE:000020E3:520E47DE:qmstart:100:root@pam: OK
Aug 16 10:40:54 proxmox ntpd[2211]: Listen normally on 8 tap100i0 fe80::7ce3:8ff:fe75:6167 UDP 123
Aug 16 10:40:54 proxmox ntpd[2211]: peers refreshed
Aug 16 10:41:02 proxmox kernel: tap100i0: no IPv6 routers present
Aug 16 10:41:08 proxmox pveproxy[2742]: WARNING: proxy detected vanished client connection
Aug 16 10:41:11 proxmox pvestatd[2735]: WARNING: mount error: mount.nfs: /mnt/pve/proxmoxNFS is busy or already mounted
Aug 16 10:41:11 proxmox pvestatd[2735]: status update time (65.532 seconds)
Aug 16 10:41:16 proxmox pvedaemon[2718]: WARNING: mount error: mount.nfs: /mnt/pve/proxmoxNFS is busy or already mounted
Aug 16 10:41:36 proxmox pvedaemon[2716]: WARNING: mount error: mount.nfs: /mnt/pve/proxmoxNFS is busy or already mounted
Aug 16 10:42:09 proxmox pvedaemon[2721]: <root@pam> starting task UPID:proxmox:00000B5B:00004E17:520E4851:vzcreate:128:root@pam:
Aug 16 10:42:35 proxmox pvedaemon[2907]: command 'vzctl --skiplock create 128 --ostemplate /var/lib/vz/template/cache/centos-6-standard_6.3-1_i386.tar.gz --private /mnt/pve/container2/private/128' failed: exit code 48
Aug 16 10:42:35 proxmox pvedaemon[2721]: <root@pam> end task UPID:proxmox:00000B5B:00004E17:520E4851:vzcreate:128:root@pam: command 'vzctl --skiplock create 128 --ostemplate /var/lib/vz/template/cache/centos-6-standard_6.3-1_i386.tar.gz --private /mnt/pve/container2/private/128' failed: exit code 48
Aug 16 10:46:10 proxmox pvedaemon[2718]: <root@pam> successful auth for user 'root@pam'
Aug 16 10:53:34 proxmox pvedaemon[2718]: <root@pam> starting task UPID:proxmox:00000D5F:000159AF:520E4AFE:vzcreate:129:root@pam:
Aug 16 10:53:59 proxmox pvedaemon[3423]: command 'vzctl --skiplock create 129 --ostemplate /var/lib/vz/template/cache/centos-6-standard_6.3-1_i386.tar.gz --private /mnt/pve/container2/private/129' failed: exit code 48
Aug 16 10:53:59 proxmox pvedaemon[2718]: <root@pam> end task UPID:proxmox:00000D5F:000159AF:520E4AFE:vzcreate:129:root@pam: command 'vzctl --skiplock create 129 --ostemplate /var/lib/vz/template/cache/centos-6-standard_6.3-1_i386.tar.gz --private /mnt/pve/container2/private/129' failed: exit code 48

The one that stands out to me is the mount error: "Aug 16 10:40:46 proxmox pvedaemon[2716]: WARNING: mount error: mount.nfs: /mnt/pve/container2 is busy or already mounted"
 
Maybe I was a bit too hasty. This 'vdev1/container2 sharenfs rw,root=@192.168.0.200 local' is OK if the share should be rw to the world.

Btw, when you mount the share, is it done from 192.168.0.200, and does your OmniOS box also have an IP on this network?
 
My Proxmox is 192.168.0.200 (Main Cluster)
My Zotacprox is 192.168.0.201
My OmniOs is 192.168.0.198

I have changed my OmniOS /etc/hosts
Code:
root@omnitest:~# cat /etc/hosts
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
# Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# Internet host table
#
::1             localhost
127.0.0.1       localhost loghost
192.168.0.198   omnitest
127.0.0.1       omnitest
127.0.0.1       omnitest
127.0.0.1       omnitest
127.0.0.1       omnitest
127.0.0.1       omnitest
127.0.0.1       omnitest
127.0.0.1       omnitest
127.0.0.1       omnitest

I have two datasets that I am testing with
/vdev1/containers
/vdev1/container2
container2.jpg
containers.jpg

/vdev1/container2 has the setting for
vdev1/container2 sharenfs rw=@192.168.0.200,root=@192.168.0.200 local

/vdev1/containers has the setting for
vdev1/containers sharenfs on local

The NFS shares are mounted through the Proxmox WebGUI's "Add Storage", located on 192.168.0.200.

Neither dataset allows me to use OpenVZ on NFS.
I am, however, able to write a file on "container2" through the PuTTY command line and change the ownership of that file.
 
Here is one fatal error: the share needs to be accessible from all nodes in your cluster, since all nodes will mount the share. Change your permissions to
vdev1/container2 sharenfs rw=@192.168.0.0/24,root=@192.168.0.0/24
 
You should also unmount the share everywhere when you change permissions, so on every Proxmox node: umount 192.168.0.198/vdev1/container2

Mounting again is not needed, since Proxmox auto-mounts the share.
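Note that umount takes either the local mountpoint or the server:/export form with a colon. A sketch for each node (mountpoint taken from this thread; the helper name is made up):

```shell
# Unmount the share so the changed sharenfs options take effect on the next
# automatic mount; pvestatd remounts Proxmox storages by itself shortly after.
remount_share() {
    mp=${1:-/mnt/pve/container2}
    umount "$mp" 2>/dev/null || umount -l "$mp"   # fall back to a lazy unmount if busy
}
```

Usage would be `remount_share /mnt/pve/container2` on each node after changing the export.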
 
Code:
root@omnitest:/vdev1/container2/private# zfs set sharenfs='rw=@192.168.0.0/24,root=@192.168.0.0/24' vdev1/container2
root@omnitest:/vdev1/container2/private# zfs get sharenfs vdev1/container2      
NAME                PROPERTY  VALUE                                                   SOURCE
vdev1/container2  sharenfs  rw=@192.168.0.0/24,root=@192.168.0.0/24  local
I removed the storage in Proxmox and added it again through the GUI.
Same error.
Code:
Creating container private area (/var/lib/vz/template/cache/centos-6-standard_6.3-1_i386.tar.gz)
chmod: changing permissions of `/mnt/pve/container2/private/132.tmp': Operation not permitted
tar: ./usr/lib/libdns.so.81.4.1: Cannot open: Operation not permitted
tar: ./usr/lib/libtiff.so.3.9.4: Cannot open: Operation not permitted
But I can still mkdir and change permissions on the command line:
Code:
root@proxmox:/mnt/pve/container2/private# dir
124.tmp  test  test2


root@proxmox:/mnt/pve/container2/private# mkdir testchown
root@proxmox:/mnt/pve/container2/private# chown -R 0:0 testchown
root@proxmox:/mnt/pve/container2/private# ls -l


total 6
drwxrwxrwx 2 root root       2 Aug 16 04:52 124.tmp
drwxrwxrwx 2 root 4294967294 2 Aug 16 04:21 test
drwxrwxrwx 2 root root       2 Aug 16 04:47 test2
drwxrwxrwx 2 root root       2 Aug 16 06:27 testchown
 
umount 192.168.0.198/vdev1/container2
Code:
root@proxmox:/mnt/pve/container2/private# umount 192.168.0.198/vdev1/container2
umount: 192.168.0.198/vdev1/container2: not found
root@proxmox:/mnt/pve/container2/private# cd /

root@proxmox:/# umount /mnt/pve/container2
However, I cannot select a storage when creating a CT.
I have removed and added container2 again through the GUI. Everything seems to be back to what it was before.
 
I cannot see anything wrong, except for the fact that your OmniOS, for reasons unknown to me, does not recognize either that the Proxmox node belongs to 192.168.0.0/24 or the user root from that server. Are you certain that you don't have an alternative route to your OmniOS?

What does route -n show?

What about traceroute 192.168.0.198?

Are there firewall rules blocking NFS traffic?

Or a managed switch which is performing routing?
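To run those checks in one go on the Proxmox node, something like this might help (commands wrapped in a throwaway function so they can also be run one at a time; showmount and rpcinfo come from the nfs-common and rpcbind packages):

```shell
# Diagnostics for a flaky NFS export (server IP taken from this thread).
nfs_path_checks() {
    server=${1:-192.168.0.198}
    ip route get "$server"       # which interface/gateway the client really uses
    showmount -e "$server"       # exports plus the client list the server enforces
    rpcinfo -p "$server"         # portmapper/mountd/nfs reachable, i.e. no firewall in the way
}
```

If showmount -e does not list the Proxmox node (or subnet) next to the export, the rw=/root= restriction is the problem rather than routing.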
 
I can't upload iso images to my NFS either.
Does this give any clue?
Code:
starting file import from: /var/tmp/pveupload-5d1499ab1f8c41f6b6d5b17f26a93783
target node: proxmox
target file: /mnt/pve/container2/template/iso/CentOS-6.4-x86_64-minimal.iso
file size is: 358959104
command: cp /var/tmp/pveupload-5d1499ab1f8c41f6b6d5b17f26a93783 /mnt/pve/container2/template/iso/CentOS-6.4-x86_64-minimal.iso
TASK ERROR: import failed: cp: cannot create regular file `/mnt/pve/container2/template/iso/CentOS-6.4-x86_64-minimal.iso': Operation not permitted
 
