[SOLVED] CT creation on NFS Share

Raymond Burns, Member, Apr 2, 2013, Houston, Texas, United States
I'm at a loss again. I thought this was resolved, but it is not.

pveversion -v:
Code:
root@zwtprox1:/mnt/pve/proxCT# pveversion -v
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-22-pve
proxmox-ve-2.6.32: 3.0-107
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-22-pve: 2.6.32-107
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-23
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1
Whenever I try to create a CT on an NFS share, I get a chmod error:
Code:
Creating container private area (/mnt/pve/proxCT/template/cache/centos-6-standard_6.3-1_i386.tar.gz)
chmod: changing permissions of `/mnt/pve/proxCT/private/108.tmp': Operation not permitted
tar: ./usr/lib/libdns.so.81.4.1: Cannot open: Operation not permitted
tar: ./usr/lib/libtiff.so.3.9.4: Cannot open: Operation not permitted
tar: ./usr/lib/libnsssysinit.so: Cannot open: Operation not permitted
tar: ./usr/lib/libisccfg.so.82.0.1: Cannot open: Operation not permitted
tar: ./usr/lib/libffi.so.5.0.6: Cannot open: Operation not permitted
tar: ./usr/lib/libgnutlsxx.so.26.14.12: Cannot open: Operation not permitted
tar: ./usr/lib/libkdb5.so.5.0: Cannot open: Operation not permitted
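The same chmod behavior can be tested outside of vzctl (a quick check, using the mount path from the log above):
Code:
# as root on the Proxmox node
touch /mnt/pve/proxCT/private/test.tmp
chmod 700 /mnt/pve/proxCT/private/test.tmp
rm -f /mnt/pve/proxCT/private/test.tmp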
Attached are the settings for my NFS share (share name = "proxCT").

The network is 192.168.222.0/24.
The NFS server is 192.168.222.11.
Proxmox is 192.168.222.10.
I can successfully create KVM guests on the share using /vdev1/proxKVM.

The ownership of all new files created through SSH is "root".
"no_root_squash" has been enabled for /vdev1/proxCT:
'sharenfs = rw=@192.168.222.0/24,root=@192.168.222.0/24 vdev1/proxCT'
I CAN download template files to this NFS share, whereas I couldn't earlier during testing.
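For reference, the export can be double-checked from both ends (a sketch; showmount comes with the standard NFS client tools):
Code:
# on the NAS: confirm how the dataset is exported
zfs get sharenfs vdev1/proxCT
# on the Proxmox node: confirm the export is visible
showmount -e 192.168.222.11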
(Attachments: Capture.PNG, Capture2.PNG, Capture3.PNG, Capture4.PNG)
 
Also, locally created CTs do not have the option to migrate their disk.
I was trying to create locally and migrate to NFS.
Is there something special that needs to be done to enable CTs on NFS shares? I have tried OmniOS/Napp-IT and Openfiler. Both have 'no_root_squash' settings.
 
In your private folder, the group everyone@ also needs to have the full_set permission; see the sketch below. After you have done this, you need to delete everything under the private folder. When you then create a new CT, it should work.
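A minimal sketch of granting that permission on the OmniOS side, assuming the illumos chmod ACL syntax and the dataset path from this thread:
Code:
# grant everyone@ the full permission set, inherited by new files (f) and directories (d)
/usr/bin/chmod A=everyone@:full_set:fd:allow /vdev1/proxCT/private
# verify the resulting ACL
/usr/bin/ls -Vd /vdev1/proxCT/private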
 
I have set the proper permissions and I am still faced with the same issue. It is still unable to create a CT.
(Attachment: Capture5.PNG)
However, now I'm wondering if it is best to do this, since there is no storage migration with OpenVZ. Any suggestions? KVM works perfectly!
 
I can't see anything wrong. Could you post a screendump of your ZFS filesystems, as presented by clicking 'ZFS Filesystems' in the main menu?
 
However, now I'm wondering if it is best to do this, since there is no storage migration with OpenVZ. Any suggestions? KVM works perfectly!
You can 'storage migrate', but only from the command line on the node, using cpio. E.g.:
Code:
find path/ -depth -print | cpio -pamVd /new/parent/dir

Given the following on a Proxmox node:
/mnt/pve/nfs1/private (containing folders ct1, ct2)
/mnt/pve/nfs2/private (containing folder ct3)

If you want to 'storage migrate' ct1 from nfs1 to nfs2:
1) shutdown ct1
2) cd /mnt/pve/nfs1/private
3) find ct1/ -depth -print | cpio -pamVd /mnt/pve/nfs2/private
4) nano /etc/pve/openvz/ct1.conf
5) change the line 'VE_PRIVATE=/mnt/pve/nfs1/private/ct1' to 'VE_PRIVATE=/mnt/pve/nfs2/private/ct1'
6) start ct1
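For reference, the cpio flags used above: -p selects pass-through (copy) mode, -a resets the access times of the source files, -m preserves modification times, -V prints a dot per file copied, and -d creates leading directories as needed.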
 
Here are the items, screenshots, code, and some additional information.
Note: I set no_root_squash through the Napp-IT GUI.
Code:
root@zwtOmni01:~# zfs get all /vdev1/proxCT
NAME PROPERTY VALUE SOURCE
vdev1/proxCT type filesystem -
vdev1/proxCT creation Fri Aug 23 15:00 2013 -
vdev1/proxCT used 581M -
vdev1/proxCT available 2.43T -
vdev1/proxCT referenced 581M -
vdev1/proxCT compressratio 1.00x -
vdev1/proxCT mounted yes -
vdev1/proxCT quota none default
vdev1/proxCT reservation none default
vdev1/proxCT recordsize 128K default
vdev1/proxCT mountpoint /vdev1/proxCT default
vdev1/proxCT sharenfs rw=@192.168.222.0/24,root=@192.168.222.0/24 local
vdev1/proxCT checksum on default
vdev1/proxCT compression off default
vdev1/proxCT atime off local
vdev1/proxCT devices on default
vdev1/proxCT exec on default
vdev1/proxCT setuid on default
vdev1/proxCT readonly off default
vdev1/proxCT zoned off default
vdev1/proxCT snapdir hidden local
vdev1/proxCT aclmode restricted local
vdev1/proxCT aclinherit passthrough local
vdev1/proxCT canmount on default
vdev1/proxCT xattr on default
vdev1/proxCT copies 1 default
vdev1/proxCT version 5 -
vdev1/proxCT utf8only on -
vdev1/proxCT normalization formD -
vdev1/proxCT casesensitivity insensitive -
vdev1/proxCT vscan off default
vdev1/proxCT nbmand on local
vdev1/proxCT sharesmb name=proxCT local
vdev1/proxCT refquota none default
vdev1/proxCT refreservation none default
vdev1/proxCT primarycache all default
vdev1/proxCT secondarycache all default
vdev1/proxCT usedbysnapshots 0 -
vdev1/proxCT usedbydataset 581M -
vdev1/proxCT usedbychildren 0 -
vdev1/proxCT usedbyrefreservation 0 -
vdev1/proxCT logbias latency default
vdev1/proxCT dedup off default
vdev1/proxCT mlslabel none default
vdev1/proxCT sync disabled local
vdev1/proxCT refcompressratio 1.00x -
vdev1/proxCT written 581M -
vdev1/proxCT logicalused 581M -
vdev1/proxCT logicalreferenced 581M -
(Attachments: screendump.PNG, screendump2.PNG, screen3.PNG, screen4.PNG, screen5.PNG)
 
I think I have found your problem(s):

Code:
You have:
vdev1/proxCT aclmode             restricted                                local
vdev1/proxCT aclinherit            passthrough                           local

I have:
vMotion/nfs  aclmode               passthrough                           local
vMotion/nfs  aclinherit              passthrough-x                        local
restricted – For new objects, the write_owner and write_acl permissions are removed when an ACL entry is inherited. This means that full_set is reverted to write_set when objects are created (changing the owner and permissions of objects is prohibited).

passthrough – When property value is set to passthrough, files are created with a mode determined by the inheritable ACEs. If no inheritable ACEs exist that affect the mode, then the mode is set in accordance to the requested mode from the application.
passthrough-x – Has the same semantics as passthrough, except that when passthrough-x is enabled, files are created with the execute (x) permission, but only if execute permission is set in the file creation mode and in an inheritable ACE that affects the mode.


To fix:
Code:
zfs set aclmode=passthrough vdev1/proxCT
zfs set aclinherit=passthrough-x vdev1/proxCT
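Afterwards the change can be verified with:
Code:
zfs get aclmode,aclinherit vdev1/proxCT
Note that existing files keep their old ACLs, which is why everything under private has to be recreated, as mentioned above.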
 
You don't have to buy me anything; I am just glad to help you discover the full strength of OmniOS. If you have another computer with Linux available, configure NFS on it and compare the performance. You will be surprised! iometer, iozone, and fio are excellent candidates for performance measurements.
 
If you want to 'storage migrate' ct1 from nfs1 to nfs2:
1) shutdown ct1
2) cd /mnt/pve/nfs1/private
3) find ct1/ -depth -print | cpio -pamVd /mnt/pve/nfs2/private
4) nano /etc/pve/openvz/ct1.conf
5) change the line 'VE_PRIVATE=/mnt/pve/nfs1/private/ct1' to 'VE_PRIVATE=/mnt/pve/nfs2/private/ct1'
6) start ct1
When I migrate this way, the new CT reports the full disk capacity of the NAS. It no longer follows the limits set in the Proxmox VE GUI. When I run df -h in my new CentOS container, it shows 2.7 TB and not the 4 GB I set.
 
What does your CT's conf file have here:

Code:
# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="4G:4613734"
DISKINODES="800000:880000"
QUOTATIME="0"
QUOTAUGIDLIMIT="0"
DISK_QUOTA="yes"

If DISK_QUOTA="no" or the quota lines are missing, then you have your answer; see the sketch below for re-applying them.
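If the lines are missing, the limits can be re-applied with vzctl (a sketch, assuming CTID 108 from the log earlier in the thread; DISK_QUOTA itself is set in the CT's conf file or globally in /etc/vz/vz.conf):
Code:
# set the per-CT disk quota (softlimit:hardlimit, matching the conf above)
vzctl set 108 --diskspace 4G:4613734 --save
vzctl set 108 --diskinodes 800000:880000 --save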
 
