LXC containers have extended permissions - ACL by default???

Also, I just created an LXC container based on this same template on Proxmox 4.0 and it's working perfectly.

Here's the info on the working Proxmox server.
root@ProxmoxNJ1:~# pveversion -v
proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-48 (running version: 4.0-48/0d8559d0)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-22
qemu-server: 4.0-30
pve-firmware: 1.1-7
libpve-common-perl: 4.0-29
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-25
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-9
pve-container: 1.0-6
pve-firewall: 2.0-12
pve-ha-manager: 1.0-9
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie
 
Here you go.

root@proxmoxnj2:~# cat /etc/pve/lxc/100.conf
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: mcintirere.hivepbx.com
memory: 512
net0: bridge=vmbr0,gw=10.3.0.1,hwaddr=36:35:61:63:30:39,ip=10.3.0.50/21,name=eth0,type=veth
onboot: 0
ostype: centos
rootfs: proxmoxnj:vm-100-disk-1,size=8G
swap: 1024

The rootfs does not have the "acl=0" option set, so naturally the ACLs are not disabled.
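A quick way to check whether ACLs are actually active inside the container is to try setting one on a scratch file (just a sketch; any file will do). If the filesystem were mounted without ACL support, the setfacl call would fail with "Operation not supported":
Code:
# touch /tmp/acltest
# setfacl -m u:nobody:r /tmp/acltest && echo "ACLs are enabled"
# rm /tmp/acltest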

Earlier on you wrote:

axion.joey said:
I added acl=0 in the container's config file like this:
rootfs: proxmoxnj:vm-133-disk-1,size=8G, acl=0

which I just realized contains a small but very important error: you put a space before acl=0, so the option was dropped (the mountpoint settings must not contain unquoted spaces).

Editing the configuration file manually is something that should only be done when there is no other option. In this case I would suggest setting the ACL option via the GUI, or replacing the rootfs with "pct set 100 -rootfs proxmoxnj:vm-133-disk-1,size=8G,acl=0". All configuration updates within Proxmox take care not to run into race conditions; if you edit the configuration manually, you might break something.

also, "pct set" warns you when you get the syntax wrong:
Code:
# pct set 100 -rootfs tank:subvol-100-disk-1,size=32G, acl=0                                                                                                                    
400 too many arguments
pct set <vmid> [OPTIONS]
# pct set 100 -rootfs tank:subvol-100-disk-1,size=32G,acl=0
#
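You can then verify that the option was stored correctly (example output, using the same hypothetical storage name as above):
Code:
# pct config 100 | grep rootfs
rootfs: tank:subvol-100-disk-1,size=32G,acl=0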
 
Thank you so much for the detailed response. I just tried setting acl=0 in the config file, but I'm still experiencing the same errors. Is this something that had to be set when the container was originally restored from a backup?

Also, I don't see where I can set that in the GUI.
 
Hi Fabian,

I just created another container from a backup. I couldn't see where to set the acl option in the GUI, so I edited it in the config file before turning the container on, and I'm still having the same problem. We have maintenance subscriptions on all of our servers. Is it possible for you or someone else to connect to the server via SSH and take a look at this?
 
Yes, of course you can open a commercial support ticket for this issue and we can take a look at it.
 
I opened a ticket yesterday. I got a few emails from Friedrich last night, but haven't heard from anyone in 24 hours. Here's the ticket number: IRW-893-90850
 
axion.joey said:
I opened a ticket yesterday. I got a few emails from Friedrich last night, but haven't heard from anyone in 24 hours. Here's the ticket number: IRW-893-90850

Please open a new ticket if the issue is not related, but note that SSH access is only included with standard and premium subscriptions.

Can you post the output (pveversion -v, container config, mount) again now (hopefully without typos ;))?

Also note that if you use ZFS, the ACL settings need to be set separately using "zfs set" (a fix for this is already in git, but not yet released in the repositories).
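As a sketch of what that would look like (the dataset name tank/subvol-100-disk-1 is just an example; substitute your own pool and subvolume):
Code:
# zfs set acltype=off tank/subvol-100-disk-1
# zfs get acltype tank/subvol-100-disk-1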
There are several other ACL-related bugfixes that will be released soon: one concerning tar restoring ACLs wrongly, and another for not restoring the acl option of the rootfs when restoring from a backup: https://bugzilla.proxmox.com/show_bug.cgi?id=928 and https://bugzilla.proxmox.com/show_bug.cgi?id=942
 
Hi Fabian,

Running setfacl -b -R / then rebooting the container fixed the issue. Thanks for pointing out the bugs.
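For anyone else hitting this, these are the commands I ran inside the container (setfacl -b removes all extended ACL entries, -R applies it recursively):
Code:
# setfacl -b -R /
# reboot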
 
