cPanel Disk Quotas for LXC - need help

Mario Aldayuz

Feb 9, 2016
I am running CentOS 6 inside an LXC container with cPanel installed and cannot get user/account quotas to work properly. Is there a fix for this? Are they supported in LXC containers? Is anyone else running cPanel inside an LXC container on Proxmox?
 
In the GUI, go to Container / Resources / Root Disk -> select "Enable Quota", then stop/start the container.
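
The same should also work from the CLI - a rough, untested sketch assuming container 101 and the rootfs volume from the config posted later in this thread; quota=1 is the mount option the GUI checkbox sets, as far as I can tell:

Code:
# hypothetical CLI equivalent of the GUI steps above
pct set 101 -rootfs lvm1:vm-101-disk-1,size=150G,quota=1
pct stop 101 && pct start 101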
 
Did you clear your browser cache after updating? Maybe your browser still uses the old version of the JS client where the edit button was disabled for container disks?
 
I did. Is this specific to a type of file system or storage? I have tried multiple browsers and cleared the cache in all of them.

I am running a 3 node cluster, all nodes have been updated and restarted.
 
This is the output of my mounts:

Code:
root@ct101 [~]# mount
/dev/mapper/lvgroup1-vm--101--disk--1 on / type ext4 (rw,relatime,stripe=32,data=ordered)
none on /dev type tmpfs (rw,relatime,size=100k,mode=755)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/sys/net type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/sys type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sysrq-trigger type proc (ro,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
sysfs on /sys/devices/virtual/net type sysfs (rw,relatime)
sysfs on /sys/devices/virtual/net type sysfs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
cgroup on /sys/fs/cgroup type tmpfs (rw,relatime,size=12k,mode=755)
tmpfs on /sys/fs/cgroup/cgmanager type tmpfs (rw,mode=755)
lxcfs on /proc/cpuinfo type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/diskstats type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/meminfo type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/stat type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/swaps type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/uptime type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/blkio type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/cpu,cpuacct type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/cpuset type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/devices type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/freezer type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/hugetlb type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/memory type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/systemd type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/net_cls,net_prio type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/perf_event type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
devpts on /dev/lxc/console type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
devpts on /dev/lxc/tty1 type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
devpts on /dev/lxc/tty2 type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,relatime)
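
I notice the root mount above shows no usrquota/grpquota options - presumably those should appear once quotas are active. For reference, a quick check inside the container:

Code:
# the root mount should list usrquota,grpquota once quotas are active
mount | grep ' on / type '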
 
What package versions are you on now, and what storage do you use for the disks you want to enable quotas on? The container config would be useful too (pct config $vmid).
 
The newest package versions (as of this morning, 8 AM EST).

The option now appears - it can be enabled for all storage types, but the container errors out when using LVM over iSCSI.

Code:
root@node2:~# pct config 101
arch: amd64
cpulimit: 4
cpuunits: 1024
hostname: ct101.imws3.com
memory: 20480
net0: bridge=vmbr0,gw=XXX.XXX.XXX.XX,hwaddr=62:63:61:61:34:30,ip=XXX.XXX.XXX.XX/XX,name=eth0,type=veth
onboot: 1
ostype: centos
rootfs: lvm1:vm-101-disk-1,size=150G
swap: 2048

It does work when using local storage. What storage types will work with LXC quotas, and is there an easy way to migrate to other disks/storage types?
 
Code:
root@node3:~# lxc-start -n 104 -F
mknod: '/usr/lib/x86_64-linux-gnu/lxc/rootfs/dev/lvgroup1/vm-104-disk-1': No such file or directory
command 'mknod -m 666 /usr/lib/x86_64-linux-gnu/lxc/rootfs/dev/lvgroup1/vm-104-disk-1 b 252 3' failed: exit code 1
lxc-start: conf.c: run_buffer: 342 Script exited with status 1
lxc-start: conf.c: lxc_setup: 3956 failed to run autodev hooks for container '104'.
lxc-start: start.c: do_start: 736 failed to setup the container
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 2
lxc-start: start.c: __lxc_start: 1211 failed to spawn '104'
can't deactivate LV '/dev/lvgroup1/vm-104-disk-1':   Logical volume lvgroup1/vm-104-disk-1 contains a filesystem in use.
volume deativation failed: lvm1:vm-104-disk-1 at /usr/share/perl5/PVE/Storage.pm line 919.
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.
 
Storage types: with the next updates, all ext4-based storage types except passed-through /dev nodes (a patch for that is already waiting) should™ work. If I'm not mistaken, that should be everything except bind mounts, ZFS subvolumes, or when using `size=0`.
As for migrating storage: not yet, but soon - that part is overdue by now, sorry...
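In the meantime, a rough manual workaround (an untested sketch - the storage names here are just examples) would be to back up and restore onto a different storage:

Code:
# back up the container, then restore it onto another storage
vzdump 101 --compress lzo --storage local
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-<timestamp>.tar.lzo --storage local2 --force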
About the last error: right, the updates to the autodev hook didn't make it into the last package. This will be fixed with pve-container >= 1.0-54, where LVM devices will appear the usual way, with /dev/dm-* as the node and the usual named symlinks in /dev/mapper/.
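Once that package is out, the installed version can be checked on the node with:

Code:
pveversion -v | grep pve-container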
 
Hello, I've restored an OpenVZ container vzdump on Proxmox 4 with LXC.
It's a CentOS container with cPanel.
The quota option is enabled.
But I get this error:

Code:
edquota: Cannot open quotafile //aquota.user: No such file or directory
edquota: Cannot open quotafile /backup/aquota.user: No such file or directory

And all cPanel accounts show that they are using 0M.

Any idea?
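
One thing that may be worth trying (assuming the container's root filesystem is actually mounted with usrquota/grpquota): recreate the missing aquota.user files with quotacheck, or use cPanel's own repair script - an untested sketch:

Code:
# inside the container, as root
quotacheck -cugm /      # create aquota.user/aquota.group on /
quotaon -a              # enable quotas
# alternatively, cPanel ships its own quota repair script:
/scripts/fixquotas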
 
