cPanel Disk Quotas for LXC - need help

None of the instructions for this work:

https://pve.proxmox.com/wiki/Linux_Container#_using_quotas_inside_containers

I cannot turn on quotas in the GUI even with the container stopped; the option is greyed out. (I am using an ext4 filesystem on a zvol.)

Near the bottom of this bug report: https://bugzilla.proxmox.com/show_bug.cgi?id=782 it says to add this line to /etc/pve/lxc/101.conf:

Code:
lxc.rootfs.options: usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0

but that gives:

vm 202 - lxc.rootfs.options: lxc.rootfs.options is not supported, please use mount point options in the "rootfs" key

Taking out "lxc." and leaving just "rootfs.options:" gives:
vm 202 - unable to parse config: rootfs.options: usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0

Putting those options into the "rootfs:" line instead:

Code:
rootfs: local-zfs:subvol-202-disk-1,size=80G,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0

gives: vm 202 - unable to parse value of 'rootfs' - format error

I do have lxc.apparmor.profile: unconfined set, which parses OK.

This is pve-manager/5.2-1/0fcd7879 (running kernel: 4.15.17-1-pve).



 
rootfs: local-zfs:subvol-202-disk-1,size=80G,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0
- This is a subvolume, not an ext4 image on a zvol, thus
- quotas are not supported there, and
- you can't just write mount command-line options into PVE configuration files like that; there's a reason they have their own format.

Please read the bugzilla entry to the end; there's only one more post after the one you tried ;-)
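
For illustration, PVE takes quota as a mount point option in its own rootfs/mp format; a minimal sketch using pct (the volume name is taken from the post above, and quota=1 only takes effect on a filesystem that supports quotas, e.g. ext4):

Code:
# sketch: set mount point options via pct rather than raw lxc.* keys
pct set 202 --rootfs local-zfs:subvol-202-disk-1,size=80G,quota=1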
 
OK, thanks.

And actually I didn't paste my latest config, which was,

per df:

Code:
/dev/zd16 76G 5.1G 67G 8% /rpool/data/subvol-202-disk-1

So it's just named like a subvolume but is actually on a /dev/zd* device.

As for writing entries by hand: the GUI has quota=1 greyed out, and I don't know why, so I can't use the GUI.

However, with quota=1 it still doesn't work, of course:

Code:
quotacheck: Mountpoint (or device) / not found or has no quota enabled.
quotacheck: Cannot find filesystem to check or filesystem not mounted with quota option.
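
One way to see why quotacheck complains is to check whether / is actually mounted with quota options inside the container (a quick check, not from the original post):

Code:
grep ' / ' /proc/mounts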

Can this be run for / in an LXC container, or does it have to be another filesystem?
 
Figured it out. Here's how:

My container has a zvol on /dev/zd16:

Code:
/dev/zd16                          76G  5.2G   67G   8% /rpool/data/subvol-202-disk-1

I added some LXC permissions to all containers (since I'm just running cPanel on this node).

Since zd16 is:

Code:
brw-rw---- 1 root disk 230, 16 Nov 22 11:07 /dev/zd16
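
The "230, 16" in that listing are the block device's major and minor numbers. One way to read them directly (a sketch; stat prints them in hexadecimal):

Code:
stat -c 'major 0x%t minor 0x%T' /dev/zd16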

we need to allow the container access to that device:

Code:
$ cat /usr/share/lxc/config/common.conf.d/02-cpanel.conf
lxc.apparmor.profile = lxc-container-default-with-mounting
lxc.cgroup.devices.allow = b 230:16 rwm

Add a mount hook file to ensure the device node is created in the container at boot:

Code:
$ cat /var/lib/lxc/202/mount-hook.sh
#!/bin/sh
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/zd16 b 230 16

(make sure to chmod +x the file so it's executable)
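
That is:

Code:
chmod +x /var/lib/lxc/202/mount-hook.sh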

I added these lines to /etc/pve/lxc/202.conf:

Code:
lxc.apparmor.profile: unconfined
lxc.hook.autodev: /var/lib/lxc/202/mount-hook.sh

and added ,quota=1 to the rootfs line so it looks like:

Code:
rootfs: local-zfs:subvol-202-disk-1,size=80G,quota=1

And now it works, once I entered the container and ran:

Code:
quotacheck -cmug /
quotaon /
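
From there, quotas can be assigned and verified as usual; a hypothetical example (the username and limits are placeholders, not from the original post):

Code:
# soft limit ~1 GB, hard limit ~1.2 GB (in 1K blocks) for one user on /
setquota -u someuser 1000000 1200000 0 0 /
# report current usage and limits
repquota /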

There are probably extraneous/unnecessary elements here, or it could be done more efficiently; I'm sure the staff will have some pointers.

I used some hints from https://forum.proxmox.com/threads/lxc-cannot-assign-a-block-device-to-container.23256/ for this.
 
Hey bro, I can't fix this with my VPS. Can you help me? I'm stuck on quota.
 
Need more details: did you move your VPS container to an ext4 partition on a zvol? Creating zvols, mounting them, and copying to them is general Linux/ZFS, not specific to Proxmox. There's lots of help on Stack Exchange or in the Oracle ZFS docs on how; a rough sketch is below.
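
As a rough outline only (names and sizes are placeholders, and moving an existing container also involves copying its data and updating its config):

Code:
# create an 80G zvol, format it ext4, and mount it somewhere temporary
zfs create -V 80G rpool/data/vol-example
mkfs.ext4 /dev/zvol/rpool/data/vol-example
mount /dev/zvol/rpool/data/vol-example /mnt/newroot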
 
Some more helpful details: I guess I hadn't rebooted since tuning, and /dev/zd## devices can renumber randomly if you've created/removed other zvols. At any rate, for whatever reason, they changed on me.

So instead of using, e.g., rootfs: /dev/zd16 in your lxc/$CTID.conf options, you should use the stable path, which is a symlink pointing (on this particular boot) to /dev/zd16 (but could change):

Code:
rootfs: /dev/zvol/rpool/data/subvol-202-disk-1,size=80G,quota=1
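
To see which /dev/zd## device the symlink currently points at (a quick check, not from the original post):

Code:
ls -l /dev/zvol/rpool/data/subvol-202-disk-1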

Now, however, I've upgraded recently and some scripts or something have changed, because this obviates the need for the mount-hook.sh script entirely. Comment it out; it's not needed.

Took a damn long while to figure that out!

This was my hint:

Code:
lxc-start 202 20190529001213.256 DEBUG conf - conf.c:run_buffer:326 - Script exec /var/lib/lxc/202/mount-hook.sh 202 lxc autodev with output: mknod: /usr/lib/x86_64-linux-gnu/lxc/rootfs/dev/zd16: File exists

It already existed, so I didn't need to create it or give permission.


BTW, a hint: if it says

Code:
 Could not generate persistent MAC address for vethO9C50D: No such file or directory

then you might actually have a rootfs problem, not a networking/cgroup/namespace problem.
 
Update: this of course doesn't dynamically generate the

lxc.cgroup.devices.allow = b 230:16 rwm

entry, which should extend to all 230:* device nodes. If you have a trusted environment, you could add entries for as many volumes as you think you'll ever need (i.e. :32, :48, :64, and so on; they seem to number by 16s), but keep in mind that risks one container possibly accessing another's disk if it's not trusted.

Is there a dynamic way to include this in the pve/lxc/*.conf file instead?
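
One possibility (untested here; cgroup v1 device rules accept a wildcard minor, with the same trust caveat as above):

Code:
# in /etc/pve/lxc/202.conf - allow every zvol minor at once
lxc.cgroup.devices.allow: b 230:* rwm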
 
The solution is for ZFS to support quotas in LXC, but it can't yet, apparently.
 
Actually, formatting the zvol with ext4 does give you quotas in LXC. It's not the most efficient way to use ZFS, as you have two filesystems in action, but it does the job.
 
Yes, this does work, of course, and I've detailed how I got it working with cPanel in other threads. However, it does not give the same visibility into the filesystem that ZFS would for other purposes like backups (e.g. checking snapshots for a file), as you need to remount the ext4 in loopback each time. Additionally, you get none of the benefits of ZFS's caching, other algorithms, and advanced filesystem management with ext4; instead you have a filesystem ignorant of the disks underneath, making bad decisions about allocation, etc. But yes, it does work. A sketch of the snapshot-inspection workaround is below.
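
For reference, checking a snapshot for a file then looks roughly like this (a sketch; the dataset name and mount point are placeholders, and snapdev=visible is needed for the snapshot's device node to appear):

Code:
zfs snapshot rpool/data/vol-example@before
zfs set snapdev=visible rpool/data/vol-example
mount -o ro /dev/zvol/rpool/data/vol-example@before /mnt/snap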
 
"a filesystem ignorant of the disks underneath, making bad decisions about allocation" ... Do you think those negative aspects of formatting a zvol w/ ext4 can eventually lead to hard disk and filesystem errors?
 
No, the hardware will not fail any faster, and there will be no errors; the issue is performance. It is possible to run into a corner case of pathologically bad allocation patterns that have both ext4 and the underlying zvol fighting and producing a huge slowdown, but the chance may be small; this would have to be analyzed more in depth to be a true worry. Stress test your systems and see if they give you the performance you require before you deploy; a sketch follows below.
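
For example, a simple random-write load with fio (a sketch, not from the original posts; the target directory is a placeholder):

Code:
# 4k random writes for 60s across 4 jobs; compare results on ext4-on-zvol vs. plain zfs
fio --name=randwrite --directory=/srv/test --rw=randwrite --bs=4k \
    --size=1G --numjobs=4 --runtime=60 --time_based --group_reporting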

So far I have had no issues that are not caused by abuse of the software itself, such as huge outgoing spam loads due to lax customer security or botnets hammering WordPress; those are not ext4's or ZFS's fault.
 
