Proxmox 4 / LXC: Chroot instead of raw images?

Is there any way to get user quotas to work with LXC so we can install applications that require them, such as DirectAdmin?
 
Please, what are the advantages and disadvantages of using a ZFS dir versus a raw image? I'm a big fan of dirs from OpenVZ and VServer, which we are now leaving. We have been rsyncing files between nodes for almost ten years, across 16 servers, without any issues, despite experiencing many power failures. Can we expect the same when using a raw image on ZFS?
What are the reasons Proxmox uses raw images by default and does not support ZFS dirs in the GUI? What are the advantages/disadvantages?
I'm afraid that raw images would add an extra, unnecessary abstraction. Why have another file system on top of an existing file system? What would happen if there were a power loss? Could the image get corrupted in a way that is not expected to happen to the hardware (caused by another level of caching), and perhaps be impossible to recover? Is this scenario possible or likely?
If I create a container in a directory with the command-line utility using the zero-size trick, is there any other hidden drawback?
Can I consider it a long-term bug that the GUI does not allow the zero size, or is there a good reason for it?
 
We do not use raw images for the ZFS container root. Instead, we use ZFS subvolumes. So I do not really understand your complaint?
 
Hello Dietmar

We do not use raw images for the ZFS container root.
Are the ZFS subvolumes set up automatically by Proxmox during container creation?

When a container is created with a ZFS subvolume, is it possible to view the container's filesystem in /var/lib/lxc/ct-id/rootfs?
And can you still log in to the container from the host using pct enter?

Regards.
 
We do not use raw images for the ZFS container root. Instead, we use ZFS subvolumes. So I do not really understand your complaint?
You are right: if you use local-zfs, it is the best there can be. Here is a citation which explains it:
Your "local-zfs" is a ZFS storage, which means that when you create a disk, it will create a ZFS volume to store the raw data. This is the most efficient storage you can have for this purpose, and you can use ZFS features individually on each such volume (such as setting compression, taking snapshots, and using zfs send/receive to copy them, etc.). You can see your vm-100 disk is built this way.

The raw image versus subdirectory discussion is only for those who do not want to use ZFS for some reason - very few people. I did not check again when we started to use ZFS; the huge raw files I was worried about are gone now.
So it seems the default behaviour when ZFS is used is the best that can be achieved, fully automatic with the GUI, so there is no reason to complain.
 
The raw image versus subdirectory discussion is only for those who do not want to use ZFS for some reason

I have been using rdiff-backup for incremental file/folder/full recoveries of OpenVZ containers for years without fail, but that automation requires the subdirectory view of containers.

Reading through this thread, what I am still not clear about is: which storage options in Proxmox provide the same chroot-style view for LXC containers that OpenVZ offered?
 
I have been using rdiff-backup for incremental file/folder/full recoveries of OpenVZ containers for years without fail, but that automation requires the subdirectory view of containers.

Reading through this thread, what I am still not clear about is: which storage options in Proxmox provide the same chroot-style view for LXC containers that OpenVZ offered?

I'm not an expert; I can only describe how I understand this. ZFS creates a special volume for each container, to be able to configure each container differently, for example the maximum disk size available to it. However, the volume is mounted, so you can run rdiff-backup or rsync, or update files inside the container from the outside. You can see the mounted mount points with the command df -h.
If a raw image is used, it can also be mounted by hand using the command described in this thread above, but a raw image is not such a good option, because having a file system inside a big file on another file system is neither as efficient nor as reliable.
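As an illustration of the "mount by hand" approach for a raw image, a sketch using a loop device (the image path, mount point, and the assumption of a single plain filesystem inside the image are illustrative; adjust for your storage):

```shell
# Attach the raw image to a free loop device; prints the device, e.g. /dev/loop0
losetup --find --show /var/lib/vz/images/101/vm-101-disk-0.raw

# Mount it and work on the container's files from the host
mount /dev/loop0 /mnt/ct101
rsync -a /backup/ct101/etc/ /mnt/ct101/etc/

# Clean up when done
umount /mnt/ct101
losetup -d /dev/loop0
```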

OpenVZ used simfs, which was fake-mounted so that different disk settings could be applied for each mount point and container, but it is obsolete.
If you create a container using the zero-size trick described above, there will be no mounting, but you are not able to make disk-size-related settings for the container. I believe that for this reason the GUI does not allow creating such a container by default, and the command line must be used.
local-zfs is preferred as it has all the advantages and none of the disadvantages, so it is the preferred solution for everyone, allowing everything anybody may need.

Please correct me if I'm not right.
 
Not being able to set disk quotas for individual users inside a container seems to be a disadvantage of ZFS if you have several hundred users.

I suppose this restriction can be circumvented by assigning a separate container to each user, installing a lightweight Linux distro, setting up a quota for each subvol, and automating the process from the CLI.
 
@moxfan: ZFS has user and group quotas (both for used space and, since 0.7, also for used ZFS objects) - see "man zfs". They just don't use the same tools for setup as ext4, so they are not integrated into software like cPanel.
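For reference, a sketch of those ZFS per-user quotas, assuming a container subvol named rpool/data/subvol-100-disk-0 and a user alice (both hypothetical):

```shell
# Limit how much space user alice may consume in this dataset
zfs set userquota@alice=10G rpool/data/subvol-100-disk-0

# Since ZFS 0.7: also limit the number of objects (roughly, files) alice may own
zfs set userobjquota@alice=500000 rpool/data/subvol-100-disk-0

# Report per-user usage and quotas for the dataset
zfs userspace rpool/data/subvol-100-disk-0
```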
 
@moxfan: ZFS has user and group quotas (both for used space and, since 0.7, also for used ZFS objects) - see "man zfs". They just don't use the same tools for setup as ext4, so they are not integrated into software like cPanel.
Aren't the user and group quotas for the subvol itself? That is, it is impossible to set individual user quotas inside the subvol.

It is the same issue with XFS in OpenVZ 6 (simfs) containers; there is no support for user quotas if the storage device has been formatted with XFS.
 
If a raw image is used, it can also be mounted by hand using the command described in this thread above
The question is: is it possible to replace or add files in the mounted raw image from the host (from a backup, for example) without disturbing the user quotas, in the same way that it is possible with OpenVZ (simfs) containers?
 
Aren't the user and group quotas for the subvol itself? That is, it is impossible to set individual user quotas inside the subvol.

It is the same issue with XFS in OpenVZ 6 (simfs) containers; there is no support for user quotas if the storage device has been formatted with XFS.

You cannot set quotas on individual files/directories in a dataset, if that is what you mean?
 
You cannot set quotas on individual files/directories in a dataset, if that is what you mean?
Yes. As you've said, you can't set quotas on individual files/directories if:

* the container has been formatted with XFS (raw image) or resides in a partition formatted with XFS (openvz6-simfs)

* the container is a ZFS subvolume

So basically, XFS and ZFS are unusable if second-level user quotas are a must. I will be glad to hear if someone has created a workaround for these. ;)
 
Yes. As you've said, you can't set quotas on individual files/directories if:

* the container has been formatted with XFS (raw image) or resides in a partition formatted with XFS (openvz6-simfs)

* the container is a ZFS subvolume

So basically, XFS and ZFS are unusable if second-level user quotas are a must. I will be glad to hear if someone has created a workaround for these. ;)

Usually, you just create additional datasets if you want the same user to have several storage spaces with different quotas...
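A sketch of that approach, assuming hypothetical dataset names and container ID 100; `pct set` bind-mounts the host paths into the container:

```shell
# Two separate storage spaces for the same user, each with its own quota
zfs create -o quota=5G  rpool/data/alice-home
zfs create -o quota=20G rpool/data/alice-mail

# Bind-mount them into container 100 at different paths
pct set 100 -mp0 /rpool/data/alice-home,mp=/home/alice
pct set 100 -mp1 /rpool/data/alice-mail,mp=/var/mail/alice
```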
 
