Understanding virtual disk images under lxc containers

vdm

Dear Team,

Please help me understand virtual disk images under LXC containers; I have read several posts, but they did not resolve my questions.

I have a VM acting as a fileserver with a 300GB qcow2 disk attached along with the root disk.

Since I found that LXC is often much more efficient than a virtual machine, I created a container to serve as the new fileserver.

Now I would like to add a second virtual disk to it and copy all my data onto it (I cannot attach my qcow2 disk to the new container; I read that this is not supported). The second drive should be a virtual disk image, as I use Proxmox Backup Server for backing up all virtual images.

What would be the right way to get all my personal data onto a new virtual subvolume in this new LXC container?
Would I have to use a 300GB root disk?
Do I really have to use the host's filesystem? I don't have external NFS storage, because I use this virtual fileserver for that purpose.
How would I back up files on my primary ZFS volume? (2x 4TB hard drives as a ZFS mirror with log and cache on SSD.)

Many thanks for any clarification,
Andreas
 
LXCs aren't virtualized like VMs; they are just more or less isolated filesystems with data. They share the kernel and hardware with your Proxmox host. Because of that you don't use virtual block devices like QEMU's qcow2 images or ZFS zvols. The common way to add storage is bind-mounting a physical partition, a folder, or the mountpoint of a ZFS dataset on your host into the LXC. That way the LXC can access the files on your physical drives directly, and you avoid overhead because there is no virtual block device, no virtual filesystem, and no network protocol like NFS in between.
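For example, assuming container ID 101 and a data folder /tank/data on the host (both placeholders), a bind mount could be added roughly like this:

Code:
# bind-mount the host folder /tank/data into container 101 at /mnt/data
# (mp0 is the first of several available mount point slots)
pct set 101 -mp0 /tank/data,mp=/mnt/data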

But keep in mind that bind-mounting a folder into an unprivileged container is a bit painful, because all users and groups are remapped. The root user with UID 0 inside the LXC is not the root user on your host, but an unprivileged user with UID 100000 instead. That's because all users with UID 0 to 65535 inside the LXC are mapped to users 100000 to 165535 on the host. So you run into permission conflicts if you don't manually change the user remapping or use chmod 777 everywhere.
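You can see the effect of that remapping from the host. A minimal illustration, assuming an unprivileged container 101 whose root filesystem is a ZFS subvolume (the subvol path is just an example and depends on your storage):

Code:
# create a file as root inside the container
pct exec 101 -- touch /root/testfile
# inside the container it is owned by 0:0, but on the host it shows up as 100000:100000
ls -ln /rpool/data/subvol-101-disk-0/root/testfile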
 
Thank you Dunuin.

The container and its disks are then fully backed up by Proxmox Backup Server, I believe?
 
I edited my last post. I'm not sure how PBS handles this.
Bind-mounted folders aren't backed up by default and will be excluded from the LXC backup file.
You could create a new ZFS dataset on your ZFS pool and bind-mount its mountpoint into the LXC. I don't know whether PBS supports rsync or ZFS replication, but those are the two options I would use to back up the data, because they work with delta copies or incremental backups, so you only need to transfer the changes instead of the complete 300GB of data every time.
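A rough sketch of the ZFS-dataset approach and the two backup options mentioned above (pool, dataset, snapshot, and backup target names are all placeholders):

Code:
# dedicated dataset on the host, bind-mounted into container 101
zfs create tank/filedata
pct set 101 -mp0 /tank/filedata,mp=/srv/data

# option 1: file-level delta copies with rsync
rsync -a --delete /tank/filedata/ backuphost:/backup/filedata/

# option 2: incremental ZFS replication via snapshots
zfs snapshot tank/filedata@today
zfs send -i tank/filedata@yesterday tank/filedata@today | ssh backuphost zfs recv backup/filedata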

Or you just create a very big LXC and store your data on its root disk, so it gets backed up too and you don't need to bind-mount anything. But that's nothing I would want to do if PBS doesn't support incremental backups. I'm not sure whether PBS supports that by now; without it, it would be a waste of capacity to create a new copy of all 300GB of data every time you run that backup.
 
Will it work better with a privileged container, and are there negative implications to that?
 
It's easier with privileged containers and you run into fewer problems, but they are less isolated. I wouldn't use them to run services exposed outside your safe LAN, because if a privileged LXC gets hacked it isn't that hard to gain root access to the complete Proxmox host due to the weaker isolation.
 
Got that, and I read the docs, which make it sound awful.

So, if I put all my fileserver data on the root disk, I could easily have all my users and permissions set without walking through the user mapping?
Because this user mapping is only needed when I use a second disk?
 
Basically yes. If you don't share a folder or hardware between the host and an unprivileged LXC, the user remapping isn't a big problem, because everything inside the LXC is done with the same remapped UIDs. But keep in mind that you can't directly mount things like SMB/NFS shares inside an unprivileged LXC, because that is not allowed and the LXC lacks the rights for it. I'm not sure how serving an NFS/SMB share from one works.
 
Those are really interesting questions you raise here, indeed.

So, with an LXC, how would I realize e.g.

- A webserver with attached SMB/NFS shares, acting as a frontend for e.g. Nextcloud, where the data lives on a fileserver?
- A fileserver serving SMB/NFS mounts to such a webserver or a mailserver?

Or would such use cases rather be left to a fully fledged VM, to avoid creating a monster of permission mappings?
 
I personally would use a VM for that, even if it is not as resource efficient, because the full isolation makes it more secure and easier to manage, back up, and migrate.
Everything would be done inside the VM, you wouldn't need to edit anything on the host itself, and it would keep running if you migrate the VM from one server to another (if your server dies, for example) without hours of setting up the new host to provide the services your LXC would rely on.
Moreover, mailservers and webservers are attractive targets for attackers, and if a VM gets hacked the attacker is limited to that one VM and can't easily get access to the complete host or to other VMs/LXCs running on it.

If you really need to use LXCs, because you are running very low-end hardware, I would at least use unprivileged LXCs to get a bit of isolation. If you want to mount an SMB share into your unprivileged LXC you can't do it directly: the host needs to mount the SMB share as a specific user and group (UID/GID 1000, for example). Next you bind-mount the mountpoint of that SMB share from the host into the LXC. Then you need to manually edit the remapping so that UID 1000 inside the LXC is mapped to UID 1000 on the host (instead of the default remapping of UID 1000 to 101000), so the UID/GIDs the share uses are the same on both host and LXC; otherwise you run into permission problems.
How to do that is described here.
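For reference, such a custom remapping usually looks roughly like the following in /etc/pve/lxc/<CTID>.conf, assuming the share is owned by UID/GID 1000 (the container ID and the ID values are placeholders):

Code:
# container IDs 0-999 keep the default offset of 100000
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
# container UID/GID 1000 is passed straight through to host UID/GID 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
# the remaining IDs 1001-65535 go back to the default offset range
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535

In addition, /etc/subuid and /etc/subgid on the host each need a line like root:1000:1 so that root is allowed to map that ID.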
 
I agree with this, and hopefully this conversation will also help others.
Thank you Dunuin.
 
Ok I'm a bit late to this discussion but...

I did exactly the same thing, running an NFS server inside an LXC container. The areas on the host PVE system that I wanted to share out were bind-mounted into the container.

As mentioned before, in order to keep ID remapping from getting in the way and, more fundamentally, to be able to load the NFS kernel modules into the host's kernel, the container has to be privileged. This worked more or less without issue on Proxmox 6.x, but in the end I abandoned that approach and set up some old hardware as a file server instead.

Some thoughts and observations:
  • An alternative is to simply install the NFS server packages on the PVE host and share out data directly from the host (a minimal sketch follows after this list). While the privileged container offers the advantage of being able to completely and cleanly reverse your decision at a later date should you want to (as all the extra software is installed in the container and is removed along with it), being privileged, it doesn't offer much more protection than the direct approach.
  • A type 1 hypervisor has one purpose and that's it. Keep it clean and change stuff on the host as little as possible. This is coming from a tinkerer!
  • With the privileged container approach I had to make sure it was updated at exactly the same time and to the same version as the PVE host, otherwise I would get warnings from the kernel whenever the container was shut down, or sometimes when it started up.
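A minimal sketch of that direct-from-host alternative from the first point above (the export path and subnet are just examples):

Code:
apt install nfs-kernel-server
# export a folder/dataset on the PVE host to the local subnet
echo '/tank/fileserver 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra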
If I had to go back to that setup I'd use a proper VM or, as a last resort, just install the NFS modules directly onto the PVE host.

I hope that helps.
 
