LXC or VM to run docker containers

toxic

Hello,
I have worked things out so that I can now choose either a VM or an LXC to run my main Docker host on top of PVE. It will be the one exposed to the internet by my OPNsense router, and it will run Portainer so I can start most of my services and try out new Docker containers.

The LXC looks better for performance, freeing RAM more readily for example. The drawback is that most of my containers need access to my NAS over CIFS, so the LXC itself will run unconfined just because of this. But inside the LXC only Ubuntu and Docker with the overlay driver are running, and all my other containers are unprivileged.
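(For reference, by "unconfined" I mean roughly this kind of entry in the CT config; vmid 100 is only an example:)

Code:
# /etc/pve/lxc/100.conf -- 100 is a placeholder vmid
# allow running Docker inside the CT
features: nesting=1
# drop the default AppArmor profile so the CT itself can mount cifs
lxc.apparmor.profile: unconfined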

So, since it's my home setup, I feel this is not really an issue, and I think performance will be much better with the LXC.

Do you see any big issue I might have missed that would make a VM setup better suited to running Docker/Portainer than my LXC?

(I particularly like the almost instantaneous boot time of an LXC versus the VM, which takes quite a while.)

Thanks for any input!
 
Hi,

If it will be exposed to the internet and used by untrusted users, I would highly suggest using a VM instead of an unconfined LXC, for security reasons.

Also, the container does not need to be unconfined: a good option for using CIFS in your container would be to mount the share on your host instead, and use LXC bind mounts [0] to make it available to the container.
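For example, assuming the share is reachable from the host (server name, credentials file and vmid below are just placeholders):

Code:
# on the PVE host: mount the cifs share, e.g. under /mnt/nas
mount -t cifs //nas.lan/share /mnt/nas -o credentials=/root/.smbcredentials

# bind mount it into the container (vmid 100 as an example)
pct set 100 -mp0 /mnt/nas,mp=/mnt/nas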

Hope this helps!

[0]: https://pve.proxmox.com/wiki/Linux_Container#_bind_mount_points
 
Thanks for the reply and the ideas!

In fact, only unprivileged Docker containers will be exposed to the WAN, but the Docker engine that runs them will itself run in an unconfined LXC.

I've been looking into mounting the CIFS shares on the PVE host and then bind-mounting them into the CT.
Leaving aside a few hassles with UID mapping, I got it working, but I had issues mounting the CIFS shares properly when the PVE host boots and unmounting them on shutdown: it has to happen after some VMs but before others... (long minutes of waiting for it to time out; see the fstab sketch below).
In fact, one of the VMs is my router/gateway, and while it's off there is no known path from PVE to the NAS.
What I have now is a start/stop hook in my router VM that connects to PVE and starts/stops the other VMs/CTs that depend on the gateway being up... So I'd have to rewrite that to include CIFS mounting/unmounting at the proper time...
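(A systemd automount entry on the host might at least avoid the boot hang, since the share would only get mounted on first access instead of blocking at boot; roughly something like this in /etc/fstab, where server, share and credentials path are placeholders:)

Code:
//nas.lan/media  /mnt/nas/media  cifs  credentials=/root/.smbcredentials,_netdev,noauto,x-systemd.automount,x-systemd.idle-timeout=60  0  0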

I also had issues binding a mount point that contains several CIFS mounts underneath it, and had to bind each CIFS mount separately (roughly as sketched below)...
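(One mpX entry per share in the CT config; paths and vmid are just examples:)

Code:
# /etc/pve/lxc/100.conf -- one bind mount per cifs share
mp0: /mnt/nas/media,mp=/mnt/media
mp1: /mnt/nas/backups,mp=/mnt/backups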

I'd rather stick with each VM or CT mounting what it needs for the time being...

But maybe in the future we could get a new type of storage for a VM or CT that would be a CIFS share? Just so PVE handles the mounting and unmounting on CT start/stop, and keeps the CIFS secrets on PVE instead of exposing them to the CT, as with a bind mount?
 
