Slightly OT: How does PM do LXC containers under the covers?

Chris Olive

Apr 8, 2020
BACKGROUND

For years, I've had two 32 GB VMware ESXi hosts in the house to run lab VMs. I recently converted not only one of them to PM, but completely converted myself to PM. It's far superior to VMware; I wish I had discovered PM sooner. And one of the big, big, BIG selling features of PM is the LXC orchestration. I'm not a container snob -- containers are basically containers from a 10,000-foot view, and LXC on PM suits me just fine for lightweight servers (MariaDB, Redis, etc.).

I very reluctantly had to keep one of the servers on VMware ESXi instead of converting them both, due to a security appliance that tests the underlying platform to make sure it's running on ESXi or Hyper-V. (WHY!!!???) There's no getting around it.

So I decided to see if I could mostly duplicate the simplicity of LXC on VMware using a VM dedicated to containers. I decided to stick with LXC/LXD for "consistency," and it's amazing how far I've come; I almost have this working extremely well. The problem is that when it comes time to launch a container, I don't have the granularity and control over the container settings -- memory, disk space most especially, CPUs -- that PM affords and orchestrates. I've got to become a freaking expert in all these lxc.* settings, and I have to tweak them each time I want to launch a container.

Which defeats the entire purpose.

WHAT I'M LOOKING FOR

My thought is: perhaps, under the covers, when launching a container, PM gathers all the settings entered in the web GUI, cuts an LXC settings/config or template file with all the lxc.* settings set, and then does an lxc-create -f or lxc-create -t to create/launch the container. If I could grab whatever PM generates for a config or template file, I could use it as a template to generate a few of my own configs/templates on the raw LXC/LXD VM I've created, covering the finite set of LXC settings I want, and I'd be done.

But with PM's pct, I'm not sure it's really doing that (as described above).

It's ridiculous how much it takes to manage a smallish container system just for development lab purposes (which is essentially my personal production universe).

Without orchestration, maybe it's time to abandon LXC/LXD in a VM and look elsewhere -- Docker, rkt, etc. I just need something simple. I am very, very close to having what I want. It's the container settings at launch that have become surprisingly, ridiculously daunting.
 
My thought is: perhaps, under the covers, when launching a container, PM gathers all the settings entered in the web GUI, cuts an LXC settings/config or template file with all the lxc.* settings set, and then does an lxc-create -f or lxc-create -t to create/launch the container.

Sorta. We have a simple config format in /etc/pve/nodes/<node>/lxc/<vmid>.conf, which we then translate to LXC config keys. We don't use lxc-create, though; initial creation (extraction of the archive + initial config generation) and the config update before start are all done directly by our management code:
https://git.proxmox.com/?p=pve-cont...b871a2dacb31a5045835b95acbe35e2d;hb=HEAD#l560

You can take such a config and sorta reuse it, but there are some hook scripts called for network create/delete etc. that would need handling too.
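To make that concrete, here is a rough sketch of what such a container config can look like; all keys follow the PVE container config format, but the values below are purely illustrative, not taken from any real setup:

```
# /etc/pve/nodes/<node>/lxc/101.conf (illustrative values)
arch: amd64
ostype: centos
hostname: mariadb01
cores: 2
memory: 1024
swap: 512
rootfs: local-lvm:vm-101-disk-0,size=8G
net0: name=eth0,bridge=vmbr0,ip=dhcp
```

At container start, the management code translates these simple keys into the corresponding lxc.* settings (CPU/memory limits into cgroup keys, the rootfs into an lxc.rootfs entry, and so on).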

It's ridiculous how much it takes to manage a smallish container system just for development lab purposes (which is essentially my personal production universe).

That's why Proxmox VE does it for you ;-) Anyway, I'm not really sure what you're asking for. If you want to recreate a CT management engine, honestly: don't. It's really a lot of work. It may work for very specialized small setups without too much churn, but as soon as you want some basic features (easy config changes, handling more distros in more versions, hotplug changes, backup/restore, peer-reviewed security handling of CTs with AppArmor/seccomp, ...) it gets very complex fast. Stick with something existing; it will save you lots of work and headaches.

Also: there's no such thing as "PM"; there's Proxmox VE, shortened to "PVE". I mostly mention this because I wouldn't recognize what was meant by "PM" if I read it in the wild :)
 
Sorta. We have a simple config format in /etc/pve/nodes/<node>/lxc/<vmid>.conf, which we then translate to LXC config keys. We don't use lxc-create, though; initial creation (extraction of the archive + initial config generation) and the config update before start are all done directly by our management code:
https://git.proxmox.com/?p=pve-cont...b871a2dacb31a5045835b95acbe35e2d;hb=HEAD#l560

You can take such a config and sorta reuse it, but there are some hook scripts called for network create/delete etc. that would need handling too.

Might be enough for me. Ima check it out.

That's why Proxmox VE does it for you ;-) Anyway, I'm not really sure what you're asking for. If you want to recreate a CT management engine, honestly: don't. It's really a lot of work. It may work for very specialized small setups without too much churn, but as soon as you want some basic features (easy config changes, handling more distros in more versions, hotplug changes, backup/restore, peer-reviewed security handling of CTs with AppArmor/seccomp, ...) it gets very complex fast. Stick with something existing; it will save you lots of work and headaches.

Totally. Totally agree. Again, it's amazing what I was able to accomplish with LXD for LXC, and it's all very simple once set up -- except the actual container launch settings. I can issue an lxc launch blah/blah/blah, then yum -y install mariadb (CentOS 8) and yum -y install openssh-server, ssh-copy-id -i me@container, and I'm DONE. Less than 5 minutes and I have a running MariaDB/Redis/whatever container. I mainly want to specify disk size out of an LXD storage pool already configured and I'd be set. I'm THAT. CLOSE. I don't really need mem and CPU restrictions.
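For the disk-size piece specifically, assuming a reasonably recent LXD, the root disk size can be set per container straight out of an existing storage pool. The names below (mypool, mariadb01) are made up:

```shell
# create the container without starting it, in a specific storage pool
lxc init images:centos/8 mariadb01 -s mypool

# override the root disk device with a size limit
# (needs a storage driver that supports size, e.g. zfs, btrfs, lvm)
lxc config device override mariadb01 root size=20GB

lxc start mariadb01
```

Alternatively, a profile carrying a root device with the size key would avoid retyping this on every launch.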

Yes, PVE makes it very nice.

Also: there's no such thing as "PM"; there's Proxmox VE, shortened to "PVE". I mostly mention this because I wouldn't recognize what was meant by "PM" if I read it in the wild :)

Yeah, after I wrote "PM", I was betting "PVE" was really the better way to go. I'm not in here much because PVE pretty much just works, and what I do have to figure out, I can figure out quickly.

Thanks! Many thanks!
 
Sorta. We have a simple config format in /etc/pve/nodes/<node>/lxc/<vmid>.conf, which we then translate to LXC config keys. We don't use lxc-create, though; initial creation (extraction of the archive + initial config generation) and the config update before start are all done directly by our management code:
https://git.proxmox.com/?p=pve-cont...b871a2dacb31a5045835b95acbe35e2d;hb=HEAD#l560

You can take such a config and sorta reuse it, but there are some hook scripts called for network create/delete etc. that would need handling too.

Okay, had a look, and you are exactly correct. The .conf files at /etc/pve/nodes/<node>/lxc/<vmid>.conf get handled by the management code at the URL above. I was pleased to see Perl when I visited that link, as I still do a lot of coding in Perl. And I can see that this is exactly what was done: translate, say, cores: 2 in the .conf file to the correct lxc.* parameter. Again, you're correct... it's more work than I want to do. But not unexpected at all.

I've got enough help here to play with/hang myself with. Thanks a bunch!
 
You could install PVE in the VM you are using for containers just for management purposes. Just a thought.

Holy carp, I never even thought of that! How does PVE perform when inside a VM itself? I'd probably use it just for containers, yes... Seems simple. I think of PVE as a bare-metal thing, but I don't see why this wouldn't work.

It irks me I have to keep this ESXi server around.
 
Talking about nested virtualization: I am not sure of the effects on your security appliance, but you could also run ESXi as a virtual machine inside Proxmox VE.
 
Holy carp, I never even thought of that! How does PVE perform when inside a VM itself? I'd probably use it just for containers, yes... Seems simple. I think of PVE as a bare-metal thing, but I don't see why this wouldn't work.

It works very well and is used daily by us developers to test more complex setups without needing tens of test servers for everyone in the lab here :)

Containers work really well; you won't really feel the slight additional overhead, IMO.
Nested virtualization for VMs also works quite well, but there I'd rather recommend using it for testing only. In fact, I often recommend playing through major upgrades or new setup ideas as a virtual PVE cluster to get a basic feeling for them.
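One prerequisite worth noting if VMs-inside-VMs are on the menu: nested virtualization has to be enabled on the outer hypervisor. On a PVE host with an Intel CPU, that is roughly the following (use kvm-amd instead on AMD):

```shell
# check whether nesting is already enabled (Y/N)
cat /sys/module/kvm_intel/parameters/nested

# enable it persistently, then reload the module
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel
modprobe kvm_intel
```

The guest VM additionally needs a CPU type that passes the virtualization flags through (e.g. CPU type "host").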
 
It works very well and is used daily by us developers to test more complex setups without needing tens of test servers for everyone in the lab here :)

Containers work really well; you won't really feel the slight additional overhead, IMO.
Nested virtualization for VMs also works quite well, but there I'd rather recommend using it for testing only. In fact, I often recommend playing through major upgrades or new setup ideas as a virtual PVE cluster to get a basic feeling for them.

I totally should have gone this way (tested PVE under ESXi). But I was also pressed for time, and PVE was still a magnificent discovery. I've learned so much in just a few weeks, and there are things I'd like to change where I'm basically entrenched and can't go back, not without a lot of work I don't have time for. One big irritation is the name of my original PVE server, which I'd like to change, but it looks like the work to do so isn't trivial (due to paths, etc.).

But yes, I pretty readily realized that for containers, nested virtualization is going to work well -- PVE running under ESXi. That's pretty much all I wanted anyway, and by running PVE, I basically inherit all the lxc.* remapping Perl code I was looking for. Simple.
 
Talking about nested virtualization: I am not sure of the effects on your security appliance, but you could also run ESXi as a virtual machine inside Proxmox VE.

It would probably run like a dog going that way around (nesting ESXi inside of PVE).

I have no idea why, in English, the idiom "runs like a dog" is what it is. Dogs are actually pretty fast. :cool:
 
It works very well and is used daily by us developers to test more complex setups without needing tens of test servers for everyone in the lab here :)

Containers work really well; you won't really feel the slight additional overhead, IMO.
Nested virtualization for VMs also works quite well, but there I'd rather recommend using it for testing only. In fact, I often recommend playing through major upgrades or new setup ideas as a virtual PVE cluster to get a basic feeling for them.

But now networking is a pain. I'm totally running into this now:

https://forum.proxmox.com/threads/t...alized-proxmox-on-esxi-network-problem.37612/

Trying to work through it, but none of this is coming easy.
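For what it's worth, a common culprit with nested hypervisors on ESXi is the vSwitch security policy: by default, ESXi drops frames from MAC addresses it didn't assign, which breaks the bridge inside the nested PVE. If that's the issue, loosening the policy on the relevant vSwitch may help; the vSwitch name below is an assumption:

```shell
# on the ESXi host: allow the nested hypervisor's guest MACs through
esxcli network vswitch standard policy security set \
  --vswitch-name=vSwitch0 \
  --allow-promiscuous=true \
  --allow-forged-transmits=true \
  --allow-mac-change=true
```

(The same toggles exist per port group and in the vSphere UI under the vSwitch security settings.)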
 
