Debian 13.1 LXC template fails to create/start (FIX)

Not sure if I am missing something...
I installed a new host from the latest ISO and did a full upgrade immediately after installation, with the following now active:
- Debian version 13.1
- kernel version 6.14.11-2-pve
- Virtual Environment 9.0.10
- pve-container version 6.0.13

I downloaded the latest Debian template, 13.1-2, and tried to run it - it starts and appears to be running, but there is nothing in the console.
I then tried the latest v12 template, 12.12-1 - it starts and is totally usable.
I manually upgraded this LXC to v13 and after a reboot it is the same story - it starts and appears to be running, but there is nothing in the console.
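For anyone wanting to reproduce, this was just the stock template workflow, roughly as below (container ID, storage name and the exact template file name are examples from memory, not gospel):

```bash
# refresh the template index and fetch the Debian 13 template
pveam update
pveam available --section system | grep debian-13
pveam download local debian-13-standard_13.1-2_amd64.tar.zst

# create and start a test container (ID and storage are examples)
pct create 200 local:vztmpl/debian-13-standard_13.1-2_amd64.tar.zst \
  --hostname deb13-test --storage local-lvm --unprivileged 1
pct start 200
pct console 200
```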

I came across this thread, in which the issue seems to have been resolved, so for the heck of it I looked at the content of /usr/share/perl5/PVE/LXC/Setup/Debian.pm, but line 37 does not match what is shown in that thread. I also searched for 'unsupported debian version', but that string appears nowhere in the file.
I tried changing the debian_version file in the LXC to 13 instead of 13.1, as that seems to have worked for a number of posters - still the issue remains.
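(For clarity: "the debian_version file" is /etc/debian_version inside the container; a quick way to check and change it from the host, with 100 as an example container ID:)

```bash
pct exec 100 -- cat /etc/debian_version             # shows 13.1
pct exec 100 -- sh -c 'echo 13 > /etc/debian_version'
```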

So I'm not sure why the check is missing from Debian.pm, or what is causing any LXC running v13+ to not start and load as it should :(

Any pointers of what to look at or other newer posts addressing the issue would be highly appreciated!

Werner
 
 
Thanks @Impact. I had a look at that thread.

Unprivileged = 1, Nesting = 0, Firewall = 0 -> doesn't work.
Unprivileged = 1, Nesting = 1, Firewall = 0 -> works.
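(For reference, toggling between those combinations is just the container's features flag; from the CLI, with 100 as an example ID:)

```bash
# the combination that hangs at a blank console
pct set 100 --features nesting=0

# the combination that works; this ends up as "features: nesting=1"
# in /etc/pve/lxc/100.conf
pct set 100 --features nesting=1
```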

I prefer not to use nesting, for isolation reasons.
@ProxmoxStaff: Any idea why disabling nesting causes this?
After some digging, it looks like this again relates to systemd in Debian 13 causing issues.
I also attached the lxc-start debug file if that can be of any help.

Werner
 


To quote myself:

Basically anything running a recent systemd needs nesting ("recent" meaning released in the last five, if not ten, years), since systemd now has quite powerful sandboxing and container mechanisms of its own that depend on it (see https://wiki.archlinux.org/title/Systemd/Sandboxing and https://wiki.debian.org/ServiceSandboxing, plus these posts by systemd developer Lennart Poettering: https://0pointer.net/blog/projects/security.html and https://0pointer.net/blog/systemd-for-administrators-part-xxi.html). That is why nesting has also been enabled by default on new containers for the last several ProxmoxVE versions.
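You can see this for yourself on any recent distribution (standard systemd tooling; journald is just a convenient example of a service that ships with sandboxing directives):

```bash
# inside the container: score how much sandboxing a unit requests
systemd-analyze security systemd-journald.service

# list a few of that unit's sandboxing directives; each of these makes
# systemd set up private mount namespaces, which is what needs nesting
systemctl show systemd-journald.service \
  -p ProtectSystem -p ProtectHome -p PrivateTmp -p PrivateDevices
```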

Any security fears are imho not grounded in reality: a service managed by systemd in an unprivileged container shouldn't be less secure than any process running bare metal as a non-root user on the host, even without the added sandboxing options of systemd. With them enabled, and the additional security of LXC (using Linux control groups and namespaces), it is probably more secure than a service running as non-root on the host.

If however you are running a privileged LXC container, it doesn't really matter whether you have nesting enabled or not, since you are running as root:

Unprivileged Containers
Unprivileged containers use a new kernel feature called user namespaces. The root UID 0 inside the container is mapped to an unprivileged user outside the container. This means that most security issues (container escape, resource abuse, etc.) in these containers will affect a random unprivileged user, and would be a generic kernel security bug rather than an LXC issue. The LXC team thinks unprivileged containers are safe by design.

This is the default option when creating a new container.

Note: If the container uses systemd as an init system, please be aware the systemd version running inside the container should be equal to or greater than 220.
Privileged Containers
Security in containers is achieved by using mandatory access control (AppArmor) restrictions, seccomp filters and Linux kernel namespaces. The LXC team considers this kind of container as unsafe, and they will not consider new container escape exploits to be security issues worthy of a CVE and quick fix. That’s why privileged containers should only be used in trusted environments.
https://pve.proxmox.com/wiki/Linux_Container#_security_considerations
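You can see that UID mapping from inside any unprivileged container:

```bash
# inside an unprivileged container: UID 0 maps to a high unprivileged host UID
cat /proc/self/uid_map
#        0     100000      65536    (typical PVE default mapping)
```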


Now, if you are running a container with a distribution that doesn't use systemd but another init system (like Devuan or Alpine), then of course you don't need nesting (as long as you don't want to run a container inside that container), but that's another story ;) This subdiscussion started with somebody running an Ubuntu container without nesting, and Ubuntu uses systemd.


So if you don't want to use nesting, you will need either a privileged container (which is way worse, from a security point of view, than an unprivileged container with nesting enabled) or a distribution without systemd as the OS inside the container, e.g. Devuan or Alpine. Not sure it's worth the hassle though...
 
@Johannes S thanks for the added information. Looks like systemd inside containers really does force nesting to be on, then.
Coming from a VM world with full isolation, I'm still getting used to security best practices for containers :)
I will have a more in-depth read of your recommendations.
 
Hey everybody.
Also encountered this strange issue with Debian 13 and Ubuntu 25 when trying to run them on PVE 8.1.
But here is interesting findings:
After upgrading to PVE 8.4.14, I was able to **start** the Debian and Ubuntu LXCs, but got totally nothing in the console.
I also tried a manual dist-upgrade on the Debian 12 LXC. Before the reboot it worked normally, but after - complete darkness in the console. Yet the PVE web UI says it started.
So I tried `pct console 100` on the manually upgraded one - still a totally dead console. Had to `ctrl+a q` out.
But when I tried `pct enter 100` - it just worked. The LXC is alive and can exec commands (apt, ls, ip etc.).
I tried `wall "hello"` - and nothing was displayed on the web UI console.
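In case it helps the next person digging: a hedged place to start would be the getty units the console attaches to (the unit name below is the stock systemd one, with container ID 100 as above):

```bash
# get a working shell despite the dead console
pct enter 100

# then, inside the container, check the gettys the web UI console talks to
systemctl status container-getty@1.service
journalctl -b -u container-getty@1.service
```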

I'm sorry if this doesn't help anyone, but maybe someone with better knowledge can find a solution ¯\_(ツ)_/¯
 
@elkemper referring to the feedback from @Johannes S and some additional digging on my side, it has become evident that if the O/S you are running in an LXC uses systemd on top of Proxmox (which also uses systemd), then you have almost no choice but to enable nesting if you want the LXC to run without hiccups.

I say almost because:
- With Debian v12 you could still disable a control-group setting inside the LXC itself, which solved some lightweight issues, and you could continue running unnested.
- With v13, though, it seems there are many more systemd processes (I have not wasted any time trying to figure out which) that depend on a passthrough to the host O/S to work as they should.
- You can run v13 unnested if you add a bunch of flags to the LXC's config file, which basically allow those critical services to pass through to the host O/S without allowing full nesting - see the sketch after this list.
- Or you can make it easy on yourself and enable nesting for both v12 & v13 (and any other LXC O/S using systemd) and just have everything work as it should. As @Johannes S pointed out, there is enough ring-fencing now, with things like AppArmor etc., to curtail step-out exposure to the underlying host O/S, so you should not be overly paranoid about it any longer.
And if you really are still concerned, then spend time with journalctl inside the LXC to figure out which systemd processes are failing/warning, and add the flags in the config just for those.
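A hedged sketch of what I mean by those last two points - the journalctl triage first:

```bash
# inside the LXC: find which systemd units are failing or complaining
systemctl --failed
journalctl -b -p warning
```

And then, in /etc/pve/lxc/100.conf on the host, something along the lines of the raw lxc overrides below. The exact set varies by setup - treat these as examples to research, not a recipe, since each one loosens isolation a bit:

```
# illustrative raw overrides seen in the wild for running systemd unnested;
# both weaken isolation, so research each before applying
lxc.mount.auto: proc:rw sys:rw
lxc.apparmor.profile: unconfined
```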