Debian 13.1 LXC template fails to create/start (FIX)

tsv0
Hi all,

I hit an issue with an LXC container updated to Debian 13.1:


TASK ERROR: unable to create CT 200 - unsupported debian version '13.1'

FIX:

You need to edit /usr/share/perl5/PVE/LXC/Setup/Debian.pm on line 39 and change

from

die "unsupported debian version '$version'\n" if !($version >= 4 && $version <= 13);

to

die "unsupported debian version '$version'\n" if !($version >= 4 && $version <= 14);


Save the file and restart your PVE host.

P.S. This probably needs to be addressed in newer versions of PVE.

P.P.S. In my case I'm on PVE v9.0.6, but it looks like this also affects PVE 8.x.
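
For reference, here's why '13.1' trips the old check: in a numeric comparison Perl numifies the version string, so 13.1 > 13 and the guard dies. A minimal sketch of the check in isolation (not the actual Debian.pm code, just the quoted line with the version hard-coded; on a real system it's presumably parsed from the container's /etc/os-release or /etc/debian_version):

Code:
# The old guard in isolation; hard-coded version for the demo.
my $version = '13.1';

# Perl numifies '13.1' to 13.1, so $version <= 13 is false and we die.
die "unsupported debian version '$version'\n"
    if !($version >= 4 && $version <= 13);

With the patched upper bound (<= 14) the same value passes.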
 
Thank you so much! I had just updated all my LXC containers, only to have them not start after a reboot :S

Do you mind sharing how you debugged this?
I was absolutely lost trying to find any information on why they were not starting. All I got was the rather cryptic message of:

Code:
run_buffer: 571 Script exited with status 25
lxc_init: 845 Failed to run lxc.hook.pre-start for container "110"
__lxc_start: 2034 Failed to initialize container "110"
startup for container '110' failed
 
ChatGPT (free account) has been super useful for me. It got me close on this one, but it was targeting the wrong file. With Proxmox it has been like having a true tech assistant/tutor. Yeah, I know it's not always right, but the stuff I struggle with is typically the easier things. Great learning tool.
 
Same issue here, and yes, the workaround works :).

I think the Proxmox VE team should either make the check <= (current release + 1), or filter the content of /etc/os-release or /etc/debian_version so that only the major version (13 in this case, for 13.1) is taken into account instead of the MAJOR.minor version number.
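
A rough sketch of that major-version idea in Perl (hypothetical, not the actual Proxmox code; the version string is hard-coded for the demo):

Code:
my $version = '13.1';    # as read from the container, hard-coded for the demo

# Keep only the leading digits: '13.1' -> 13, '13' -> 13.
my ($major) = $version =~ /^(\d+)/;

# Range-check the major version only, so point releases can't break it.
die "unsupported debian version '$version'\n"
    if !defined($major) || !($major >= 4 && $major <= 13);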
 
Thank you so very much for your post. I was having the same issue, and the update to the Debian.pm file fixed it. Awesome work!
 
I believe that once they add a CT template for Debian 13, they will also fix this Perl script. Maybe their approach is "since we don't provide an official Trixie CT, we won't allow you to start one if you dist-upgrade inside the container"?
 
Maybe their approach is "since we don't provide an official Trixie CT, we won't allow you to start one if you dist-upgrade inside the container"?
They allowed 13.0 (Proxmox Backup Server 4.0), so this is unlikely. It's much more likely they forgot to increase the number again (or someone thought that <= 13 is the same as < 14), as this also happened before with Debian 12.1.
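
For integer versions the two bounds are indeed equivalent, but not for a point release, which is exactly what bit 13.1 here. A quick Perl illustration:

Code:
my $v = '13.1';                        # numifies to 13.1
print $v <= 13 ? "pass\n" : "fail\n";  # fail -- the old check rejects 13.1
print $v < 14  ? "pass\n" : "fail\n";  # pass -- 13.1 would have been accepted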
 
Since I need this myself, I've created a temporary patch. You can apply it like this:
Bash:
wget https://gist.githubusercontent.com/Impact123/59f8340c30b64c6fdfc2ea7e24b6b98d/raw/8d9839044bd8792b8b498773ff5979286d229094/debian_version.patch
patch /usr/share/perl5/PVE/LXC/Setup/Debian.pm < debian_version.patch
I chose a patch over sed because it can be easily reversed (apply it again with patch -R) and it felt safer/cleaner than a one-liner in this case, but if you prefer a one-liner you can use this instead:
Bash:
sed -i '39s/\($version <= \)13/\114/' /usr/share/perl5/PVE/LXC/Setup/Debian.pm

Validate the change with:
Bash:
grep -Hn "unsupported debian version" /usr/share/perl5/PVE/LXC/Setup/Debian.pm

You might also want to subscribe to the bug reports about this:
- https://bugzilla.proxmox.com/show_bug.cgi?id=6772
- https://bugzilla.proxmox.com/show_bug.cgi?id=6771
 
This is not a very robust way of doing things, Proxmox / @t.lamprecht ...

Stuff like this should soft-fail, not suddenly prevent legitimate containers that have just been updated from being launched, and with a not-so-clear error message at that.
To be fair, Proxmox VE is far from the only one doing this.

At least in this case the workaround is quite simple, and most of this stuff gets fixed relatively quickly by the Proxmox VE team. They also provide good answers in most cases, although I admit that some kernel panics & regressions (a very low percentage of occurrences according to them, which doesn't sound very helpful when YOU are the one affected) were left hanging a bit :(.

Concerning this specific issue: instead of "LXC just working", any update can stop the entire infrastructure. What if your nameserver is running in an LXC and doesn't come up? For instance, I have Pi-hole in one LXC, and ISC BIND/named in another LXC on another server.

If you were a paying customer, I'd of course expect Proxmox Server Solutions GmbH to be very responsive in getting this fixed ASAP.

But the vast majority of us run on the pve-no-subscription repository, so we just get community support, and that's basically it ...

Frankly, as a homelab user, if I had to purchase one subscription for each server (which is how their selling model works), I would pretty quickly run out of money. I also fail to see how buying a subscription for a single server would help me (as luck would have it, surely it's ANOTHER server that will be affected next time) ...
 
This is not a very robust way of doing things, Proxmox / @t.lamprecht ...
The check as it was previously was indeed not ideal, so we not only fixed the issue itself but also changed the check so that it cannot happen in this form again: PVE now either supports a major Debian release or it doesn't, so a jump from e.g. .0 to .1 like here can no longer break anything.
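
Speculating on the shape of such a per-major-release check (a sketch only, not the actual pve-container code), it amounts to a whitelist of supported majors:

Code:
my $version = '13.1';    # hard-coded for the demo

# Hypothetical whitelist of supported Debian major releases.
my %supported = map { $_ => 1 } (4 .. 13);

my ($major) = $version =~ /^(\d+)/;
die "unsupported debian version '$version'\n"
    if !defined($major) || !$supported{$major};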

The fix is contained in pve-container version 6.0.10 for PVE 9 and version 5.3.1 for PVE 8, both of which are currently available in the respective pve-no-subscription repository. While we tested this closely, it would still be great to get additional feedback on those versions.

Adapting the behavior to differ between a fresh creation/restore and the start of an existing CT definitely makes sense, but that is a bigger change and thus out of scope for getting the initial fix out.
 