lxc_init: 845 Failed to run lxc.hook.pre-start for container "200"

I have a similar Issue unfortunately on 8.3.0:
Code:
pve-manager/8.3.0/c1689ccb1065a83b (running kernel: 6.10.11+bpo-amd64)

YES, I know that is the Debian Backports Kernel. Unfortunately the Proxmox VE Kernels have a BIG Tendency to Panic on many Systems recently, both 6.5 and 6.8 for that Matter. Installing the Debian Backports Kernel was the only solution that allowed the System to somewhat work ...

But no apparent / clear Cause.



The workaround for this PRIVILEGED Container (Fedora Linux 41) was to use NFS Mount (from inside the LXC Container) using the ro,nolock Options.
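
For Reference, the Mount inside the Container was something along these Lines (Server IP and Export Path are just Placeholders here):
Code:
# inside the LXC Container; 192.168.1.10:/export/tools is a placeholder for the real NFS export
mount -t nfs -o ro,nolock 192.168.1.10:/export/tools /mnt/tools_nfs

# or persistently via /etc/fstab:
# 192.168.1.10:/export/tools  /mnt/tools_nfs  nfs  ro,nolock  0  0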

This was also a PITA because the rpc-statd.service keeps Failing inside the Container (and Logs are completely useless since they don't give any hint as to why this happens):
Code:
root@HOST:/home/podman# systemctl status rpc-statd
× rpc-statd.service - NFS status monitor for NFSv2/3 locking.
     Loaded: loaded (/usr/lib/systemd/system/rpc-statd.service; static)
    Drop-In: /usr/lib/systemd/system/service.d
             └─10-timeout-abort.conf, 50-keep-warm.conf
     Active: failed (Result: exit-code) since Sun 2024-12-01 19:19:50 CET; 10s ago
 Invocation: f0caa9ad31384b4abf3ff6bf98206589
       Docs: man:rpc.statd(8)
    Process: 432 ExecStart=/usr/sbin/rpc.statd (code=exited, status=1/FAILURE)
   Mem peak: 1.2M
        CPU: 17ms

Dec 01 19:19:50 HOST systemd[1]: Starting rpc-statd.service - NFS status monitor for NFSv2/3 locking....
Dec 01 19:19:50 HOST rpc.statd[433]: Version 2.8.1 starting
Dec 01 19:19:50 HOST rpc.statd[433]: Flags: TI-RPC
Dec 01 19:19:50 HOST rpc.statd[433]: Initializing NSM state
Dec 01 19:19:50 HOST systemd[1]: rpc-statd.service: Control process exited, code=exited, status=1/FAILURE
Dec 01 19:19:50 HOST systemd[1]: rpc-statd.service: Failed with result 'exit-code'.
Dec 01 19:19:50 HOST systemd[1]: Failed to start rpc-statd.service - NFS status monitor for NFSv2/3 locking..

Hence the reason for the ro,nolock Mount Option.
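
If I ever want to dig further, running statd manually in the Foreground should at least show why it exits (a Sketch, I did not actually capture this):
Code:
# inside the Container: run rpc.statd in the foreground, logging to stderr instead of syslog
/usr/sbin/rpc.statd --foreground --no-syslog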

Note that this was a PRIVILEGED LXC Container with Nested and NFS Features Enabled.
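
For Completeness, the relevant Bits of the CT Config look roughly like this (VMID 200 just as an Example):
Code:
# /etc/pve/lxc/200.conf (excerpt)
unprivileged: 0
features: mount=nfs,nesting=1

# or set via the CLI on the host:
# pct set 200 --features mount=nfs,nesting=1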
 

I have a similar Issue unfortunately on 8.3.0:
You are facing a different issue with similar symptoms, not the same one.
The OP ran into a bug in a library that resulted in some perl module not being found.
But your log does not contain that error at all; it rather shows lxc pre-start produced output: directory '/mnt/bindmounts/tools_nfs' does not exist, so in your case you (probably) have a bind mount configured whose source is either not accessible for the unprivileged user id range or simply does not exist.
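
If it is a bind mount, the relevant config line and a quick check on the host would look roughly like this (the paths are just taken from your error message as an example):
Code:
# CT config: bind mount a host directory into the container
mp0: /mnt/bindmounts/tools_nfs,mp=/mnt/tools_nfs

# on the host: verify that the source directory actually exists and check its ownership
ls -ld /mnt/bindmounts/tools_nfs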

YES, I know that is the Debian Backports Kernel. Unfortunately the Proxmox VE Kernels have a BIG Tendency to Panic on many Systems recently, both 6.5 and 6.8 for that Matter. Installing the Debian Backports Kernel was the only solution that allowed the System to somewhat work ...
We cannot really support this sanely, as the Debian kernel lacks some apparmor and other patches, so it's hard to tell what can work or how it should be worked around. And FWIW, from our enterprise support feedback and the 7-figure number of hosts using the Proxmox VE 8 repos, it's certainly not the case that there is a significant percentage of hosts where the Proxmox kernel is generally unstable. Problems exist, but they are normally quite targeted. So it might cause problems for your specific setup, but definitely not wide-spread issues.

In any case, please open a new thread and include more details (CT config and directory permissions for the start/mount issue, or hardware and error messages for the kernel one).
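
For example, something along these lines would already help (200 standing in for your CT's VMID):
Code:
# start/mount issue:
pct config 200                      # the CT config
ls -ld /mnt/bindmounts/tools_nfs    # permissions of the bind mount source

# kernel issue:
journalctl -k -b -1                 # kernel log of the previous boot, if one was written out
dmidecode -t system -t bios         # hardware and firmware details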
 
We cannot really support this sanely, as the Debian kernel lacks some apparmor and other patches, so it's hard to tell what can work or how it should be worked around. And FWIW, from our enterprise support feedback and the 7-figure number of hosts using the Proxmox VE 8 repos, it's certainly not the case that there is a significant percentage of hosts where the Proxmox kernel is generally unstable. Problems exist, but they are normally quite targeted. So it might cause problems for your specific setup, but definitely not wide-spread issues.
Pretty sure I saw MANY Threads about Kernel 6.8 leading to Kernel Panics for many People.

My Experience at least on 2 Systems:
- AMD B550 with an AMD 5950X CPU and AMD RX 6600 XT GPU: Kernel Panic at Boot. I need to blacklist amdgpu, then no more Panics. But of course, given the limited number of IOMMU Groups, that screws up the whole Setup, since I NEED to pass the GPU to a VM but I CANNOT pass the Hailo-8L Adapters to the same VM ...

- ASUS P9DWS with Intel Xeon E3-1245 v3 CPU: Kernel Panic immediately at Boot, no matter what


In any case, please open a new thread and include more details (CT config and directory permissions for the start/mount issue, or hardware and error messages for the kernel one).
Will do :) . I kinda gave up on this LXC Setup though, since running Podman inside LXC seems to be a Permissions Nightmare. So far I managed to build it from Source and install it directly on the Proxmox VE Host instead.
 
Pretty sure I saw MANY Threads about Kernel 6.8 leading to Kernel Panics for many People.
Note that at our scale even a few hundred reports would be less than 0.1 %, and there certainly are not that many.
Besides that, common causes are running ancient firmware/BIOS versions or very old HW.
- AMD B550 with an AMD 5950X CPU and AMD RX 6600 XT GPU:
- ASUS P9DWS with Intel Xeon E3-1245 v3 CPU: Kernel Panic immediately at Boot, no matter what
I've got a Xeon E5-2620 v3 cluster running just fine with our 6.8 and 6.11, and we have various developer workstations with CPUs from that AMD family; we're always dogfooding PVE and our kernel for all devs here, after all.
If this is to be improved, we would need some error message from the panic, maybe with logging verbosity increased. Naturally only after ensuring that firmware/BIOS are up-to-date.
 
Note that at our scale even a few hundred reports would be less than 0.1 %, and there certainly are not that many.
Besides that, common causes are running ancient firmware/BIOS versions or very old HW.

I've got a Xeon E5-2620 v3 cluster running just fine with our 6.8 and 6.11, and we have various developer workstations with CPUs from that AMD family; we're always dogfooding PVE and our kernel for all devs here, after all.
If this is to be improved, we would need some error message from the panic, maybe with logging verbosity increased. Naturally only after ensuring that firmware/BIOS are up-to-date.
Sure, it's an Echo Chamber of People having Issues, while the 99%+ of People who have everything running Fine don't "show up" :) .

At the same Time, when an Issue happens, it's always frustrating, because I could see several Reports that seem to indicate Issues on Kernel 6.8, especially with AMD CPUs, but when you dig deeper it's also Intel etc. Surely NOT everybody is affected, and there are a lot of configuration Issues, BIOS Versions, BIOS Settings, etc. that might lead to the Issue.

You might not agree with me installing the Debian Bookworm Kernel, but what is the Alternative (until and unless the Issue gets fixed by the Proxmox Team)?

My BIOS is the latest (which is still very old) for that ASUS P9DWS + Intel Xeon E3-1245 v3.

The AMD B550 + AMD 5950X + AMD Radeon RX 6600 XT Issue, bypassed by blacklisting amdgpu, kinda holds. However I also had to BLOCK Kernel 6.8.12-4-pve, as it would otherwise crash the VM with the GPU passed through like every 10 Minutes. With Kernel 6.8.12-3-pve it crashes "just" every 7 Days or so. In both Cases, the only Solution is a Host Reboot.
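
For Reference, "blacklisting" and "blocking" here means roughly the following (a Sketch of what I did, Kernel Version just as an Example):
Code:
# blacklist the amdgpu module on the host
echo "blacklist amdgpu" > /etc/modprobe.d/blacklist-amdgpu.conf
update-initramfs -u -k all

# pin the older, less crash-prone kernel so the newer one is not booted by default
proxmox-boot-tool kernel pin 6.8.12-3-pve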

Kernel Logs could be provided, but given my Experience in the Past, netconsole is absolutely unreliable over netcat (I NEVER EVER got it to work!), and I don't currently have a remote Syslog Server set up, so it's kinda difficult getting Logs like that. (I don't know if configuring a Serial Console might be an Option; the ASUS MB does NOT have IPMI like many Supermicro Motherboards, although it should have some kind of Remote Management Capability given that it has Intel ME ...)
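
What I tried for netconsole was along these Lines (IPs, MAC and Interface Name are Placeholders), which never reliably worked for me:
Code:
# on the panicking host: send kernel messages via UDP to 192.168.1.20:6666
modprobe netconsole netconsole=6666@192.168.1.30/enp5s0,6666@192.168.1.20/aa:bb:cc:dd:ee:ff

# on the receiving machine:
nc -u -l -p 6666    # or "nc -u -l 6666", depending on the netcat variant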
 
At the same Time, when an Issue happens, it's always frustrating, because I could see several Reports that seem to indicate Issues on Kernel 6.8, especially with AMD CPUs, but when you dig deeper it's also Intel etc. Surely NOT everybody is affected, and there are a lot of configuration Issues, BIOS Versions, BIOS Settings, etc. that might lead to the Issue.
Yeah no, I get that it's frustrating, but often users (and I don't mean you here) make a quick jump to concluding that their problem is obvious and affects so many people that it's simply not understandable that the devs won't fix it, while a wide array of hardware used in our dev workstations, test lab servers and production servers is all running stable PVE/PBS/PMG. Things that break there or get reported in enterprise support are much easier to fix due to having deeper access to information and, at least for our own systems, being able to do some invasive debugging too. That's why I occasionally state this explicitly: alleging that the devs here won't fix something easy to reproduce is not really productive. Not that you did that, but in my experience the statement "there are so many reports" is often not that far away from such a thing, e.g. by someone else feeling validated.

You might not agree with me installing the Debian Bookworm Kernel, but what is the Alternative (until and unless the Issue gets fixed by the Proxmox Team)?
If your host is not exposed and works fine (or at least well enough) with another (older) Proxmox kernel version, then I'd recommend that. If not, then yes, there are indeed not that many options, but we still won't be able to debug all problems of user-space programs on other kernels; we would much rather debug issues in our kernel instead.

Kernel Logs could be provided, but given my Experience in the Past, netconsole is absolutely unreliable over netcat (I NEVER EVER got it to work!), and I don't currently have a remote Syslog Server set up, so it's kinda difficult getting Logs like that. (I don't know if configuring a Serial Console might be an Option; the ASUS MB does NOT have IPMI like many Supermicro Motherboards, although it should have some kind of Remote Management Capability given that it has Intel ME ...)
A serial console adapter over USB can be nice for these things; maybe someone you know has one you can borrow if you cannot or do not want to buy one just for this. Else, the earlyprintk=vga,keep kernel command line option and a monitor might also be able to provide some hints about what/where the kernel actually panics.
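
For example, something like this (untested here, serial port and baud rate depend on the board) would direct kernel output to the first serial port, which you can then capture from another machine:
Code:
# /etc/default/grub on the affected host, then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8"

# on the machine capturing the output via a USB serial adapter:
screen /dev/ttyUSB0 115200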
 
Yeah no, I get that it's frustrating, but often users (and I don't mean you here) make a quick jump to concluding that their problem is obvious and affects so many people that it's simply not understandable that the devs won't fix it, while a wide array of hardware used in our dev workstations, test lab servers and production servers is all running stable PVE/PBS/PMG. Things that break there or get reported in enterprise support are much easier to fix due to having deeper access to information and, at least for our own systems, being able to do some invasive debugging too. That's why I occasionally state this explicitly: alleging that the devs here won't fix something easy to reproduce is not really productive. Not that you did that, but in my experience the statement "there are so many reports" is often not that far away from such a thing, e.g. by someone else feeling validated.


If your host is not exposed and works fine (or at least well enough) with another (older) Proxmox kernel version, then I'd recommend that. If not, then yes, there are indeed not that many options, but we still won't be able to debug all problems of user-space programs on other kernels; we would much rather debug issues in our kernel instead.


A serial console adapter over USB can be nice for these things; maybe someone you know has one you can borrow if you cannot or do not want to buy one just for this. Else, the earlyprintk=vga,keep kernel command line option and a monitor might also be able to provide some hints about what/where the kernel actually panics.
Well, I have lots of USB FTDI Adapters which I normally use for ESP32 and such (thus USB -> RX/TX/VCC/GND Pins) and some normal RS-232 to RS-232 Cables.

The "Problem" that I never understood is: even if I have 2 Computers with RS-232 (I have some Servers I can use for that), I need to make sure that one RS-232 is configured as "Output" and the other as "Input". Or "Host" vs "Target" might be more correct Terminology.

I don't know if Things could otherwise get damaged.

Heck, I cannot even select the Proxmox Kernel in the GRUB Menu, because Keyboard Support AFTER the BIOS and BEFORE the System is booted up seems to be bricked :(. I need to set GRUB_DEFAULT="gnulinux-advanced-/dev/mapper/ata-SSDSC2BB120G7R_PHDVXXXXXXXXXXXXX_crypt_/dev/mapper/ata-SSDSC2BB120G7R_PHDVXXXXXXXXXXXXX_crypt>gnulinux-6.8.12-4-pve-advanced-/dev/mapper/ata-SSDSC2BB120G7R_PHDVXXXXXXXXXXXXX_crypt_/dev/mapper/ata-SSDSC2BB120G7R_PHDVXXXXXXXXXXXXX_crypt" in /etc/default/grub.

EDIT 1: Note the ">" in the GRUB_DEFAULT String, since it's needed to tell GRUB to "Navigate" into that Submenu in order to boot that Kernel (Kudos to Edd Barrett's Answer in https://serverfault.com/questions/8...l-when-trying-to-boot-a-custom-kernel-by-id-o for that).
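
For anybody else hunting down these Entry Names, something like this lists them (Sketch):
Code:
# the IDs usable for GRUB_DEFAULT follow $menuentry_id_option on these lines
grep -E "^\s*(menuentry|submenu)" /boot/grub/grub.cfg

# apply the change after editing /etc/default/grub
update-grub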

EDIT 2: Now for some weird Reason it seems to boot without Panic. Completely weird.
 
The "Problem" that I never understood is: even if I have 2 Computers with RS-232 (I have some Servers I can use for that), I need to make sure that one RS-232 is configured as "Output" and the other as "Input". Or "Host" vs "Target" might be more correct Terminology.
It isn't a matter of software configuration. You need a crossover (sometimes called a "null modem") cable that swaps pins 2 and 3 of the serial connectors. You can buy such a thing on Amazon for about $10 US.
 
