I have a similar issue, unfortunately, on 8.3.0:
Code:
pve-manager/8.3.0/c1689ccb1065a83b (running kernel: 6.10.11+bpo-amd64)
YES, I know that is the Debian Backports kernel. Unfortunately, the Proxmox VE kernels have had a BIG tendency to panic on many systems recently, both 6.5 and 6.8 for that matter. Installing the Debian Backports kernel was the only solution that allowed the system to somewhat work ...
But there was no apparent / clear cause.
The workaround for this PRIVILEGED container (Fedora Linux 41) was to do the NFS mount from inside the LXC container, using the ro,nolock options. This was also a PITA, because rpc-statd.service kept failing inside the container (and the logs are completely useless, since they don't give any hint as to why this happens):
Code:
root@HOST:/home/podman# systemctl status rpc-statd
× rpc-statd.service - NFS status monitor for NFSv2/3 locking.
Loaded: loaded (/usr/lib/systemd/system/rpc-statd.service; static)
Drop-In: /usr/lib/systemd/system/service.d
└─10-timeout-abort.conf, 50-keep-warm.conf
Active: failed (Result: exit-code) since Sun 2024-12-01 19:19:50 CET; 10s ago
Invocation: f0caa9ad31384b4abf3ff6bf98206589
Docs: man:rpc.statd(8)
Process: 432 ExecStart=/usr/sbin/rpc.statd (code=exited, status=1/FAILURE)
Mem peak: 1.2M
CPU: 17ms
Dec 01 19:19:50 HOST systemd[1]: Starting rpc-statd.service - NFS status monitor for NFSv2/3 locking....
Dec 01 19:19:50 HOST rpc.statd[433]: Version 2.8.1 starting
Dec 01 19:19:50 HOST rpc.statd[433]: Flags: TI-RPC
Dec 01 19:19:50 HOST rpc.statd[433]: Initializing NSM state
Dec 01 19:19:50 HOST systemd[1]: rpc-statd.service: Control process exited, code=exited, status=1/FAILURE
Dec 01 19:19:50 HOST systemd[1]: rpc-statd.service: Failed with result 'exit-code'.
Dec 01 19:19:50 HOST systemd[1]: Failed to start rpc-statd.service - NFS status monitor for NFSv2/3 locking..
Hence the reason for the ro,nolock mount options. Note that this was a PRIVILEGED LXC container with the Nesting and NFS features enabled.
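In case anyone wants to reproduce the workaround, here is a minimal sketch of the mount (the server address, export path and mount point are placeholders, not from my actual setup). The nolock option disables NLM locking, which is what lets the mount work even though rpc-statd refuses to start:

Code:
# One-off mount, read-only and without NLM locking (no rpc-statd needed)
mount -t nfs -o ro,nolock 192.168.1.10:/export/data /mnt/data

# Or the equivalent /etc/fstab entry inside the container
192.168.1.10:/export/data  /mnt/data  nfs  ro,nolock  0  0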
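And for completeness: the Nesting and NFS features mentioned above should translate to a line like the following in the container config on the host (/etc/pve/lxc/<CTID>.conf, with <CTID> being your container ID; set either there or via the GUI under Options > Features):

Code:
# /etc/pve/lxc/<CTID>.conf on the Proxmox VE host
features: mount=nfs,nesting=1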