Search results

  1. lxc_init: 845 Failed to run lxc.hook.pre-start for container "200"

    Sure, it's an Echo Chamber of the People having Issues, while the 99%+ of People who have everything running fine don't "show up" :). At the same Time, when an Issue happens, it's always frustrating, because I could see several Reports that seem to indicate Issues on Kernel 6.8 especially...
  2. lxc_init: 845 Failed to run lxc.hook.pre-start for container "200"

    Pretty sure I saw MANY Threads about Kernel 6.8 leading to Kernel Panics for many People. My Experience, at least on 2 Systems: - AMD B550 with AMD 5950X CPU with AMD RX 6600 XT GPU, Kernel Panic at Boot. I needed to blacklist amdgpu, then no more Panics. But of course, given the limited number of...
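
    For reference, a minimal sketch of what blacklisting amdgpu looks like on a Debian/Proxmox Host (the file name is arbitrary; the initramfs rebuild makes the blacklist apply to early boot as well):

    ```bash
    # Keep the amdgpu driver from binding at boot (e.g. to work around
    # boot-time panics, or to free the GPU for passthrough).
    echo "blacklist amdgpu" > /etc/modprobe.d/blacklist-amdgpu.conf

    # Rebuild the initramfs so the blacklist also applies to early boot.
    update-initramfs -u -k all
    ```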
  3. lxc_init: 845 Failed to run lxc.hook.pre-start for container "200"

    I have a similar Issue unfortunately on 8.3.0: pve-manager/8.3.0/c1689ccb1065a83b (running kernel: 6.10.11+bpo-amd64). YES, I know that is the Debian Backports Kernel. Unfortunately, Proxmox VE Kernels have a BIG Tendency to Panic on many Systems recently. Both 6.5 and 6.8 for that Matter...
  4. problem when I add a second NVME disk

    Hi, old Post I know, but I thought I'd throw in my $0.02. I got a similar Error Message, but it doesn't look to be caused by AppArmor: run_buffer: 571 Script exited with status 1 lxc_init: 845 Failed to run lxc.hook.pre-start for container "121" __lxc_start: 2034 Failed to initialize container...
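
    A debugging sketch for this class of Failure (container ID 121 taken from the snippet above): run the container in the foreground with debug logging, so the output of the failing pre-start hook actually gets captured:

    ```bash
    # Start the container in the foreground with verbose LXC logging;
    # the pre-start hook's error output lands in the log file.
    lxc-start -n 121 -F -l DEBUG -o /tmp/lxc-121.log

    # Then inspect the log for the actual hook failure.
    grep -i -A5 "pre-start" /tmp/lxc-121.log
    ```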
  5. Proxmox Kernel 6.8.12-2 Freezes (again)

    I kind of agree with the Approach. I hit the same Kernel Panic at boot Time with an old ASUS P9D WS + Intel Xeon E3-1245 v3 CPU. However, 2 Things to note: a. I would NOT trust Ubuntu's ZFS Packaging from 100km Away ... They screwed up pretty badly once and that caused major Data Loss. Plus the...
  6. Infiniband HCA and ASPM

    I recently found out (kinda lived under a Rock in that Regard :rolleyes:) that the NICs I was using (Mellanox ConnectX-2 and ConnectX-3) do NOT support ASPM and therefore the CPU will never be able to achieve high Power Saving States (anything above C2/C3 IIRC). Same goes for my preferred HBA...
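
    Whether a Card advertises ASPM can be checked from Linux; a sketch (the PCI address is a placeholder, adjust to your System):

    ```bash
    # LnkCap shows what the device supports, LnkCtl what is active;
    # "ASPM not supported" here means the CPU can't reach deep C-states.
    lspci -vv -s 01:00.0 | grep -i aspm
    ```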
  7. Random 6.8.4-2-pve kernel crashes

    Well, IOMMU/SR-IOV don't matter to you if you're not USING them, but they might still cause ISSUES if enabled. So if you are 100% sure that you do NOT need them, I'd suggest trying to disable them. Do you have actual Logs / Messages of how the HOST crashes? Without them we have to make a lot of guesses...
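
    Besides the BIOS toggles, the IOMMU can also be switched off from the Kernel side; a sketch assuming GRUB on an AMD System (Intel would use intel_iommu=off instead):

    ```bash
    # /etc/default/grub: add amd_iommu=off to the kernel command line, e.g.
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=off"
    editor /etc/default/grub

    # Regenerate the GRUB config and reboot for the change to take effect.
    update-grub
    ```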
  8. Random 6.8.4-2-pve kernel crashes

    Last shot in the Dark... Did you try to play with the IOMMU, SR-IOV, Resizable BAR and Above 4G Decoding Settings in the BIOS? Power-related Settings such as P-States and C-States? IIRC Resizable BAR, while beneficial in some Cases, could cause Issues. I must admit when I had this...
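
    Whether Resizable BAR is actually in effect for a Device can be verified from Linux too; a sketch (Device address is a placeholder):

    ```bash
    # Shows the "Physical Resizable BAR" capability with current and
    # supported BAR sizes, if the device and platform expose it.
    lspci -vv -s 0b:00.0 | grep -i -A3 "resizable bar"
    ```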
  9. Random 6.8.4-2-pve kernel crashes

    Hi @chacha. Sorry for the late Reply. The HOST or the GUEST VM? I managed to achieve around 7 Days of uptime with my RX 6600 before it would crash ("Internal VM Error", marked in Yellow with an Exclamation Mark in the GUI). At that Point, the only Thing to do is to reboot the Host. I didn't...
  10. [SOLVED] AMD GPU inaccessible after VM Poweroff: Unable to change power state from D3cold to D0, device inaccessible.

    For info, the RX 6600 **apparently** does NOT need that reset "trick". With the currently configured Options, I see (knock on Wood) approximately 7 Days of uptime with my Frigate VM before it crashes. Once it does, I still need to perform a full Host reboot though. Better than before, but still...
  11. [SOLVED] AMD GPU inaccessible after VM Poweroff: Unable to change power state from D3cold to D0, device inaccessible.

    Unfortunately that does NOT work for me :(. AMD RX 6600, vendor-reset Module installed & loaded at boot, but still an "Invalid Argument" Error... root@pve:~# lspci -kk | grep -i vga -B5 -A10 0b:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 23 [Radeon RX 6600/6600...
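
    For context, the usual vendor-reset wiring, as a sketch (PCI Address taken from the lspci Output above; the reset_method sysfs Attribute only exists on newer Kernels, and whether this helps here is exactly what's in question):

    ```bash
    # Load the vendor-reset module at boot.
    echo "vendor-reset" >> /etc/modules

    # On newer kernels the device's reset method must be switched to
    # "device_specific", otherwise vendor-reset is never invoked.
    echo "device_specific" > /sys/bus/pci/devices/0000:0b:00.0/reset_method
    ```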
  12. Problems with GPU Passthrough since 8.2

    Concerning the Error kvm: ../hw/pci/pci.c:1637: pci_irq_handler: Assertion `0 <= irq_num && irq_num < PCI_NUM_PINS' failed. TASK ERROR: start failed: QEMU exited with code 1 I also seem to be affected. I originally reported this in another Thread ->...
  13. GPU Passthrough ends up with PCI_NUM_PINS error

    Unfortunately that doesn't seem to help in my Case :(. Also, /sys/devices/pci0000:00/0000:00:03.1/0000:09:00.0/0000:0a:00.0/0000:0b:00.1/d3cold_allowed (AMD RX 6600) was already set to 1. I tried setting it to 0, but that doesn't really solve the Issue. dmesg says the following (with default...
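
    The Toggle in question, as a sketch (the long sysfs Path shortened via the /sys/bus/pci/devices Symlink):

    ```bash
    # Check the current D3cold policy for the GPU function.
    cat /sys/bus/pci/devices/0000:0b:00.1/d3cold_allowed

    # Disallow D3cold so the device cannot enter that state at all.
    echo 0 > /sys/bus/pci/devices/0000:0b:00.1/d3cold_allowed
    ```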
  14. Random 6.8.4-2-pve kernel crashes

    I'm NOT sure it's the same Issue described in this Thread, but I'm also getting a Kernel Panic on 6.8.x AT BOOT TIME. This is approx. 2-4s after GRUB boots the Kernel, before Clevis even unlocks the LUKS-encrypted Disks and ZFS mounts the Filesystem. I had the Impression all/most Users affected...
  15. Proxmox VE 8.2.2 - High IO delay

    Not sure, to be honest (the System where this shows the most is down for other Reasons now). I'm curious to see if it would improve, but cautiously skeptical. After all, everybody seems to point the Finger and blame "Consumer SSDs" (or HDDs), whereas in my View it's a Kernel (or Kernel + ZFS) Thing...
  16. Should I move from KVM to LXC?

    Not a Factor for the first. I always use ZFS on the Root rpool (and sometimes I also have a separate zdata Storage Pool, again on ZFS). Lucky you :). Heck, even the Nextcloud VM takes 3+ GB of RAM. Same with MediaWiki at 3+ GB of RAM. Also, the test Seafile VM that's not doing anything is taking 3+ GB of...
  17. Should I move from KVM to LXC?

    Well, for Podman (Docker) I usually set up a KVM Virtual Machine for that Purpose. Initially Debian, now slowly migrating to Fedora because of its more recent Podman Support. Many Services are not even "deployed", rather some Kind of "Work in Progress" that hasn't progressed for several Months/Years...
  18. Should I move from KVM to LXC?

    I have A LOT of VMs running on Proxmox VE across several Servers. Most of the VMs run Debian GNU/Linux Bookworm. Overall, the KVMs seem quite inefficient, especially in Terms of: - RAM Usage (a bit less on Xeon E3 v5/v6, since up to 64GB RAM can be used there) - Disk Space (each VM takes 16GB...
  19. Proxmox VE 8.2.2 - High IO delay

    I migrated my Podman Data from ZFS on top of a ZVOL to EXT4 on top of a ZVOL. I tried to do zpool trim rpool on the Proxmox VE Host to see if that would improve Things. It didn't, unfortunately; the Issue still persists. I also tested on Kernel 6.5.x, same Thing. IOWait can jump above 80% very...
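
    The Commands involved, for reference (zpool status -t shows TRIM Progress; iostat comes from the sysstat Package):

    ```bash
    # Manually TRIM the pool and watch the progress per vdev.
    zpool trim rpool
    zpool status -t rpool

    # Watch per-device utilisation while the IOWait spike reproduces.
    iostat -x 5
    ```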
  20. [TUTORIAL] Proxmox VE 8.0 Mainline Kernel Builds

    Well, Kernel 6.1.x is the LTS one on Kernel.org and the default one provided by Debian. If you go the Debian Backports Route, you will get Kernel 6.9.x at the Moment. I also have linux-image-6.6.13+bpo-amd64, but that's because I installed it; it's not in the Repos anymore (and quite old ...
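
    Pulling the Backports Kernel on Debian Bookworm, as a sketch (assuming the Repo line isn't already present):

    ```bash
    # Enable bookworm-backports.
    echo "deb http://deb.debian.org/debian bookworm-backports main" \
        > /etc/apt/sources.list.d/backports.list

    # Install the current backports kernel metapackage.
    apt update
    apt install -t bookworm-backports linux-image-amd64
    ```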