Search results

  1. Proxmox with Second Hand Enterprise SSDs

    Do you mind providing a bit more Context ? Apart from SMART 5 (Reallocated Sectors Count), SMART 197 (Current Pending Sector Count), SMART 198 (Uncorrectable Sector Count) I don't have the Others. And SMART 5 is either 1 or 2. Bigger than Zero, yes, but my main worry is actually SMART 1...
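
    For anyone wanting to pull exactly those Counters: a quick sketch, assuming smartmontools is installed and the SSD sits at /dev/sda (adjust the Device Node for your System):

      # dump all SMART attributes, then keep the three sector-related counters (IDs 5, 197, 198)
      smartctl -A /dev/sda | grep -E '^ *(5|197|198) '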
  2. Proxmox with Second Hand Enterprise SSDs

    I had many problems related to VERY HIGH iowait on my Proxmox VE Systems using Crucial MX500, which is a Consumer SSD (and one with very little DRAM, so once that fills up, Performance drops like a Rock). Now I got 2 x Intel DC S3610 1.6 TB SSDs which should be very good for VM Storage or also...
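
    A rough way to see whether a Box is actually hitting that iowait Wall (assuming the sysstat Package is installed for iostat):

      # sample CPU iowait and per-device latency every 2 seconds
      iostat -x 2
      # for ZFS-backed storage, the pool view can be more telling
      zpool iostat -v 2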
  3. lxc_init: 845 Failed to run lxc.hook.pre-start for container "200"

    Well I have lots of USB FTDI Adapters which I normally use for ESP32 and such (thus USB -> RX/TX/VCC/GND Pins) and Some RS-232 to RS-232 Normal Cables. The "Problem" that I never understood is: even if I have 2 Computers with RS-232 (I have some Servers I can use for that), I need to make sure...
  4. lxc_init: 845 Failed to run lxc.hook.pre-start for container "200"

    Sure, it's an Echo Chamber of the People having Issues, while the 99%+ of People who have everything running Fine don't "show up" :) . At the same Time, when an Issue happens, it's always frustrating, because I could see several Reports that seem to Indicate Issues on Kernel 6.8 especially...
  5. lxc_init: 845 Failed to run lxc.hook.pre-start for container "200"

    Pretty sure I saw MANY Threads about Kernel 6.8 leading to Kernel Panics for many People. My Experience at least on 2 Systems: - AMD B550 with AMD 5950x CPU with AMD RX 6600 XT GPU, Kernel Panic at Boot. Need to blacklist amdgpu, then no more Panics. But of course given the limited number of...
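
    The blacklisting itself is the usual modprobe.d Step on a Debian-based Host; a minimal sketch (the File Name is arbitrary):

      # prevent the amdgpu module from loading at boot
      echo "blacklist amdgpu" > /etc/modprobe.d/blacklist-amdgpu.conf
      # rebuild the initramfs so the blacklist also applies during early boot
      update-initramfs -u -k all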
  6. lxc_init: 845 Failed to run lxc.hook.pre-start for container "200"

    I have a similar Issue unfortunately on 8.3.0: pve-manager/8.3.0/c1689ccb1065a83b (running kernel: 6.10.11+bpo-amd64) YES, I know that is the Debian Backports Kernel. Unfortunately Proxmox VE Kernels have a BIG Tendency to Panic on many Systems recently. Both 6.5 and 6.8 for that Matter...
  7. problem when I add a second NVME disk

    Hi, Old Post I know but I thought I'd throw in my $0.02. I got a similar Error Message, but it doesn't look to be caused by AppArmor: run_buffer: 571 Script exited with status 1 lxc_init: 845 Failed to run lxc.hook.pre-start for container "121" __lxc_start: 2034 Failed to initialize container...
  8. Proxmox Kernel 6.8.12-2 Freezes (again)

    I kind of agree with the Approach. I hit the same Kernel Panic at boot Time with an old ASUS P9D WS + Intel Xeon E3-1245 v3 CPU. However, 2 Things to note: a. I would NOT trust Ubuntu's ZFS Packaging from 100km Away ... They screwed up pretty badly once and that caused major Data Loss. Plus the...
  9. Infiniband HCA and ASPM

    I recently found out (kinda lived under a Rock in that Regard :rolleyes:) that the NICs I was using (Mellanox ConnectX-2 and ConnectX-3) do NOT Support ASPM and therefore the CPU will never be able to achieve high Power Saving States (anything above C2/C3 IIRC). Same goes for my preferred HBA...
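
    Whether a given NIC/HBA advertises ASPM at all can be checked from lspci; a quick sketch, assuming the Device Address is known (replace 04:00.0 with the actual Address):

      # LnkCap shows what the device supports, LnkCtl shows what is currently enabled
      lspci -vv -s 04:00.0 | grep -E 'LnkCap|LnkCtl|ASPM'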
  10. Random 6.8.4-2-pve kernel crashes

    Well IOMMU/SR-IOV don't matter to you for USING them, but they might still cause ISSUES if enabled. So if you are 100% sure that you do NOT need them, I'd suggest trying to disable them. Do you have actual Logs / Messages of how the HOST Crashes ? Without it we have to make a lot of guesses...
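
    Assuming persistent journalling is enabled on the Host, the Messages from the Boot that crashed can usually be recovered after the Reboot along these lines:

      # list the recorded boots, then dump kernel messages from the previous (crashed) boot
      journalctl --list-boots
      journalctl -k -b -1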
  11. Random 6.8.4-2-pve kernel crashes

    Last shot in the Dark ... Did you try to play with the IOMMU, SR-IOV, Re-sizeable BAR and Above 4G Decoding Settings in the BIOS ? Power Related Settings such as P-States and C-States ? IIRC the Re-sizeable BAR, while beneficial in some cases, could cause Issues. I must admit when I had this...
  12. Random 6.8.4-2-pve kernel crashes

    Hi @chacha . Sorry for the Late Reply. The HOST or the GUEST VM ? I managed to achieve around 7 Days uptime with my RX 6600 before it would crash ("Internal VM Error", marked in Yellow with an Exclamation Mark in the GUI). At that point, the only Thing to do is to reboot the Host. I didn't...
  13. [SOLVED] AMD GPU inaccessible after VM Poweroff: Unable to change power state from D3cold to D0,device inaccessible.

    For info, the RX 6600 **apparently** does NOT need that reset "trick". With the currently configured Options, I see (knock on Wood) approximately 7 Days uptime with my Frigate VM before it crashes. Once it does, I still need to perform a full Host reboot though. Better than before, but still...
  14. [SOLVED] AMD GPU inaccessible after VM Poweroff: Unable to change power state from D3cold to D0,device inaccessible.

    Unfortunately that does NOT work for me :( . AMD RX 6600, vendor-reset Module installed & loaded at boot, but still "Invalid Argument" Error ... root@pve:~# lspci -kk | grep -i vga -B5 -A10 0b:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 23 [Radeon RX 6600/6600...
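
    For completeness, two checks that often come up alongside vendor-reset, assuming the GPU really is at 0b:00.0 as in the lspci Output above and the Kernel exposes the reset_method Attribute:

      # confirm the module is actually loaded
      lsmod | grep vendor_reset
      # on recent kernels, the device-specific (vendor-reset) method has to be selected explicitly
      cat /sys/bus/pci/devices/0000:0b:00.0/reset_method
      echo device_specific > /sys/bus/pci/devices/0000:0b:00.0/reset_method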
  15. Problems with GPU Passthrough since 8.2

    Concerning the Error kvm: ../hw/pci/pci.c:1637: pci_irq_handler: Assertion `0 <= irq_num && irq_num < PCI_NUM_PINS' failed. TASK ERROR: start failed: QEMU exited with code 1 I also seem to be affected. I originally reported this in another Thread ->...
  16. GPU Passthrough ends up with PCI_NUM_PINS error

    Unfortunately that doesn't seem to help in my Case :(. Also /sys/devices/pci0000:00/0000:00:03.1/0000:09:00.0/0000:0a:00.0/0000:0b:00.1/d3cold_allowed (AMD RX 6600) was already set to 1. I tried setting it to 0, but that doesn't really solve the Issue. dmesg says the following (with default...
  17. Random 6.8.4-2-pve kernel crashes

    I'm NOT sure it's the same Issue described in this thread, but I'm also getting a Kernel Panic on 6.8.x AT BOOT TIME. This is approx. 2-4s after GRUB boots the Kernel, before Clevis even unlocks the LUKS encrypted Disks and ZFS mounts the Filesystem. I had the Impression all/most Users affected...
  18. Proxmox VE 8.2.2 - High IO delay

    Not sure to be honest (the system where this shows the most is down for other reasons now). I'm curious to see if it would improve, but cautiously skeptical. After all, everybody seems to point the Finger and Blame "Consumer SSDs" (or HDDs) whereas in my View it's a Kernel (or Kernel + ZFS) Thing...
  19. Should I move from KVM to LXC ?

    Not a factor for the First. I always use ZFS on Root rpool (and sometimes I also have a separate zdata Storage Pool, again on ZFS). Lucky you :). Heck even Nextcloud VM takes 3+ GB of RAM. Same with Mediawiki at 3+ GB of RAM. Also the test Seafile VM that's not doing anything is taking 3+ GB of...
  20. Should I move from KVM to LXC ?

    Well for Podman (Docker) I usually setup a KVM Virtual Machine for that Purpose. Initially Debian, now slowly migrating to Fedora because of more recent Podman Support. Many Services are not even "Deployed". Rather some kind of "Work in Progress" that hasn't progressed for several Months/Years...