Search results

  1. Can't remove old vm-disk "data set is busy"

    You are welcome :). I also just adapted that line from something I found online. Cannot remember exactly where. I usually put a comment with the "Source" or "Reference" for it. Pretty sure I just googled "linux kill/list processes running inside chroot" or something along those Lines...
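For reference, the usual trick behind those one-liners is comparing each process's `/proc/<pid>/root` symlink against the chroot path. A minimal sketch (the `/mnt/rescue` path is only an example):

```shell
# list_chroot_pids DIR: print PIDs of processes whose root directory
# (the /proc/<pid>/root symlink) points at DIR.
list_chroot_pids() {
    dir="$1"
    for p in /proc/[0-9]*; do
        # readlink fails for kernel threads or other users' processes; skip those
        root=$(readlink "$p/root" 2>/dev/null) || continue
        if [ "$root" = "$dir" ]; then
            echo "${p#/proc/}"
        fi
    done
}

# e.g. kill everything still running inside a chroot at /mnt/rescue:
#   list_chroot_pids /mnt/rescue | xargs -r kill
```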
  2. Can't remove old vm-disk "data set is busy"

    You are Welcome :). I agree that unfortunately Google > Built-in Search Feature :rolleyes:. If you ever happen to rescue a system from LiveCD/LiveUSB, and the Pool (of the Chrooted System that you attempt to recover) refuses to export (Cannot export 'rpool': Pool is busy Error Message), that's...
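For that "pool is busy" case the teardown order matters: everything mounted at or below the chroot has to go before `zpool export` will succeed. A sketch (the `/mnt/rescue` mountpoint and `rpool` name are placeholders; on modern util-linux, `umount -R /mnt/rescue` does the recursive part in one go):

```shell
# mounts_under DIR: list mount points at or below DIR, deepest first --
# roughly the order to unmount them in. Reads /proc/self/mounts format
# on stdin so the list can be reviewed before acting on it.
mounts_under() {
    awk -v d="$1" '$2 == d || index($2, d "/") == 1 { print $2 }' | sort -r
}

# typical LiveCD/LiveUSB rescue teardown (sketch):
#   mounts_under /mnt/rescue < /proc/self/mounts | xargs -r -n 1 umount
#   zpool export rpool
```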
  3. system stuck in C0 and C1 pkg C states

    Not sure if you managed to solve it or not, but I'm going through different Systems to try to enable ASPM. Some of them work, some of them seem quite Stubborn for no apparent Reason. For the most likely causes I tried to compile a README (with Links to other Tools / Patch Scripts)...
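For anyone else chasing this: before patching anything, it's worth confirming which ASPM policy the kernel is actually running with. A small helper, assuming the standard mainline sysfs path:

```shell
# aspm_policy: print the active PCIe ASPM policy -- the bracketed entry in
# /sys/module/pcie_aspm/parameters/policy, e.g.
# "[default] performance powersave powersupersave" -> "default".
aspm_policy() {
    sed -n 's/.*\[\([a-z]*\)\].*/\1/p'
}

# usage on a live system:
#   aspm_policy < /sys/module/pcie_aspm/parameters/policy
```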
  4. Proxmox with Second Hand Enterprise SSDs

    Intenso SSDs I read VERY BAD Things about, I assume that's why you have triple Redundancy. About Spinning Rust, no complaints so far with HGST / Toshiba, but knocking on Wood ...
  5. Proxmox with Second Hand Enterprise SSDs

    And about SATA/SAS, anything good there in your Opinion ? I really don't have that many slots for NVME (if any).
  6. Problem with HP NC364T and pfsense

    Old Post, but I started having Similar Errors, although NOT with the (uninitialized) Part. [43727.251415] e1000e 0000:04:00.1 enp4s0f1: Hardware Error [43727.287525] e1000e 0000:04:00.0 enp4s0f0: Hardware Error [46873.169962] e1000e 0000:04:00.1 enp4s0f1: Hardware Error [46873.204187] e1000e...
  7. Optimizing Virtual Disk Volume Block Size (Host) and EXT4 Blocks/Groups (Guest)

    It just occurred to me that the Reason why I'm getting such a MASSIVE Overhead on my Backup Server (which has 2 x RAIDZ-2 VDEVs of 6 Disks each) is because the Default ZVOL Block Size on Proxmox VE is (or probably "was" for a long Time, and many of my Virtual Disks are quite Old) 8k, which results in a Space...
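The overhead can be sanity-checked with the commonly described RAIDZ allocation rule: parity sectors are added per data row, then the whole allocation is padded up to a multiple of nparity+1 sectors. A rough sketch of that arithmetic (not the exact allocator), assuming ashift=12, i.e. 4 KiB sectors:

```shell
# raidz_alloc_sectors DATA_SECTORS DISKS NPARITY:
# approximate sectors actually allocated for one block on RAIDZ --
# parity added per data row, then padded to a multiple of NPARITY+1.
raidz_alloc_sectors() {
    data=$1; disks=$2; par=$3
    stripe=$((disks - par))                      # data sectors per full row
    rows=$(( (data + stripe - 1) / stripe ))     # rows needed (ceiling)
    total=$(( data + rows * par ))               # data + parity sectors
    pad=$((par + 1))
    echo $(( (total + pad - 1) / pad * pad ))    # round up to multiple of par+1
}

# 8K volblock (2 x 4K sectors) on a 6-disk RAIDZ-2:
raidz_alloc_sectors 2 6 2    # 6 sectors = 24 KiB stored per 8 KiB of data
# 16K volblock on the same vdev:
raidz_alloc_sectors 4 6 2    # 6 sectors = 24 KiB stored per 16 KiB of data
```

So under this rule an 8k zvol on a 6-disk RAIDZ-2 costs about 3x its logical size, while 16k already halves that.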
  8. Proxmox with Second Hand Enterprise SSDs

    Would you be able to give a quick Summary? To read that Forum Section seems like a Full Time Job :eek:. There were some Auctions of NVME PCIe x8 Samsung MZ-PLK3T20 3.2TB as well as NVME PCIe x8 Sandisk SDFADAMOS-3T20-SF1 ioMemory 3.2TB Today, but (even though they were sold quite Cheap, 125...
  9. Proxmox with Second Hand Enterprise SSDs

    But what SSD would you recommend besides the Intel S3610 ? I have lots of space for SATA, not so much for PCIe (or M.2, let alone U.2) since I don't really have Slots / Caddies for those (in the Supermicro Chassis) or PCIe Lanes available (in Desktop-Like chassis such as a Fractal Define R4/R5/XL...
  10. Proxmox with Second Hand Enterprise SSDs

    Well, I think they are marketed as Mixed Use (whereas the S3700/S3710 was for more Write-Intensive Workloads). But yes, agreed, 4.2TB (or "4TB" for short just to avoid Confusion with the Decimal / 1k Separator <.> vs <,> vs <'>) - which might actually be 34TB due to the wrong Sector/Block size...
  11. Proxmox with Second Hand Enterprise SSDs

    So you can confirm VERY high values for the same Attributes (Raw_Read_Error_Rate and Read_Soft_Error_Rate) ? What about the Reallocated Sector Count which I have at 1 or 2, should I be worried about that ? If I could trust SMART Attribute 180, the Unused_Rsvd_Blk_Cnt_Tot is set at around...
  12. Proxmox with Second Hand Enterprise SSDs

    Do you mind providing a bit more Context ? Apart from SMART 5 (Reallocated Sectors Count), SMART 197 (Current Pending Sector Count), SMART 198 (Uncorrectable Sector Count) I don't have the Others. And SMART 5 is either 1 or 2. Bigger than Zero, yes, but my main worry is actually SMART 1...
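To pull just those raw values out for tracking over time, the `smartctl -A` attribute table can be filtered by ID (the ID is the first column of smartmontools' table; the device path in the usage comment is only an example):

```shell
# smart_raw ATTR_ID: print the RAW_VALUE (last column) for one attribute
# from `smartctl -A` table output on stdin.
smart_raw() {
    awk -v id="$1" '$1 == id { print $NF }'
}

# usage on a real drive:
#   smartctl -A /dev/sda | smart_raw 5     # Reallocated_Sector_Ct
#   smartctl -A /dev/sda | smart_raw 197   # Current_Pending_Sector
```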
  13. Proxmox with Second Hand Enterprise SSDs

    I had many problems related to VERY HIGH iowait on my Proxmox VE Systems using Crucial MX500, which is a Consumer SSD (and one with very little DRAM, so once that fills up, Performance drops like a Rock). Now I got 2 x Intel DC S3610 1.6 TB SSDs which should be very good for VM Storage or also...
  14. lxc_init: 845 Failed to run lxc.hook.pre-start for container "200"

    Well I have lots of USB FTDI Adapters which I normally use for ESP32 and such (thus USB -> RX/TX/VCC/GND Pins) and some normal RS-232 to RS-232 Cables. The "Problem" that I never understood is: even if I have 2 Computers with RS-232 (I have some Servers I can use for that), I need to make sure...
  15. lxc_init: 845 Failed to run lxc.hook.pre-start for container "200"

    Sure, it's an Echo Chamber of the People having Issues, while the 99%+ of People who have everything running Fine don't "show up" :) . At the same Time, when an Issue happens, it's always frustrating, because I could see several Reports that seem to Indicate Issues on Kernel 6.8 especially...
  16. lxc_init: 845 Failed to run lxc.hook.pre-start for container "200"

    Pretty sure I saw MANY Threads about Kernel 6.8 leading to Kernel Panics for many People. My Experience at least on 2 Systems: - AMD B550 with AMD 5950x CPU with AMD RX 6600 XT GPU, Kernel Panic at Boot. Needed to blacklist amdgpu, then no more Panics. But of course given the limited number of...
  17. lxc_init: 845 Failed to run lxc.hook.pre-start for container "200"

    I have a similar Issue unfortunately on 8.3.0: pve-manager/8.3.0/c1689ccb1065a83b (running kernel: 6.10.11+bpo-amd64) YES, I know that is the Debian Backports Kernel. Unfortunately Proxmox VE Kernels have a BIG Tendency to Panic on many Systems recently. Both 6.5 and 6.8 for that Matter...
  18. problem when I add a second NVME disk

    Hi, Old Post I know but I thought I'd throw in my 0.02$. I got a similar Error Message, but it doesn't look to be caused by AppArmor: run_buffer: 571 Script exited with status 1 lxc_init: 845 Failed to run lxc.hook.pre-start for container "121" __lxc_start: 2034 Failed to initialize container...
  19. Proxmox Kernel 6.8.12-2 Freezes (again)

    I kind of agree with the Approach. I hit the same Kernel Panic at boot Time with an old ASUS P9D WS + Intel Xeon E3-1245 v3 CPU. However, 2 Things to note: a. I would NOT trust Ubuntu's ZFS Packaging from 100km Away ... They screwed up pretty badly once and that caused major Data Loss. Plus the...
  20. Infiniband HCA and ASPM

    I recently found out (kinda lived under a Rock in that Regard :rolleyes:) that the NICs I was using (Mellanox ConnectX-2 and ConnectX-3) do NOT Support ASPM and therefore the CPU will never be able to achieve high Power Saving States (anything above C2/C3 IIRC). Same goes for my preferred HBA...
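A quick way to check any card for this is the `LnkCap` / `LnkCtl` lines in `lspci -vvv` output: `LnkCap` shows what the device advertises, `LnkCtl` what is currently enabled. A small parser sketch (reads `lspci -vvv` text for one device on stdin; the device address in the usage comment is an example):

```shell
# aspm_status: summarize ASPM capability vs. current state from the
# LnkCap/LnkCtl lines of `lspci -vvv` output on stdin.
aspm_status() {
    awk '
        /LnkCap:/ { if (match($0, /ASPM [^,;]*/)) cap = substr($0, RSTART + 5, RLENGTH - 5) }
        /LnkCtl:/ { if (match($0, /ASPM [^,;]*/)) ctl = substr($0, RSTART + 5, RLENGTH - 5) }
        END { printf "supported=%s enabled=%s\n", cap, ctl }
    '
}

# usage:
#   lspci -vvv -s 04:00.0 | aspm_status
```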