Recent content by blebo

  1. Issue with Intel Arc A380 PCIe passthrough

    What version of Ubuntu are you using? You need at least 22.04.3 LTS for kernel 6.2 with Arc support (maybe try upgrading your existing Ubuntu VM to the HWE kernel).
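As a quick sanity check inside the VM, something like this compares the running kernel against the 6.2 minimum mentioned above (the HWE package name assumes Ubuntu 22.04; adjust for your release):

```shell
# Compare the running kernel version against the 6.2 minimum for Arc support.
required=6.2
current=$(uname -r | cut -d- -f1)
oldest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$oldest" = "$required" ]; then
    echo "kernel $current should be new enough for Arc"
else
    # Assumed HWE package name for 22.04; check your release first.
    echo "kernel $current too old; try: sudo apt install linux-generic-hwe-22.04"
fi
```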
  2. Swapping my Passthrough target from VM to LXC

    To revert, start with the following:
    - Remove/comment blacklist nvidia in /etc/modprobe.d/blacklist.conf
    - Remove/comment options vfio-pci ids=10de:1cb3,10de:0fb9 disable_vga=1 in /etc/modprobe.d/vfio.conf
    - Rerun the initramfs update
    - Reboot
    Then install the Nvidia tools on the host, create LXC...
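The first steps above could be scripted roughly like this (the file paths are the ones named in the post; the sed patterns assume the lines start exactly as shown, so double-check your files first):

```shell
# comment_out PATTERN FILE: prefix lines starting with PATTERN with '#',
# keeping a .bak backup of the file.
comment_out() {
    sed -i.bak "s/^$1/#&/" "$2"
}

# On the PVE host you would then run (as root):
#   comment_out 'blacklist nvidia' /etc/modprobe.d/blacklist.conf
#   comment_out 'options vfio-pci' /etc/modprobe.d/vfio.conf
#   update-initramfs -u -k all
#   reboot
```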
  3. Thunderbolt Networking?

    I have this in /etc/network/interfaces on my NUC-to-QNAP link via Thunderbolt, which might help:

    iface thunderbolt0 inet manual

    auto vmbr8
    iface vmbr8 inet static
        address w.x.y.z/24
        bridge-ports thunderbolt0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware...
  4. Intel iGPU passthrough error: VFIO_DEVICE_SET_IRQS failure

    Try hostpci0: 0000:00:02,pcie=1 in the VM config? What is the output of lspci -k -s 00:02?
  5. [SOLVED] Issues with Intel ARC A770M GPU Passthrough on NUC12SNKi72 (vfio-pci not ready after FLR or bus reset)

    Success!! TLDR: add a hook script to clear the reset_methods at VM pre-start. Catting the reset_methods gave flr bus, the two methods not working above. However, clearing it completely allowed the VM to boot and see the attached dGPU and its audio controller: echo >...
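A minimal version of such a hook script might look like the sketch below. The PCI address 0000:03:00.0 and the script name are placeholders, and note that on recent kernels the sysfs attribute is spelled reset_method (singular); writing an empty string to it disables all reset methods. It would be registered with something like qm set <vmid> --hookscript local:snippets/clear-reset-method.sh:

```shell
#!/bin/bash
# Proxmox hookscript: invoked as <script> <vmid> <phase>.
# At pre-start, clear the device's reset_method so vfio-pci stops
# attempting the failing FLR/bus resets.

clear_reset_method() {
    # $1: path to the device's reset_method sysfs attribute.
    # Writing an empty line disables all reset methods for the device.
    echo > "$1"
}

if [ "$2" = "pre-start" ]; then
    # 0000:03:00.0 is a placeholder; use your dGPU's address (and repeat
    # for its audio function if that also needs it).
    clear_reset_method /sys/bus/pci/devices/0000:03:00.0/reset_method
fi
```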
  6. [SOLVED] Issues with Intel ARC A770M GPU Passthrough on NUC12SNKi72 (vfio-pci not ready after FLR or bus reset)

    Has anyone had success with passing through the Intel ARC A770M (or similar) to a VM (Windows or Linux)? I think I have vfio binding mostly working, however when I attempt to start the Windows 11 VM, it complains about VFIO being not ready/giving up on resets (causing the Proxmox task to...
  7. pvestatd leak creates 3000+ processes, consumes all RAM & SWAP, and halts/reboots machine over a 4 hour cycle. Then repeats.

    From recollection, the issue was still present in the early v7.0 releases. I haven't had another shot at it since.
  8. pvestatd leak creates 3000+ processes, consumes all RAM & SWAP, and halts/reboots machine over a 4 hour cycle. Then repeats.

    Definitely NOT solved; however, I've been able to stabilise memory to a more typical usage pattern after stopping an LXC container running Grafana/InfluxDB (Ubuntu 18.04), which was monitoring the same PVE host. Something in pvestatd must be having trouble with that type of resource-usage profile.
  9. pvestatd leak creates 3000+ processes, consumes all RAM & SWAP, and halts/reboots machine over a 4 hour cycle. Then repeats.

    Following my update above, is it possible that somewhere between PVE 5.x and PVE 6.2 the default limit was increased for a clean install (i.e. ulimit -n)? Output of ulimit -a:

    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority...
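If fd exhaustion is indeed the trigger, one hedged workaround (a mitigation, not a root-cause fix) would be raising the service's limit with a systemd drop-in such as /etc/systemd/system/pvestatd.service.d/limits.conf, followed by systemctl daemon-reload and a restart of pvestatd. The 65535 value here is an assumption, not a tested figure:

```
[Service]
LimitNOFILE=65535
```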
  10. pvestatd leak creates 3000+ processes, consumes all RAM & SWAP, and halts/reboots machine over a 4 hour cycle. Then repeats.

    Looking into it a bit further, I am seeing the following as typical in syslog:

    Oct 04 16:00:47 pm01 pvestatd[28666]: ipcc_send_rec[3] failed: Too many open files
    Oct 04 16:00:47 pm01 pvestatd[18976]: ipcc_send_rec[1] failed: Too many open files
    Oct 04 16:00:47 pm01 pvestatd[19126]: ipcc_send_rec[1]...
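To confirm it really is pvestatd exhausting file descriptors, counting each worker's open fds via /proc is a quick check (a sketch assuming a standard Linux /proc layout and pgrep being available):

```shell
# Print the open-fd count for every pvestatd process, highest first.
for pid in $(pgrep pvestatd); do
    printf '%s %s\n' "$(ls "/proc/$pid/fd" 2>/dev/null | wc -l)" "$pid"
done | sort -rn
```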
  11. pvestatd leak creates 3000+ processes, consumes all RAM & SWAP, and halts/reboots machine over a 4 hour cycle. Then repeats.

    This has occurred since the later PVE 6.1 updates (I think) and definitely throughout PVE 6.2 (including 6.2-12), since about March 2020. Prior to this the system was rock-solid for about 1.5 years. Normal operation is ~10-12 GB usage (of 32 GB total). See attached picture for the cycle...
  12. How to NFS/SMB share already populated ZFS array the right way

    Have a look at the TurnKey Linux File Server LXC template (which can be loaded from the Proxmox UI). You could bind-mount your storage into it, then share it via Samba through its UI. https://www.turnkeylinux.org/fileserver
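For the bind mount, an entry along these lines in the container's config (e.g. /etc/pve/lxc/101.conf — the container ID and both paths here are placeholders) does the trick; pct set 101 -mp0 /tank/media,mp=/srv/media achieves the same from the command line:

```
mp0: /tank/media,mp=/srv/media
```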
