Recent content by bindi

  1. pvestatd.pm/rebalance_lxc_containers - NUMA awareness?

    I manually assigned my LXCs to certain CCDs; Gemini wrote this code. (I have a 4-CCD TR now, upgraded from when I originally posted this.)
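
    (The script referenced above isn't included in this excerpt. As a rough sketch of the manual approach, assuming one pins containers by writing a raw lxc.cgroup2.cpuset.cpus line into /etc/pve/lxc/<ctid>.conf, something like the following would do it; the CTID-to-CCD map and CPU ranges are invented, and containers pick the new cpuset up on their next start.)

        #!/usr/bin/env python3
        # Rough sketch, not the script from the post: pin each container to one
        # CCD by rewriting its cpuset line. The CTID -> CPU-range map below is
        # invented; take real ranges from /sys/devices/system/node/node*/cpulist.
        CONF_DIR = "/etc/pve/lxc"

        PINNING = {                  # hypothetical assignment: CTID -> one CCD
            101: "0-7,16-23",        # CCD0 / node 0
            102: "8-15,24-31",       # CCD1 / node 1
        }

        for ctid, cpus in PINNING.items():
            path = f"{CONF_DIR}/{ctid}.conf"
            with open(path) as f:
                kept = [l for l in f if not l.startswith("lxc.cgroup2.cpuset.cpus")]
            kept.append(f"lxc.cgroup2.cpuset.cpus: {cpus}\n")
            with open(path, "w") as f:
                f.writelines(kept)
            print(f"CT {ctid} pinned to {cpus}")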
  2. Epyc Milan VM to VM Communication Slow

    Threadripper 5955WX (16c/32t, NPS=1), 8x32GB 3200MT/s-CL22, VirtIO, amd_pstate, powersave/balance_performance (boosts up to 4.7GHz with PBO and a +200MHz offset). I am also getting 22 Gbit/s VM to VM. I wonder if NPS=2 would help, but I'm too lazy to reboot. LXC to LXC is 41 Gbit/s.
  3. Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

    Those drivers are too old for this kernel; grab the 580.x one.
  4. pvestatd.pm/rebalance_lxc_containers - NUMA awareness?

    I'm just min-maxing out of boredom :-) But I think it would be a nice feature to have, letting the user choose their preference, especially with Threadrippers/Epycs with 4 CCDs or even dual-CPU machines.
  5. pvestatd.pm/rebalance_lxc_containers - NUMA awareness?

    Couldn't fit it into one post, so here's the code if anyone is interested. The command I used:
        ps aux | grep -v grep | awk '{print $2}' | xargs -n 1 python3 pid.py
    What's interesting is that I'm seeing kvm across CCDs as well, even though I've enabled "NUMA aware" in the VM settings. Unless the...
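
    (pid.py itself is not included in this excerpt. A minimal sketch of what such a script might look like, my reconstruction rather than the original: for each thread of the PID passed in by xargs, read the CPU it last ran on from /proc and map it to a NUMA node via sysfs, then flag processes whose threads span more than one node.)

        #!/usr/bin/env python3
        # pid.py (reconstruction) -- usage: python3 pid.py <pid>
        # Flags processes whose threads last ran on more than one NUMA node.
        import glob, sys

        def cpu_to_node():
            """Build {cpu_id: node_id} from /sys/devices/system/node/node*/cpulist."""
            mapping = {}
            for path in glob.glob("/sys/devices/system/node/node*/cpulist"):
                node = int(path.rsplit("node", 1)[1].split("/")[0])
                with open(path) as f:
                    for part in f.read().strip().split(","):
                        lo, _, hi = part.partition("-")
                        for cpu in range(int(lo), int(hi or lo) + 1):
                            mapping[cpu] = node
            return mapping

        def last_cpus(pid):
            """CPU each thread of `pid` last ran on (the 'processor' field of stat)."""
            cpus = []
            for stat in glob.glob(f"/proc/{pid}/task/*/stat"):
                try:
                    with open(stat) as f:
                        data = f.read()
                except OSError:
                    continue                     # thread exited while we looked
                fields = data.rsplit(")", 1)[1].split()   # skip "pid (comm)"
                cpus.append(int(fields[36]))              # stat field 39 = processor
            return cpus

        if __name__ == "__main__":
            pid = sys.argv[1]
            node_of = cpu_to_node()
            nodes = sorted({node_of[c] for c in last_cpus(pid)})
            if len(nodes) > 1:
                print(f"PID {pid}: threads spread across NUMA nodes {nodes}")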
  6. pvestatd.pm/rebalance_lxc_containers - NUMA awareness?

    I asked ChatGPT for help with some Python; it seems like some processes are indeed running across two CCDs:
  7. pvestatd.pm/rebalance_lxc_containers - NUMA awareness?

    I'm also running unlimited cores, but I think it doesn't work properly, because some programs spawn as many workers (processes? threads?) as there are visible CPUs, meaning the load is spread across two CCDs. And even if they use fewer threads, once again rebalance_lxc_containers has pinned the container to all CPUs...
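
    (A small illustration of that point, as a sketch: programs that size their worker pool from the total CPU count will start one worker per CPU they can see, while programs that honour the affinity mask stay inside the pinned set.)

        #!/usr/bin/env python3
        # Inside a container pinned to all CPUs, both numbers are the full count,
        # so a cpu_count()-sized pool ends up with workers on both CCDs.
        import os

        print("CPUs in the system:        ", os.cpu_count())
        print("CPUs this process may use: ", len(os.sched_getaffinity(0)))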
  8. pvestatd.pm/rebalance_lxc_containers - NUMA awareness?

    Hey, is it possible to make the rebalance_lxc_containers function NUMA-aware? Currently it can assign LXCs across CCDs, which is not optimal. I have a Zen 3 processor with two CCDs (NPS2 enabled in the BIOS), so the OS is aware of it:
        node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
        node 1 cpus...
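
    (pvestatd/rebalance_lxc_containers is Perl, so this is only a sketch of the requested behaviour, not a patch: read the node-to-CPU layout the OS already exposes, the same "node 0 cpus: ..." map quoted above, and hand every container a cpuset taken from a single node, here simply rotating through the nodes. The container IDs and core counts are invented.)

        #!/usr/bin/env python3
        # Sketch of a NUMA-aware placement: each container gets CPUs from exactly
        # one node instead of being spread across CCDs.
        import glob
        from itertools import cycle

        def node_cpus():
            """Read {node_id: [cpu, ...]} from /sys/devices/system/node/node*/cpulist."""
            nodes = {}
            for path in sorted(glob.glob("/sys/devices/system/node/node*/cpulist")):
                node = int(path.rsplit("node", 1)[1].split("/")[0])
                cpus = []
                with open(path) as f:
                    for part in f.read().strip().split(","):
                        lo, _, hi = part.partition("-")
                        cpus.extend(range(int(lo), int(hi or lo) + 1))
                nodes[node] = cpus
            return nodes

        nodes = node_cpus()
        rr = cycle(sorted(nodes))                           # naive: rotate through nodes
        for ctid, cores in [(101, 4), (102, 8), (103, 4)]:  # hypothetical containers
            node = next(rr)
            cpus = nodes[node][:cores]                      # first N CPUs of that node
            print(f"CT {ctid}: node {node}, cpuset {','.join(map(str, cpus))}")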
  9. GPU passthrough and transcode in Jellyfin

    Maybe it's because even 1 transcode is at 58% GPU utilization (according to your first picture). I tried, and mine (GTX 1660) is at 24% with an HEVC->HEVC 4K 40Mbps -> 1080p 10Mbps transcode.
  10. GPU passthrough and transcode in Jellyfin

    Sure, my GTX 1660 has done 5 concurrent sessions according to my Tautulli history.
  11. GPU passthrough and transcode in Jellyfin

    The NVDEC table doesn't say anything about the number of sessions, just the # of "chips". Not the same thing.
  12. GPU passthrough and transcode in Jellyfin

    You're wrong. The P1000 is limited to 8 concurrent sessions. They've started to *lift* these session limits; it used to be 2, then 3, then 5, and now 8 on consumer cards. Quadros are generally unrestricted...
  13. Setting up nvidia gpu for stable diffusion in a LXC container?

    Yes, I'm using it with my Plex container. Can you run nvidia-smi in the container? Here's my ctid.conf:
        lxc.hook.pre-start: sh -c '[ ! -f /dev/nvidia0 ] && /usr/bin/nvidia-modprobe -c0 -u'
        lxc.environment: NVIDIA_VISIBLE_DEVICES=all
        lxc.environment...
  14. Setting up nvidia gpu for stable diffusion in a LXC container?

    I'd use nvidia-container-toolkit so only the host requires drivers; the guest doesn't need them at all. Much easier to manage.
  15. Use vGPU on LXC

    You do need drivers on the host for vGPU. I'm using nvidia-container-toolkit; maybe it will work alongside vGPU-enabled drivers? https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html <CTID>.conf additions:
        lxc.hook.pre-start: sh -c '[ ! -f...