Search results

  1. B

    Epyc Milan VM to VM Communication Slow

    Threadripper 5955WX (16c/32t) (NPS=1), 8x32GB 3200MT/s-CL22, VirtIO, amd_pstate, powersave/balance_performance (boosts up to 4.7GHz with PBO and a +200MHz offset). I am also getting 22 Gbit/s VM to VM. I wonder if NPS=2 would help, but I'm too lazy to reboot. LXC to LXC is 41 Gbit/s.
  2. B

    Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

    Those drivers are too old for this kernel; grab the 580.x one.
  3. B

    pvestatd.pm/rebalance_lxc_containers - NUMA awareness?

    I'm just minmaxing out of boredom :-) But I think it would be a nice feature to have, letting the user choose their preference. Especially with Threadrippers/Epycs with 4 CCDs, or even dual-CPU machines.
  4. B

    pvestatd.pm/rebalance_lxc_containers - NUMA awareness?

    Couldn't fit it into one post; here's the code if anyone is interested. The command I used:
    ps aux | grep -v grep | awk '{print $2}' | xargs -n 1 python3 pid.py
    What's interesting is that I'm seeing kvm across CCDs as well, even though I've enabled "NUMA aware" in the VM settings. Unless the...
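    (The pid.py referenced above isn't shown in this excerpt; the following is only a rough sketch, assuming the script takes a PID and reports which NUMA node(s) its threads last ran on, using /proc/<pid>/task/*/stat and the cpu-to-node links in /sys. It is not the original script from the post.)

    #!/usr/bin/env python3
    # Hypothetical reconstruction of a "pid.py" - not the original script.
    # For a given PID, report which NUMA node(s)/CCD(s) its threads last ran on.
    import glob, os, sys

    def cpu_to_node(cpu):
        # Each CPU directory exposes a nodeN symlink, e.g. .../cpu17/node1
        for path in glob.glob(f"/sys/devices/system/cpu/cpu{cpu}/node*"):
            return int(os.path.basename(path)[len("node"):])
        return -1

    def main(pid):
        nodes = set()
        for stat in glob.glob(f"/proc/{pid}/task/*/stat"):
            with open(stat) as f:
                fields = f.read().rsplit(")", 1)[1].split()
            cpu = int(fields[36])          # field 39 in proc(5): last CPU run on
            nodes.add(cpu_to_node(cpu))
        name = open(f"/proc/{pid}/comm").read().strip()
        flag = "SPANS NODES" if len(nodes) > 1 else ""
        print(f"{pid:>7} {name:<20} nodes={sorted(nodes)} {flag}")

    if __name__ == "__main__":
        try:
            main(int(sys.argv[1]))
        except (FileNotFoundError, ProcessLookupError, ValueError):
            pass  # PID vanished, or a non-numeric token from the ps header line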
  5. B

    pvestatd.pm/rebalance_lxc_containers - NUMA awareness?

    I asked ChatGPT for help with some Python; it seems like some processes are indeed spread across two CCDs:
  6. B

    pvestatd.pm/rebalance_lxc_containers - NUMA awareness?

    I'm also running with unlimited cores, but I think it doesn't work properly, because some programs can request Nthreads' worth of CPUs (processes? threads?), meaning the work is spread across two CCDs. Or even if they're using fewer threads, once again rebalance_lxc_containers has pinned it to all CPUs...
  7. B

    pvestatd.pm/rebalance_lxc_containers - NUMA awareness?

    Hey, is it possible to make the rebalance_lxc_containers function NUMA-aware? Currently it can assign LXCs across CCDs, which is not optimal. I have a Zen 3 processor with two CCDs (NPS2 enabled in the BIOS), so the OS is aware of it:
    node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
    node 1 cpus...
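    (A minimal sketch of the idea in Python - the actual rebalance_lxc_containers is Perl code in pvestatd.pm, and the helper names below are mine: read the per-node CPU lists the kernel exposes and build a cpuset that keeps a container on a single node/CCD.)

    #!/usr/bin/env python3
    # Illustrative only: discover the NUMA node -> cpulist mapping and emit an
    # lxc cpuset line that would pin a container to one node/CCD.
    import glob, os

    def node_cpulists():
        """Map node id -> cpulist string, e.g. {0: '0-7,16-23', 1: '8-15,24-31'}."""
        nodes = {}
        for path in glob.glob("/sys/devices/system/node/node*/cpulist"):
            node_id = int(os.path.basename(os.path.dirname(path))[len("node"):])
            with open(path) as f:
                nodes[node_id] = f.read().strip()
        return nodes

    def cpuset_line_for(node_id):
        """Config line (lxc.cgroup2) that would confine a container to one node."""
        return f"lxc.cgroup2.cpuset.cpus: {node_cpulists()[node_id]}"

    if __name__ == "__main__":
        for node, cpus in sorted(node_cpulists().items()):
            print(f"node {node} cpus: {cpus}")
        print(cpuset_line_for(0))  # e.g. to append to /etc/pve/lxc/<CTID>.conf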
  8. B

    GPU passthrough and transcode in Jellyfin

    Maybe it's because even one transcode is at 58% GPU utilization (according to your first picture). I tried, and mine (GTX 1660) is at 24% with a HEVC->HEVC, 4K 40Mbps -> 1080p 10Mbps transcode.
  9. B

    GPU passthrough and transcode in Jellyfin

    Sure, my GTX 1660 has done 5 concurrent sessions according to my Tautulli history
  10. B

    GPU passthrough and transcode in Jellyfin

    The NVDEC table doesn't say anything about the number of sessions, just the # of "chips". Not the same thing.
  11. B

    GPU passthrough and transcode in Jellyfin

    You're wrong. The P1000 is limited to 8 concurrent sessions. They've started to *lift* these session limits; it used to be 2, then 3, then 5, and now 8 on consumer cards. Quadros are generally unrestricted...
  12. B

    Setting up nvidia gpu for stable diffusion in a LXC container ?

    Yes, I'm using it with my Plex container. Can you run nvidia-smi in the container? Here's my ctid.conf:
    lxc.hook.pre-start: sh -c '[ ! -f /dev/nvidia0 ] && /usr/bin/nvidia-modprobe -c0 -u'
    lxc.environment: NVIDIA_VISIBLE_DEVICES=all
    lxc.environment...
  13. B

    Setting up nvidia gpu for stable diffusion in a LXC container ?

    I'd use nvidia-container-toolkit so only the host requires drivers; the guest doesn't need them at all. Much easier to manage.
  14. B

    Use vGPU on LXC

    You do need drivers on the host for vGPU. I'm using nvidia-container-toolkit; maybe it will work alongside vGPU-enabled drivers? https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
    <CTID>.conf additions:
    lxc.hook.pre-start: sh -c '[ ! -f...
  15. B

    Installing MacOS Ventura With RTX 3070TI Stuck On Apple Logo

    Ampere isn't supported by macOS at all. https://dortania.github.io/GPU-Buyers-Guide/modern-gpus/nvidia-gpu.html#ampere-series-rtx-30xx
  16. B

    proxmox requires email to set up. how to bypass it

    You can type whatever email you want during the installation; it's the SENDER email (used when PVE sends you a notification of something). You have to configure the email settings further if you want the notifications to actually work.
  17. B

    Mount propagation in LXC containers

    lxc.mount.entry: /tank/dataset mnt/dataset none rbind,create=dir 0 0
    for recursive mounting. By the way, datasets are different file systems and hard linking does not work across them - if that's applicable to your setup.
  18. B

    Cloud-init and RANDOM passwords

    Hey, when using a user:RANDOM password in a custom.yaml, is it possible to get the passwords that were set into the task status window in the web GUI? The config works, but I have to be fast and attach to the VM on first boot to catch the passwords. I'm aware that I should use public keys, but...
  19. B

    Server and Proxmox show 255 cores

    grep CONFIG_NR_CPUS /boot/config-$(uname -r)
    What does this say on both of the hosts? I remember that Ubuntu's kernel used to be limited to 255 cores. Or maybe it was 256. Anyway, are all of the CPUs online? This should print the CPUs which are offline:
    grep -L 1...
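    (Aside: the command at the end of that excerpt is cut off. As my own sketch - not the original command - the same question can be answered from sysfs by comparing the CPUs the kernel considers present with those actually online.)

    #!/usr/bin/env python3
    # Sketch: show present vs. online vs. offline CPU ranges from sysfs.
    def read(path):
        with open(path) as f:
            return f.read().strip()

    present = read("/sys/devices/system/cpu/present")   # e.g. "0-255"
    online  = read("/sys/devices/system/cpu/online")    # e.g. "0-254"
    offline = read("/sys/devices/system/cpu/offline")   # e.g. "255", or "" if none

    print(f"present: {present}")
    print(f"online:  {online}")
    print(f"offline: {offline or 'none'}")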