Search results

  1.

    Network packet loss in high traffic VMs

    Unfortunately not. Still using a 5.x kernel
  2.

    Network packet loss in high traffic VMs

    I solved the problem with multiqueue on kernel 5.15, but since kernel 6.x the problem is back. I can't figure out where, why, or how. The only workaround is to boot with the old kernel again. Really frustrating.
  3.

    How to change virtio-pci tx/rx queue size

    Did you try multiqueue with the virtio network adapter?
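
    For reference, enabling multiqueue on a VirtIO NIC is a two-step change; a rough sketch, assuming VM ID 100, guest interface eth0 and four queues (all placeholders):

        # On the Proxmox host: add a queue count to the NIC definition
        # (re-specifying net0 without macaddr= generates a new MAC address)
        qm set 100 --net0 virtio,bridge=vmbr0,queues=4

        # Inside the guest: let the driver actually use the extra queues
        ethtool -L eth0 combined 4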
  4.

    Network packet loss in high traffic VMs

    As you can't change the ring buffer size of virtio interfaces in the VM, I allocated an e1000 interface to it, and there I could change the size with ethtool from 256 to 1024. That's why I tried multiqueue afterwards.
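
    For anyone following along, checking and raising the ring buffers inside the guest would look roughly like this (eth0 is a placeholder; the NIC driver has to expose ring parameters, which e1000 does and the virtio interface here apparently does not):

        # Show current and maximum ring sizes
        ethtool -g eth0

        # Raise the RX and TX rings to 1024 descriptors
        ethtool -G eth0 rx 1024 tx 1024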
  5.

    Network packet loss in high traffic VMs

    Ok, nice! I tried with an e1000 adapter and changed the rx and tx ring buffers in the VM to 1024, which helped as a first step. Then I figured out that multiqueue on the virtio interface also fixes my issue. Maybe you could verify this?
  6.

    Continuously increasing memory usage until oom-killer kills processes

    #!/bin/bash
    state=`/usr/sbin/pct status 117 | awk -F " " '{print $2}'`
    if [[ $state == running ]]
    then
        if grep "max" /sys/fs/cgroup/lxc/117/ns/memory.high
        then
            max_memory=`cat /etc/pve/lxc/117.conf | grep memory: | awk -F":" '{print $2}'`...
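
    A completed version of that workaround could look like the sketch below; the 90 percent soft limit and the two memory.high paths are taken from later posts in this thread, and CTID 117 is just the container from the snippet above:

        #!/bin/bash
        # Sketch: cap a running container's memory.high at 90 % of its configured memory
        CTID=117
        state=$(/usr/sbin/pct status $CTID | awk '{print $2}')
        if [[ $state == running ]]; then
            # Only act while the soft limit is still at the cgroup default of "max"
            if grep -q max /sys/fs/cgroup/lxc/$CTID/ns/memory.high; then
                # Configured memory in MiB from the container config (e.g. "memory: 2048")
                max_memory=$(grep memory: /etc/pve/lxc/$CTID.conf | awk -F: '{print $2}')
                # 90 % of that, converted to bytes
                high=$(( max_memory * 1024 * 1024 * 90 / 100 ))
                echo $high > /sys/fs/cgroup/lxc/$CTID/memory.high
                echo $high > /sys/fs/cgroup/lxc/$CTID/ns/memory.high
            fi
        fi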
  7.

    Network packet loss in high traffic VMs

    So you finally increased the ring buffer size of the physical network interface and the bridge interface to tx 4096 / rx 4096 and the issue is gone? I'm also having problems with losing UDP/RTP packets since the upgrade to version 7. Thanks in advance!
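
    On the Proxmox host itself, one common way to keep such a ring-buffer change across reboots (an assumption about the setup, not something confirmed in these snippets; eno1 is a placeholder for the physical NIC) is a post-up hook in /etc/network/interfaces:

        iface eno1 inet manual
            # Re-applied whenever the interface comes up; "ethtool -g eno1" shows the hardware maximum
            post-up /usr/sbin/ethtool -G eno1 rx 4096 tx 4096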
  8.

    Proxmox 7.1-7 or latest update 6.4-11 causes problems with tvheadend

    Hi, which TVH version are you on? Thanks! Best regards
  9.

    Proxmox 7.1-7 or latest update 6.4-11 causes problems with tvheadend

    I'm in a nested VM and run the container in there, which makes the whole thing even more complex... :/ I've now tried changing the network interface to E1000 - let's see, I'm slowly running out of hope. DON'T CHANGE A RUNNING SYSTEM ;)
  10.

    Proxmox 7.1-7 or latest update 6.4-11 causes problems with tvheadend

    That's only the workaround for the RAM overflow. Hmm... I'm now on a Debian 11 image, but after about an hour, with several parallel streams on one transponder, I only get continuity errors and the streams freeze. I don't understand it.
  11.

    Proxmox 7.1-7 or latest update 6.4-11 causes problems with tvheadend

    Hmm okay... Which LXC container image are you using?
  12.

    Proxmox 7.1-7 or latest update 6.4-11 causes problems with tvheadend

    My focus was also on the RAM overflow. But after I fixed that, continuity errors came back after about an hour of streaming on the same transponder, and from that point they just keep rising and rising, which makes streaming virtually impossible...
  13.

    Proxmox 7.1-7 or latest update 6.4-11 causes problems with tvheadend

    I've finally stumbled upon this thread and now at last know where the problem lies. I've swapped the Sat2Ip receiver, installed new VMs, tried different containers: all without success, and with Proxmox 6.4 no problems whatsoever. Now the question is how we...
  14.

    Continuously increasing memory usage until oom-killer kills processes

    Yeah, this fixes the memory issue - thank you for this. I made a script as a workaround for now.
  15.

    Continuously increasing memory usage until oom-killer kills processes

    Thanks, can you tell me where I can make this change persistent?
  16.

    Continuously increasing memory usage until oom-killer kills processes

    Now which of the output values is the current hard limit for memory usage? Edit: I guess you meant cat /sys/fs/cgroup/lxc/CTID/memory.high? I used this, calculated 90 percent of the memory, and then wrote it to /sys/fs/cgroup/lxc/CTID/memory.high and /sys/fs/cgroup/lxc/CTID/ns/memory.high -...
  17.

    Continuously increasing memory usage until oom-killer kills processes

    Okay... Output:

        cat /sys/fs/cgroup/lxc/102/memory.stat
        anon 81432576
        file 424726528
        kernel_stack 1769472
        pagetables 2281472
        percpu 1521472
        sock 20480
        shmem 122880
        file_mapped 62537728
        file_dirty 7905280
        file_writeback 0
        swapcached 0
        anon_thp 0
        file_thp 0
        shmem_thp 0
        inactive_anon 81444864...
  18.

    Continuously increasing memory usage until oom-killer kills processes

    Ok, to reproduce the issue a bit faster, I've changed the memory of the container to 512 MB.

        root@ct-tvh-02:~# cat /proc/meminfo
        MemTotal:       524288 kB
        MemFree:           208 kB
        MemAvailable:   406652 kB
        Buffers:             0 kB
        Cached:         406444 kB
        SwapCached:          0...
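
    Lowering a container's memory from the host to reproduce this faster is a one-liner (117 is just the example CTID from the earlier posts):

        # Writes "memory: 512" to /etc/pve/lxc/117.conf; depending on the PVE version the new
        # limit is applied to the running container's cgroup right away or after the next restart
        pct set 117 --memory 512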
