Continuously increasing memory usage until oom-killer kills processes

With tvheadend or with another application?
tvheadend! Would you mind sharing your script?

I'm running this (via an @reboot cronjob based on the 90% values calculated from instructions in fabian's post)

Code:
#!/bin/bash
echo 15461882265 > /sys/fs/cgroup/lxc/102/memory.high
echo 15461882265 > /sys/fs/cgroup/lxc/102/ns/memory.high
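
For reference, an @reboot entry in root's crontab to run such a script could look like this (the script path is just a placeholder for wherever you keep it):

Code:
# root crontab entry (crontab -e as root); the script path is hypothetical
@reboot /usr/local/bin/set-memory-high.sh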

@mm553, does this fix the tvheadend continuity issues you're seeing?
 
I'm running this (via an @reboot cronjob based on the 90% values calculated from instructions in fabian's post)
It should be enough to use this in the container configuration in `/etc/pve/lxc/$vmid.conf` instead:
Code:
lxc.cgroup2.memory.high: 15461882265
Note that this can only be set in the container config manually, not via the API or `pct` commands.
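
As a rough sketch, the extra line just sits alongside the normal container options in that file; the other values below are made-up examples, not taken from an actual config:

Code:
# /etc/pve/lxc/102.conf (example values)
arch: amd64
cores: 4
hostname: ct102
memory: 16384
swap: 512
lxc.cgroup2.memory.high: 15461882265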
 
tvheadend! Would you mind sharing your script?

I'm running this (via an @reboot cronjob based on the 90% values calculated from instructions in fabian's post)

Code:
#!/bin/bash
echo 15461882265 > /sys/fs/cgroup/lxc/102/memory.high
echo 15461882265 > /sys/fs/cgroup/lxc/102/ns/memory.high

@mm553, does this fix the tvheadend continuity issues you're seeing?

Code:
#!/bin/bash
# check whether CT 117 is running before touching its cgroup files
state=$(/usr/sbin/pct status 117 | awk '{print $2}')
if [[ $state == running ]]
then
        # only act if memory.high is still at its default value "max"
        if grep -q "max" /sys/fs/cgroup/lxc/117/ns/memory.high
        then
                # "memory:" in the CT config is in MiB; convert to bytes and take 90%
                max_memory=$(grep memory: /etc/pve/lxc/117.conf | awk -F":" '{print $2}')
                high_memory=$((max_memory*1024*1024*90/100))
                echo $high_memory > /sys/fs/cgroup/lxc/117/ns/memory.high
                echo $high_memory > /sys/fs/cgroup/lxc/117/memory.high
        fi
fi

I made a cronjob with this script that runs every minute. It checks whether the container is running; if so, it checks whether "memory.high" is still set to max, and only then sets memory.high to 90% of the allocated memory. That way I can change the memory in the GUI without any impact on the container. You could also put the CT ID in a variable, as sketched below.
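
A minimal sketch of that variant, with the CT ID passed as the first argument (same assumptions as the script above):

Code:
#!/bin/bash
# hypothetical variant: pass the CT ID as an argument, e.g. ./set-memory-high.sh 117
ctid="$1"
state=$(/usr/sbin/pct status "$ctid" | awk '{print $2}')
if [[ $state == running ]] && grep -q "max" "/sys/fs/cgroup/lxc/$ctid/ns/memory.high"
then
        # "memory:" in the CT config is in MiB; convert to bytes and take 90%
        max_memory=$(grep memory: "/etc/pve/lxc/$ctid.conf" | awk -F":" '{print $2}')
        high_memory=$((max_memory*1024*1024*90/100))
        echo "$high_memory" > "/sys/fs/cgroup/lxc/$ctid/ns/memory.high"
        echo "$high_memory" > "/sys/fs/cgroup/lxc/$ctid/memory.high"
fi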

It should be enough to use this in the container configuration in `/etc/pve/lxc/$vmid.conf` instead:
Code:
lxc.cgroup2.memory.high: 15461882265
Note that this can only be set in the container config manually, not via the API or `pct` commands.

Did you try this, @splendid?
 
So I limited cgroup2.memory.high to 60% in order to make the issue happen faster, but as you can see, once it hit the high memory cap it started dumping things to swap, and once swap filled up it spiked the CPU usage and quit responding on the network. Any other suggestions on how to troubleshoot this? This CT was fine prior to the upgrade to Proxmox 7.3, and no other CTs appear to be having issues, so I am guessing it's something to do with the packages this CT uses.

https://github.com/nebulous/infinitude/wiki/Installing-Infinitude-on-Raspberry-PI-(raspbian)

[Attached screenshot: container memory, swap, and CPU usage graph]
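
One way to watch whether the container actually keeps hitting the high limit is to poll the cgroup v2 counters from the host (CTID below is a placeholder for the container ID):

Code:
# current memory usage of the container's cgroup
cat /sys/fs/cgroup/lxc/CTID/ns/memory.current
# the "high" counter increases every time the cgroup is throttled at memory.high
cat /sys/fs/cgroup/lxc/CTID/ns/memory.events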
 
So I limited cgroup2.memory.high to 60% in order to make the issue happen faster, but as you can see, once it hit the high memory cap it started dumping things to swap, and once swap filled up it spiked the CPU usage and quit responding on the network. Any other suggestions on how to troubleshoot this? This CT was fine prior to the upgrade to Proxmox 7.3, and no other CTs appear to be having issues, so I am guessing it's something to do with the packages this CT uses.

https://github.com/nebulous/infinitude/wiki/Installing-Infinitude-on-Raspberry-PI-(raspbian)

I got too annoyed with Proxmox 7.3 and reinstalled the whole system from a Proxmox 7.2-3 ISO; for me all the issues are gone now.
On 7.3 I also had issues where some USB devices were no longer recognized inside a container or a VM after about half a day.
For me it's clearly not an issue with the kernel: I tried a whole bunch of kernels on 7.3 and the issue never went away.
Seems like an issue with some package.

At the moment the following packages are due for upgrade. Any of these packages could be the troublemaker:
proxmox-mail-forward proxmox-offline-mirror-docs
proxmox-offline-mirror-helper pve-kernel-5.15.83-1-pve
The following packages will be upgraded:
base-files bash bind9-dnsutils bind9-host bind9-libs cifs-utils corosync
dbus dirmngr distro-info-data dpkg e2fsprogs gnupg gnupg-l10n gnupg-utils
gnutls-bin gpg gpg-agent gpg-wks-client gpg-wks-server gpgconf gpgsm gpgv
grub-common grub-efi-amd64-bin grub-pc grub-pc-bin grub2-common
isc-dhcp-client isc-dhcp-common krb5-locales libavahi-client3
libavahi-common-data libavahi-common3 libc-bin libc-l10n libcfg7 libcmap4
libcom-err2 libcorosync-common4 libcpg4 libcups2 libcurl3-gnutls libdbus-1-3
libexpat1 libext2fs2 libfreetype6 libfribidi0 libgssapi-krb5-2
libhttp-daemon-perl libk5crypto3 libknet1 libkrb5-3 libkrb5support0 libksba8
libldap-2.4-2 libldb2 libnftables1 libnozzle1 libnss-systemd libnvpair3linux
libpam-systemd libpcre2-8-0 libpixman-1-0 libproxmox-acme-perl
libproxmox-acme-plugins libproxmox-backup-qemu0 libproxmox-rs-perl
libpve-access-control libpve-cluster-api-perl libpve-cluster-perl
libpve-common-perl libpve-guest-common-perl libpve-http-server-perl
libpve-rs-perl libpve-storage-perl libquorum5 librados2-perl libsmbclient
libss2 libssl1.1 libsystemd0 libtasn1-6 libtiff5 libtpms0 libudev1
libuutil3linux libvirglrenderer1 libvotequorum8 libwbclient0 libxml2
libxslt1.1 libzfs4linux libzpool5linux locales logrotate logsave lxc-pve
nano nftables openssh-client openssh-server openssh-sftp-server openssl
procmail proxmox-archive-keyring proxmox-backup-client
proxmox-backup-file-restore proxmox-ve proxmox-widget-toolkit pve-cluster
pve-container pve-docs pve-edk2-firmware pve-firewall pve-firmware
pve-ha-manager pve-i18n pve-kernel-5.15 pve-kernel-helper pve-lxc-syscalld
pve-manager pve-qemu-kvm python3-ldb qemu-server rsyslog samba-common
samba-libs smbclient spl ssh swtpm swtpm-libs swtpm-tools systemd
systemd-sysv tcpdump tzdata udev xsltproc zfs-initramfs zfs-zed
zfsutils-linux
 
So I limited cgroup2.memory.high to 60% in order to make the issue happen faster, but as you can see, once it hit the high memory cap it started dumping things to swap, and once swap filled up it spiked the CPU usage and quit responding on the network. Any other suggestions on how to troubleshoot this? This CT was fine prior to the upgrade to Proxmox 7.3, and no other CTs appear to be having issues, so I am guessing it's something to do with the packages this CT uses.

https://github.com/nebulous/infinitude/wiki/Installing-Infinitude-on-Raspberry-PI-(raspbian)


i’m having an identical issue, also with a media server application. can i use one of the tickets from my 2x community subs to get us all to the bottom of this?

Ticket 6407321 created.
 
@warrentc3 - did you try the "high" memory limit workaround from this thread? does it work? if so, can you provide more details here (which software, CT config, host setup, ..)
 
@warrentc3 - did you try the "high" memory limit workaround from this thread? does it work? if so, can you provide more details here (which software, CT config, host setup, ..)

i did try setting the high memory limit. it just caused the container to start dumping to swap, seize up, and fall over that much sooner. do you have access to the support case? i also set up a cron job every 15 minutes to free the vm cache.

honestly, i have no doubt this is likely isolated to an issue with Mono… but there’s just so little your average enthusiast can do to help developers triage these things.

https://emby.media/community/index.php?/topic/113691-apparent-memory-leak-on-48015/page/2/
 
that forum link requires registering... it could be that the Mono runtime does something weird memory-wise.

you or a developer of the software in question could probably try to validate that even without PVE - the same memory limits are also settable directly using cgroups, or by executing software using systemd with resource limits (MemoryHigh and MemoryMax directives).
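
as a hedged sketch of that second option, a one-off test run with systemd-run, or a drop-in for an existing unit, could look like this (unit name and limit values are made up):

Code:
# one-off test run with cgroup v2 memory limits applied by systemd
systemd-run --scope -p MemoryHigh=3G -p MemoryMax=4G /usr/bin/some-media-server

# or a drop-in for an existing service, e.g.
# /etc/systemd/system/some-media-server.service.d/memory.conf
[Service]
MemoryHigh=3G
MemoryMax=4G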

note that in pve-container >=4.4-4, the 'high' limit is automatically set slightly below the 'max' limit (with a gap of at most 128MB), so if you are not on that version already, please give it a try as well! another workaround would of course be to run the software in question in a proper VM.
 
So, running "echo 3 > /proc/sys/vm/drop_caches" in a shell typically sorts out the runaway memory...
however, I'm not having a lot of luck achieving the same result when running it from cron... maybe I've missed something?
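
One common pitfall is that the redirection needs root privileges; a system cron entry (with the user field) along these lines should work, though the schedule below is just an example:

Code:
# /etc/cron.d/drop-caches -- runs as root every 15 minutes
*/15 * * * * root /bin/sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'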
 
......

Code:
echo VALUE > /sys/fs/cgroup/lxc/CTID/memory.high
echo VALUE > /sys/fs/cgroup/lxc/CTID/ns/memory.high

....
Happy to report back that this works for me, too. I use tvheadend in an LXC and encountered exactly the same issue. Already after the first of these two commands the issue is gone, and it has been stable for half an hour now.

top constantly displaying 10% free, very nice!


Code:
MiB Mem :    256.0 total,     25.9 free,    107.8 used,    122.3 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.    148.2 avail Mem
 
FWIW, since pve-container 4.4-4 memory.high will be set to 99.6% of the memory.max hard limit automatically!
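
You can check what was applied for a given container by comparing the two values from the host (container ID 102 used as an example here):

Code:
# memory.high should sit slightly below memory.max with pve-container 4.4-4 or later
cat /sys/fs/cgroup/lxc/102/memory.max
cat /sys/fs/cgroup/lxc/102/memory.high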
 
