Proxmox 6.1 OOM killed a VM

logging inside a container eats up RAM unless you've enabled persistent journaling.

Could you please elaborate: why does it eat up RAM? It's logging to disk.
Even if it does, it's not reflected in the RAM stats: only a few percent are ever used, and all the rest is free.

Even so, before moving to the Proxmox CT, this config had been running for years under OpenVZ and for years under Ubuntu 16.04 LXC/LXD, all flawlessly.
 
You can see the RAM usage of the host, where the outage (OOM kill) happened, and then the quota increase:
[Attachment: memuse.png]
 
Would it be possible to use a virtual machine instead of the container?
Certainly, but that also sounds like a workaround. LXC is always LXC, whether on Ubuntu or on Proxmox. Also, under LXC/LXD I mostly run my CTs without memory limits, but with Proxmox I just do not have that option.
 
Could you please elaborate: why does it eat up RAM? It's logging to disk.
For example journald can log to RAM.
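Without persistent journaling, journald keeps its journal under /run/log/journal, which is a tmpfs and therefore counts against the container's RAM. A minimal sketch of checking the journal size and switching to on-disk storage, assuming a standard systemd setup inside the CT:
Code:
# show how much space the journal currently uses
journalctl --disk-usage
# create the on-disk journal directory; with the default Storage=auto,
# journald then logs persistently instead of to the tmpfs
mkdir -p /var/log/journal
# optionally make it explicit in /etc/systemd/journald.conf:
#   Storage=persistent
systemctl restart systemd-journald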

Even if it does, it's not reflected in the RAM stats: only a few percent are ever used, and all the rest is free.
This depends more on the host system.
Processes in an LXC container are visible on the host system.
You can, for example, run cat in a container, filter htop's output on the host for that word, and then see the process.
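Concretely (using sleep rather than cat here just so the process is easy to grep for):
Code:
# inside the container: start an easily recognizable process
sleep 31337 &
# on the Proxmox host: the container's process shows up in the host's process list
ps aux | grep 'sleep 31337'
# or run htop on the host and filter (F4) for "sleep"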

and then the quota increase
That is really strange. Just to be sure: you haven't touched any configuration file between the OOM kill at 16:41 and the quota increase at around 00:30?
Did HA take so long to bring your LXC container up again?
 
That is really strange. Just to be sure: you haven't touched any configuration file between the OOM kill at 16:41 and the quota increase at around 00:30?
Did HA take so long to bring your LXC container up again?

This particular container is not under HA. I found it dead, increased the RAM quota, and started it again.
 
And if you run free -h on the host, does everything look ok?
 
I have noticed the same problem since somewhere around 5.x or earlier.
OOM is killing random processes inside LXC containers after some runtime (currently at ~90 days).

The host has plenty of RAM (256GB).
It might be an issue with the ZFS ARC cache, as it defaults to using up to 50% of the host's RAM for caching.
Currently Proxmox shows RAM usage at 67% (168GB out of 256GB) and the OOM killer is already happily killing processes as small as 30MB.
arc_summary shows the current usage at 125GB (100% of the target).
I am not sure whether 'free -m' includes the ARC cache, but I guess it does not.
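One way to check that on a ZFS host is to read the ARC statistics directly from the kernel (standard ZFS-on-Linux path) and compare them with what free reports:
Code:
# current ARC size and configured maximum, in bytes
awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats
# host memory as seen by free
free -m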

edit:
I reduced the ARC size to 24GB as suggested in this howto (a sketch of the commands follows below):
https://fibrevillage.com/storage/169-zfs-arc-on-linux-how-to-set-and-monitor-on-linux

Total RAM usage went down to 67GB, so 'free -m' does include the ARC cache.
It also means that the server never fully utilized the whole RAM while it was killing processes.
(Now I will have to investigate RAM fragmentation.)
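For reference, a sketch of how the ARC is usually capped on a ZFS-on-Linux host (24 GiB here, matching the value above); adjust the number to your setup:
Code:
# runtime change, takes effect immediately but is lost on reboot
echo $((24 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
# persistent change: add to /etc/modprobe.d/zfs.conf
#   options zfs zfs_arc_max=25769803776
# then rebuild the initramfs so the limit is applied at boot
update-initramfs -u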
 
I couldn't take a detailed look at it yet, but maybe this thread and bug are relevant?
Could you maybe try to adapt
Code:
RuntimeMaxFileSize=5M
RuntimeMaxFiles=3
in /etc/systemd/journald.conf, too?
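If you try that, note the change only takes effect after restarting journald; roughly:
Code:
# apply the new runtime journal limits
systemctl restart systemd-journald
# the volatile journal under /run should then stay around 3 x 5M at most
du -sh /run/log/journal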
 
