Oom-kill process inside lxc : Memory cgroup out of memory

Dark26

Nov 27, 2017
Hello,

This is the message I get on two nodes, running iRedMail and Proxmox Mail Gateway respectively:

Code:
[2064598.795126] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=ns,mems_allowed=0,oom_memcg=/lxc/199,task_memcg=/lxc/199/ns/system.slice/clamav-daemon.service,task=clamd,pid=3647031,uid=100107
[2064598.796609] Memory cgroup out of memory: Killed process 3647031 (clamd) total-vm:1342176kB, anon-rss:1099032kB, file-rss:0kB, shmem-rss:0kB
[2064598.916492] oom_reaper: reaped process 3647031 (clamd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

In both cases the OOM killer kills the antivirus (clamd), which is the process consuming the most RAM.
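The kill message itself shows how much memory clamd was holding. A quick sketch to pull that number out of the log line above and convert it to MiB (the log line is copied verbatim from the output above):

```shell
# The kernel OOM line from the log above, verbatim.
line='Memory cgroup out of memory: Killed process 3647031 (clamd) total-vm:1342176kB, anon-rss:1099032kB, file-rss:0kB, shmem-rss:0kB'

# Extract the anon-rss value (in kB) and convert to MiB.
anon_kb=$(printf '%s\n' "$line" | grep -o 'anon-rss:[0-9]*' | cut -d: -f2)
echo "clamd anon RSS at kill time: $((anon_kb / 1024)) MiB"
# → clamd anon RSS at kill time: 1073 MiB
```

So clamd alone held roughly 1 GiB resident; in a 2 GB container the rest of the mail stack can easily push the cgroup over its limit.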

I tried adding memory and swap to the container, but got the same result.

Any idea what to change to make it work correctly?

Before this, iRedMail ran in a VM (not a container), and I had no problem, even with less memory.


Thanks Dark26
 
How much memory does the container have?
ClamAV needs around 1.3 GB
 
This is for the Proxmox Mail Gateway container (2 GB):

Code:
top - 21:31:06 up 3 days, 17:19,  1 user,  load average: 3.74, 3.80, 3.87
Tasks:  48 total,   1 running,  47 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   2048.0 total,    269.0 free,   1702.0 used,     77.0 buff/cache
MiB Swap:   2304.0 total,   2304.0 free,      0.0 used.    346.0 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                     
  23446 clamav    20   0 1342820   1.1g   1688 S   0.0  52.5   3:51.98 clamd


The same for the iRedMail LXC:


Code:
top - 21:32:34 up 23:59,  1 user,  load average: 2.61, 2.39, 2.23
Tasks:  72 total,   1 running,  71 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   2148.0 total,    379.9 free,   1521.1 used,    247.0 buff/cache
MiB Swap:   2404.0 total,   2367.0 free,     37.0 used.    626.9 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                     
     86 clamav    20   0 1252624 864520   8200 S   0.0  39.3   0:27.11 clamd



I tried moving all the other containers to the third node to see if it would be better, but it isn't.


I'll try giving it 3 GB to see if that helps.
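For reference, the limit can be raised from the PVE host with `pct`; a sketch, where 199 is just the container ID taken from the log above (substitute your own, and pick values that fit your host):

```shell
# Give container 199 a 3 GB memory limit and 2 GB of swap.
# ID and sizes are examples, not recommendations.
pct set 199 --memory 3072 --swap 2048
```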
 
I have the same problem with Proxmox Mail Gateway. It has 2 GB RAM and 2 GB swap assigned, and in Proxmox I only see 600-800 MB used. Still this error. Why?
 
did you enable persistent journaling in your container(s)? if not and your container uses systemd, it will log to memory with limits derived from the host RAM, which can lead to unexpected OOM situations.
 
1) How to enable persistent journaling?
2) If ram is a problem (e.g. because of journaling), I would have to see it in the ram usage in proxmox or top, wouldn't I?
 
1) How to enable persistent journaling?
short answer: `mkdir /var/log/journal && systemctl restart systemd-journald` - longer answer in the journald.conf(5) man page
also if possible i would increase the amount of ram (clamav does use quite a bit)
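Spelled out as a config sketch, assuming the container runs systemd; the size caps are illustrative values, not required ones (see journald.conf(5)):

```shell
# Creating this directory is what makes journald store logs on disk.
mkdir -p /var/log/journal

# Optionally also cap the journal's footprint; values here are examples.
cat >> /etc/systemd/journald.conf <<'EOF'
[Journal]
Storage=persistent
SystemMaxUse=100M
RuntimeMaxUse=32M
EOF

systemctl restart systemd-journald
```

`RuntimeMaxUse` limits the in-memory (`/run`) journal, which is the part that counts against the container's memory cgroup.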
 
In my case the problem still exists.
4 identical containers with 16 GB of RAM each, running Ubuntu 20.10.

I have disabled journaling in all containers and on the node as well:

Code:
echo "Storage=none" >> /etc/systemd/journald.conf
service systemd-journald restart

No luck

Code:
Apr 11 15:49:29 golf2 kernel: [47877.894306] ThreadPoolForeg invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=200
Apr 11 15:52:52 golf2 kernel: [48081.027589] NetworkService invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Apr 11 15:54:03 golf2 kernel: [48151.973780] lxcfs invoked oom-killer: gfp_mask=0x400dc0(GFP_KERNEL_ACCOUNT|__GFP_ZERO), order=1, oom_score_adj=0
Apr 11 16:25:10 golf2 kernel: [50019.304050] ThreadPoolForeg invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=300
 
In my case the problem still exists.

please open a new thread instead of replying to an unrelated one (this thread is specifically about clamav).
if you do - please include which services are running inside the container - I'd also watch how much memory they use... (maybe they simply need more than the configured 16 GB)?
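One way to watch that from inside a container without extra tooling is to sum resident memory over `/proc` (a rough sketch; per-process VmRSS is in kB, and kernel threads without a VmRSS line are simply skipped):

```shell
# Sum VmRSS across all processes; cat's stderr is discarded so PIDs
# that exit between globbing and reading don't break the pipeline.
total_kb=$(cat /proc/[0-9]*/status 2>/dev/null | awk '/^VmRSS:/ {s+=$2} END {print s+0}')
echo "total RSS: $((total_kb / 1024)) MiB"
```

Comparing that total against the container's configured limit shows how much headroom actually remains.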
 
