Container gets OOM-killed every now and then even though memory usage is low

ibigbug

Member
Jan 10, 2020
Hi,

I have Pi-hole running on PVE inside an LXC container. It was running fine for a while, but recently it's getting OOM-killed every few minutes, even though the memory usage of both the container and the PVE host is relatively low.

Any ideas how to debug this?

Thanks
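
So far I've only collected the config, version info and kernel log below; beyond that, the extra checks I can think of are looking at memory from inside the container and grepping the host kernel log, roughly like this (just a sketch):

Code:
# container config as PVE sees it
pct config 100

# memory usage as seen from inside the container
pct exec 100 -- free -m

# check the host kernel log for OOM events
dmesg -T | grep -iE 'oom|killed process'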

Code:
root@pve:~# cat /etc/pve/lxc/100.conf
arch: amd64
cores: 1
hostname: pihole
memory: 1024
nameserver: 127.0.0.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.0.1.2,hwaddr=8E:44:6A:30:A2:8C,ip=10.0.1.5/24,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-100-disk-0,size=8G
searchdomain: x
swap: 512
unprivileged: 1
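
(If it turns out the 1024 MB limit is simply too small for pihole-FTL, I know it can be raised from the host with pct; the values below are just an example, not something I've applied:)

Code:
pct set 100 --memory 2048 --swap 1024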

Code:
root@pve:~# pveversion -v
proxmox-ve: 6.2-2 (running kernel: 5.4.65-1-pve)
pve-manager: 6.2-12 (running version: 6.2-12/b287dd27)
pve-kernel-5.4: 6.2-7
pve-kernel-helper: 6.2-7
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 0.9.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-12
pve-cluster: 6.1-8
pve-container: 3.2-2
pve-docs: 6.2-6
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-1
pve-qemu-kvm: 5.1.0-2
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-14
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1

Code:
[500156.783121] DNS client invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=0
[500156.783123] CPU: 2 PID: 32168 Comm: DNS client Tainted: P        W  O      5.4.65-1-pve #1
[500156.783123] Hardware name: BIOSTAR Group B250GT3/B250GT3, BIOS 5.12 01/07/2020
[500156.783124] Call Trace:
[500156.783129]  dump_stack+0x6d/0x9a
[500156.783130]  dump_header+0x4f/0x1e1
[500156.783131]  oom_kill_process.cold.33+0xb/0x10
[500156.783132]  out_of_memory+0x1ad/0x490
[500156.783134]  mem_cgroup_out_of_memory+0xc4/0xd0
[500156.783136]  try_charge+0x76b/0x7e0
[500156.783138]  ? __alloc_pages_nodemask+0x16a/0x330
[500156.783139]  mem_cgroup_try_charge+0x71/0x190
[500156.783141]  mem_cgroup_try_charge_delay+0x22/0x50
[500156.783142]  wp_page_copy+0x11c/0x750
[500156.783144]  ? record_times+0x1b/0x90
[500156.783145]  ? reuse_swap_page+0x144/0x330
[500156.783147]  do_wp_page+0x91/0x680
[500156.783148]  __handle_mm_fault+0xbb5/0x12e0
[500156.783149]  handle_mm_fault+0xc9/0x1f0
[500156.783151]  __do_page_fault+0x233/0x4c0
[500156.783152]  do_page_fault+0x2c/0xe0
[500156.783154]  page_fault+0x34/0x40
[500156.783156] RIP: 0010:copy_user_generic_unrolled+0x89/0xc0
[500156.783157] Code: 38 4c 89 47 20 4c 89 4f 28 4c 89 57 30 4c 89 5f 38 48 8d 76 40 48 8d 7f 40 ff c9 75 b6 89 d1 83 e2 07 c1 e9 03 74 12 4c 8b 06 <4c> 89 07 48 8d 76 08 48 8d 7f 08 ff c9 75 ee 21 d2 74 10 89 d1 8a
[500156.783158] RSP: 0000:ffffbe328944fe50 EFLAGS: 00050202
[500156.783159] RAX: 00007ff2c0cbeec0 RBX: 0000000000000000 RCX: 0000000000000002
--
[500156.783222] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=ns,mems_allowed=0,oom_memcg=/lxc/100,task_memcg=/lxc/100/ns/user.slice/user-998.slice/session-c10.scope,task=pihole-FTL,pid=32162,uid=100998
[500156.783232] Memory cgroup out of memory: Killed process 32162 (pihole-FTL) total-vm:401088kB, anon-rss:19520kB, file-rss:0kB, shmem-rss:264kB, UID:100998 pgtables:132kB oom_score_adj:0
[500156.783953] oom_reaper: reaped process 32162 (pihole-FTL), now anon-rss:0kB, file-rss:0kB, shmem-rss:268kB
[500160.491897] audit: type=1325 audit(1602133220.840:147230): table=filter family=7 entries=0
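
Since the trace reports oom_memcg=/lxc/100, the container's memory cgroup can also be read directly on the host to see whether the limit is actually being hit (a sketch, assuming the default cgroup v1 layout on this 5.4 kernel; paths differ under cgroup v2):

Code:
cat /sys/fs/cgroup/memory/lxc/100/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/lxc/100/memory.usage_in_bytes
# number of times the limit was hit
cat /sys/fs/cgroup/memory/lxc/100/memory.failcnt
# per-type breakdown (cache, rss, swap, ...)
head -n 20 /sys/fs/cgroup/memory/lxc/100/memory.stat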

Mem of host: (screenshot attached)

Mem of container: (screenshot attached)
 
It would be nice to hear something about this, because we are running into an issue where 2-4 containers are having problems with the oom-killer.
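
In case it helps narrow things down on our side, a quick loop on the host shows which containers are actually hitting their memory cgroup limit (a sketch, assuming the same cgroup v1 paths as above):

Code:
for d in /sys/fs/cgroup/memory/lxc/*/; do
    printf '%s failcnt=%s limit=%s\n' "$(basename "$d")" \
        "$(cat "${d}memory.failcnt")" "$(cat "${d}memory.limit_in_bytes")"
done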
 
Hello,

I have the same problem: the container gets OOM-killed even though the memory usage of the container and the PVE host is relatively low in the Proxmox GUI. Any new info?

Regards,
 
