oom-killer activity on debian stretch LXC

dm15

New Member
Mar 4, 2018
Hello, since the last update I see strange oom-killer activity in dmesg for some processes in Debian LXC containers, and the problem doesn't really seem to be due to a lack of memory, because the system has 70G of memory with 15G available. I guess many users can confirm the issue; it seems to happen because of some bad interpretation of free/available memory by the LXC containers.

Practical examples:

Start Tor inside the Debian Stretch LXC container with systemctl start tor and the following shows up in dmesg:

[51178.341156] tor invoked oom-killer: gfp_mask=0x14000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=0
[51178.341158] tor cpuset=ns mems_allowed=0-1
[51178.341164] CPU: 15 PID: 7908 Comm: tor Tainted: G IO 4.15.18-7-pve #1
[51178.341164] Hardware name: HP ProLiant DL160 G6 , BIOS O33 07/01/2013
[51178.341165] Call Trace:
[51178.341174] dump_stack+0x63/0x8b
[51178.341178] dump_header+0x6e/0x285
[51178.341180] oom_kill_process+0x21d/0x440
[51178.341182] out_of_memory+0x11d/0x4d0
[51178.341184] mem_cgroup_out_of_memory+0x4b/0x80
[51178.341187] mem_cgroup_oom_synchronize+0x324/0x340
[51178.341188] ? mem_cgroup_css_reset+0xe0/0xe0
[51178.341190] pagefault_out_of_memory+0x36/0x7b
[51178.341193] mm_fault_error+0x8f/0x190
[51178.341195] __do_page_fault+0x4c8/0x500
[51178.341197] do_page_fault+0x2e/0xe0
[51178.341199] ? page_fault+0x2f/0x50
[51178.341201] page_fault+0x45/0x50
[51178.341202] RIP: 0033:0x7f2a81d8e371
[51178.341203] RSP: 002b:00007f2a777196c8 EFLAGS: 00010202
[51178.341205] RAX: 00007f29d62f43c0 RBX: 00007f2a20001fd0 RCX: 0000000000000000
[51178.341206] RDX: 00007f2a14a3cafd RSI: 000000000080aaee RDI: 00007f29d5f73ec0
[51178.341207] RBP: 00007f2a20002394 R08: 000000000080aaee R09: 00000000000003d6
[51178.341208] R10: 0000000000000040 R11: 00007f29d5f6e2c0 R12: 00007f29d5f6e010
[51178.341209] R13: 00007f2a81d98dc0 R14: 00007f2a20001fd0 R15: 0000000000000000
[51178.341210] Task in /lxc/165/ns killed as a result of limit of /lxc/165
[51178.341215] memory: usage 524288kB, limit 524288kB, failcnt 0
[51178.341216] memory+swap: usage 524288kB, limit 524288kB, failcnt 2247046
[51178.341217] kmem: usage 9828kB, limit 9007199254740988kB, failcnt 0
[51178.341217] Memory cgroup stats for /lxc/165: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
[51178.341232] Memory cgroup stats for /lxc/165/ns: cache:16676KB rss:497784KB rss_huge:0KB shmem:16476KB mapped_file:8316KB dirty:0KB writeback:0KB swap:0KB inactive_anon:257332KB active_anon:256928KB inactive_file:148KB active_file:52KB unevictable:0KB
[51178.341245] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
[51178.341637] [32614] 0 32614 51083 356 163840 0 0 systemd
[51178.341640] [32698] 0 32698 13572 2592 143360 0 0 systemd-journal
[51178.341642] [32725] 0 32725 6999 61 98304 0 0 cron
[51178.341644] [32726] 0 32726 62530 152 139264 0 0 rsyslogd
[51178.341646] [32728] 107 32728 11284 118 126976 0 -900 dbus-daemon
[51178.341649] [32738] 0 32738 49657 5038 282624 0 0 firewalld
[51178.341651] [32739] 0 32739 11606 132 126976 0 0 systemd-logind
[51178.341654] [ 631] 0 631 17487 193 180224 0 -1000 sshd
[51178.341656] [ 632] 0 632 3168 34 65536 0 0 agetty
[51178.341658] [ 633] 0 633 3168 35 73728 0 0 agetty
[51178.341660] [ 819] 0 819 20292 201 139264 0 0 master
[51178.341663] [ 822] 101 822 20808 193 163840 0 0 pickup
[51178.341665] [ 823] 101 823 20820 195 163840 0 0 qmgr
[51178.341667] [ 1749] 1000 1749 16220 202 167936 0 0 systemd
[51178.341669] [ 1750] 1000 1750 41075 375 204800 0 0 (sd-pam)
[51178.341714] [ 7431] 108 7431 782914 116713 1695744 0 0 tor
[51178.341719] Memory cgroup out of memory: Kill process 7431 (tor) score 893 or sacrifice child
[51178.343142] Killed process 7431 (tor) total-vm:3131656kB, anon-rss:466848kB, file-rss:4kB, shmem-rss:0kB
[51178.384851] oom_reaper: reaped process 7431 (tor), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
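
For reference, the limit being hit in the trace is the memory cgroup limit of /lxc/165 (524288 kB, i.e. 512 MB). Assuming the number in the cgroup path is the container ID, it can be cross-checked from the Proxmox host with something like:

# configured memory limit of container 165
pct config 165 | grep -i memory
# the same limit as the kernel sees it (cgroup v1 path; may differ on other setups)
cat /sys/fs/cgroup/memory/lxc/165/memory.limit_in_bytes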


Now let's just start Tor by calling /usr/bin/tor:

Oct 25 20:03:05.151 [notice] Tor 0.2.9.16 (git-9ef571339967c1e5) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.1.0f and Zlib 1.2.8.
Oct 25 20:03:05.151 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Oct 25 20:03:05.151 [notice] Read configuration file "/etc/tor/torrc".
Oct 25 20:03:05.158 [notice] Opening Socks listener on 127.0.0.1:9050
Oct 25 20:03:05.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Oct 25 20:03:05.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Oct 25 20:03:05.000 [warn] You are running Tor as root. You don't need to, and you probably shouldn't.
Oct 25 20:03:05.000 [notice] Bootstrapped 0%: Starting
Oct 25 20:03:05.000 [notice] Bootstrapped 80%: Connecting to the Tor network
Oct 25 20:03:06.000 [notice] Bootstrapped 85%: Finishing handshake with first hop
Oct 25 20:03:06.000 [notice] Bootstrapped 90%: Establishing a Tor circuit
Oct 25 20:03:07.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Oct 25 20:03:07.000 [notice] Bootstrapped 100%: Done

As we can see, there is no problem with memory at all, and everything works fine unless Tor is started as a service.
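
To narrow down which directive of the service triggers this, one thing worth trying is running the same binary as a transient unit with only the AppArmor profile from the packaged service applied (a rough sketch; whether AppArmorProfile can be set on transient units depends on the systemd version inside the container):

# inside the container: run tor once under the system_tor profile only
# (profile name taken from the packaged unit), then watch dmesg as above
systemd-run --property=AppArmorProfile=system_tor /usr/bin/tor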

I found out that commenting out these two lines in /lib/systemd/system/tor@default.service resolves the issue:
#AppArmorProfile=system_tor
...
#CapabilityBoundingSet=CAP_SETUID CAP_SETGID CAP_NET_BIND_SERVICE CAP_DAC_OVERRIDE
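
If commenting those directives out is the workaround you settle on, one way to keep it across tor package upgrades is to override the unit under /etc/systemd/system instead of editing the file shipped in /lib (a sketch, run inside the container):

# copy the packaged unit; the copy in /etc takes precedence
cp /lib/systemd/system/tor@default.service /etc/systemd/system/tor@default.service
# comment out the two directives in the copy
sed -i -e 's/^AppArmorProfile=/#&/' -e 's/^CapabilityBoundingSet=/#&/' \
    /etc/systemd/system/tor@default.service
systemctl daemon-reload
systemctl restart tor@default

Units in /etc/systemd/system take precedence over /lib/systemd/system, so the packaged file stays untouched.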

Is there a way to avoid this problem globally, for all containers? It seems to be some kind of conflict caused by AppArmor on the host's side.
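
To see whether the host's AppArmor setup is actually involved, it may help to watch the host's kernel log for AppArmor messages while the service is started inside the container (a diagnostic sketch, not a fix):

# on the Proxmox host:
dmesg -w | grep -i apparmor
# or, equivalently:
journalctl -kf | grep -i apparmor
# then run `systemctl start tor` inside the container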
 
[51178.341719] Memory cgroup out of memory: Kill process 7431 (tor) score 893 or sacrifice child
That's why the OOM killer goes into action: the service fails to set the capabilities and keeps trying recursively, consuming all of the memory assigned to the container.
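
If that is what happens, the growth should be visible from the host while the service tries to start, e.g. (cgroup v1 path taken from the trace above; it may differ on other setups):

# on the host: watch the container's memory cgroup fill up while
# `systemctl start tor` runs inside the container
watch -n1 cat /sys/fs/cgroup/memory/lxc/165/memory.usage_in_bytes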
 
