CentOS 8 on LXC and the output of getconf

LMC
Apr 16, 2019
Hi,

As you know, LiteSpeed offers licenses for 2 and 8 GB of RAM, and it uses getconf to read the available memory. There has been no problem with any of these licenses on CentOS 7, but now, with the availability of CentOS 8, I decided to give it a try on my dev server... and here comes the problem.

[root@apps ~]# getconf -a | grep PAGES
PAGESIZE 4096
_AVPHYS_PAGES 245662
_PHYS_PAGES 18552933

That means 4096 * 18552933 / 1024 / 1024 = 72,472 MB (roughly 72 GB). But:

[root@apps ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           2048         361        1440           7         246        1686
Swap:          2560          20        2539
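The arithmetic behind the mismatch is easy to check; a quick sketch, with the page counts copied from the getconf output above:

```shell
# What a getconf-based memory check computes inside the CentOS 8 container:
pagesize=4096        # PAGESIZE
phys_pages=18552933  # _PHYS_PAGES as reported in the guest
echo $(( pagesize * phys_pages / 1024 / 1024 ))  # prints 72472 (MB), ~72 GB
```

So getconf sees roughly 72 GB while free, which reads /proc/meminfo, sees the 2048 MB container limit.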

The same thing, on CentOS 7:

[root@apps ~]# getconf -a | grep PAGES
PAGESIZE 4096
_AVPHYS_PAGES 244399
_PHYS_PAGES 524288
[root@apps ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           2048         461         954          22         632        1586
Swap:          2560           0        2559

Has anyone encountered this problem? Could it be a bug in LXC, or in the version of getconf shipped with CentOS 8?


Best regards,
Daniel
 
Hi,

Has anyone encountered this problem? Could it be a bug in LXC, or in the version of getconf shipped with CentOS 8?

I just reproduced it here.

It seems CentOS 7 has getconf from glibc 2.17, while CentOS 8 has getconf from glibc 2.28.

In 2.28, a sysinfo() call is made, which returns the value from the host.
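You can see the two views side by side by comparing what getconf computes with what /proc/meminfo (which lxcfs virtualizes inside the container) reports. A minimal check, runnable inside the guest:

```shell
# getconf's view: glibc sysconf -> sysinfo(), which is not namespaced
getconf_mb=$(( $(getconf PAGESIZE) * $(getconf _PHYS_PAGES) / 1024 / 1024 ))
# /proc/meminfo's view: what free/top use; lxcfs rewrites this for LXC guests
proc_mb=$(( $(awk '/^MemTotal:/ {print $2}' /proc/meminfo) / 1024 ))
echo "getconf:       ${getconf_mb} MB"
echo "/proc/meminfo: ${proc_mb} MB"
```

On an affected CentOS 8 container the first number matches the host; on CentOS 7 both match the container limit.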
 
I'm seeing this issue in a CentOS 8 container too. For me it manifests as the tmpfs mounts being sized at half of the host's memory rather than half of the guest's (the host has 32 GB; the guest is configured with 256 MB).

Code:
# df -h
Filesystem                    Size  Used Avail Use% Mounted on
rpool/data/subvol-102-disk-0   60G  518M   60G   1% /
none                          492K     0  492K   0% /dev
udev                           16G     0   16G   0% /dev/tty
tmpfs                          16G     0   16G   0% /dev/shm
tmpfs                          16G  8.1M   16G   1% /run
tmpfs                          16G     0   16G   0% /sys/fs/cgroup
tmpfs                         3.2G     0  3.2G   0% /run/user/0

Looks like this is happening with upstream lxc too: https://discuss.linuxcontainers.org/t/strange-size-and-behavior-with-tmpfs/6329

I'm really not sure whether this is a bug in LXC, in systemd, or somewhere else in CentOS 8. Any thoughts? I'm looking for some way to pass through an lxc parameter to override this memory detection...
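One workaround sketch (not Proxmox-specific, and the 128M size is just an illustrative value) is to cap the tmpfs mounts explicitly inside the guest instead of relying on the "half of detected RAM" default:

```shell
# Shrink an oversized tmpfs on a running guest (requires root):
mount -o remount,size=128M /dev/shm
# To make it persistent, an example line for the guest's /etc/fstab:
# tmpfs  /dev/shm  tmpfs  defaults,size=128M  0  0
```

This only sizes the mounts; it doesn't fix whatever is mis-detecting the memory in the first place.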
 
Hmmm, never mind, I see this same tmpfs behaviour in CentOS 7 and Debian containers too. Nonetheless, I am having OOM problems with this container, as discussed in the linked linuxcontainers.org thread. Here's one where systemd-journal was killed:
Code:
Dec 16 14:10:49 HOST kernel: dnf invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=0
Dec 16 14:10:49 HOST kernel: CPU: 1 PID: 30987 Comm: dnf Tainted: P        W  O      5.3.10-1-pve #1
Dec 16 14:10:49 HOST kernel: Hardware name: Supermicro Super Server/X10SRi-F, BIOS 3.1c 05/02/2019
Dec 16 14:10:49 HOST kernel: Call Trace:
Dec 16 14:10:49 HOST kernel:  dump_stack+0x63/0x8a
Dec 16 14:10:49 HOST kernel:  dump_header+0x4f/0x1e1
Dec 16 14:10:49 HOST kernel:  oom_kill_process.cold.34+0xb/0x10
Dec 16 14:10:49 HOST kernel:  out_of_memory+0x1ad/0x490
Dec 16 14:10:49 HOST kernel:  mem_cgroup_out_of_memory+0xc4/0xd0
Dec 16 14:10:49 HOST kernel:  try_charge+0x734/0x7c0
Dec 16 14:10:49 HOST kernel:  mem_cgroup_try_charge+0x71/0x190
Dec 16 14:10:49 HOST kernel:  mem_cgroup_try_charge_delay+0x22/0x50
Dec 16 14:10:49 HOST kernel:  wp_page_copy+0x119/0x740
Dec 16 14:10:49 HOST kernel:  ? mem_cgroup_commit_charge+0x63/0x480
Dec 16 14:10:49 HOST kernel:  do_wp_page+0x91/0x590
Dec 16 14:10:49 HOST kernel:  __handle_mm_fault+0xb40/0x1250
Dec 16 14:10:49 HOST kernel:  ? __switch_to_xtra+0x189/0x5b0
Dec 16 14:10:49 HOST kernel:  handle_mm_fault+0xc5/0x1e0
Dec 16 14:10:49 HOST kernel:  __do_page_fault+0x233/0x4c0
Dec 16 14:10:49 HOST kernel:  do_page_fault+0x2c/0xe0
Dec 16 14:10:49 HOST kernel:  page_fault+0x34/0x40
Dec 16 14:10:49 HOST kernel: RIP: 0033:0x7f607a0969f7
Dec 16 14:10:49 HOST kernel: Code: 00 00 00 81 c6 00 00 00 80 48 8d 34 76 49 63 34 b2 85 f6 78 ee 48 8d 74 b5 00 44 8b 16 4f 8d 14 90 41 39 12 0f 84 c1 01 00 00 <41> 89 52 fc 83 2e 01 48 85 c0 74 ad 8b 36 49 83 c1 04 89 3c b0 41
Dec 16 14:10:49 HOST kernel: RSP: 002b:00007ffc15cde570 EFLAGS: 00010297
Dec 16 14:10:49 HOST kernel: RAX: 0000557ae432fe60 RBX: 0000557ae3979660 RCX: 0000557ae3f22ec0
Dec 16 14:10:49 HOST kernel: RDX: 0000000000000567 RSI: 0000557ae41c695c RDI: 00000000000077ab
Dec 16 14:10:49 HOST kernel: RBP: 0000557ae41a8ab0 R08: 00007f606c00e010 R09: 00007f606c305310
Dec 16 14:10:49 HOST kernel: R10: 00007f606c060008 R11: 0000000000000000 R12: 0000000000020ce1
Dec 16 14:10:49 HOST kernel: R13: 0000000000000000 R14: 0000000000020ce1 R15: 00000000000774ee
Dec 16 14:10:49 HOST kernel: memory: usage 262144kB, limit 262144kB, failcnt 0
Dec 16 14:10:49 HOST kernel: memory+swap: usage 262144kB, limit 262144kB, failcnt 6505
Dec 16 14:10:49 HOST kernel: kmem: usage 31532kB, limit 9007199254740988kB, failcnt 0
Dec 16 14:10:49 HOST kernel: Memory cgroup stats for /lxc/102:
Dec 16 14:10:49 HOST kernel: anon 108449792
Dec 16 14:10:49 HOST kernel: Tasks state (memory values in pages):
Dec 16 14:10:49 HOST kernel: [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
Dec 16 14:10:49 HOST kernel: [  14064] 100000 14064    43937      541   229376        0             0 systemd
Dec 16 14:10:49 HOST kernel: [  14414] 100000 14414     1630       27    61440        0             0 agetty
Dec 16 14:10:49 HOST kernel: [  14415] 100000 14415     1630       27    57344        0             0 agetty
Dec 16 14:10:49 HOST kernel: [  14368] 100000 14368    78883    39138   684032        0             0 systemd-journal
Dec 16 14:10:49 HOST kernel: [  14374] 100000 14374    24115      280   204800        0             0 systemd-udevd
Dec 16 14:10:49 HOST kernel: [  14383] 100000 14383    21312      249   204800        0             0 systemd-logind
Dec 16 14:10:49 HOST kernel: [  14387] 100081 14387    13308      247   143360        0             0 dbus-daemon
Dec 16 14:10:49 HOST kernel: [  14388] 100000 14388   113339     5312   380928        0             0 firewalld
Dec 16 14:10:49 HOST kernel: [  14391] 100000 14391    92227      638   356352        0             0 NetworkManager
Dec 16 14:10:49 HOST kernel: [  14443] 100000 14443    32912      582   249856        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14445] 100998 14445    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14446] 100998 14446    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14447] 100998 14447    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14448] 100998 14448    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14449] 100998 14449    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14450] 100998 14450    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14451] 100998 14451    34250      794   274432        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14452] 100998 14452    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14453] 100998 14453    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14454] 100998 14454    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14455] 100998 14455    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14456] 100998 14456    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14457] 100998 14457    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14458] 100998 14458    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14459] 100998 14459    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14460] 100998 14460    34133      695   266240        0             0 nginx
Dec 16 14:10:49 HOST kernel: [  14409] 100000 14409    19454      220   192512        0             0 sshd
Dec 16 14:10:49 HOST kernel: [  14419] 100000 14419   158258     5576   425984        0             0 fail2ban-server
Dec 16 14:10:49 HOST kernel: [  14416] 100000 14416     1630       27    57344        0             0 agetty
Dec 16 14:10:49 HOST kernel: [  14417] 100000 14417     5716      211    86016        0             0 crond
Dec 16 14:10:49 HOST kernel: [  14675] 100000 14675    66519     2495   233472        0             0 rsyslogd
Dec 16 14:10:49 HOST kernel: [  30987] 100000 30987   153398     9753   696320        0             0 dnf
Dec 16 14:10:49 HOST kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=ns,mems_allowed=0,oom_memcg=/lxc/102,task_memcg=/lxc/102/ns/system.slice/systemd-journald.service,task=systemd-journal,pid=14368,uid=100000
Dec 16 14:10:49 HOST kernel: Memory cgroup out of memory: Killed process 14368 (systemd-journal) total-vm:315532kB, anon-rss:2948kB, file-rss:0kB, shmem-rss:153604kB
Dec 16 14:10:49 HOST kernel: oom_reaper: reaped process 14368 (systemd-journal), now anon-rss:0kB, file-rss:0kB, shmem-rss:153756kB
 

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.