Hello fellas,
Recently I switched my Magento 2 setup, thankfully only the dev and staging environments, from VMs to LXC. So far so good!
But I've run into a very annoying problem on the backend servers where my crons are scheduled: after a random amount of time the LXC container hangs/freezes/becomes unresponsive, and it is not usable until I shut it down and start it again.
On my Proxmox host I also installed systemd-oomd and added some rules to protect my tasks in general; in addition, I added an override for my php8.2-fpm instance. I am really interested in a solution to this problem, since adding X amount of RAM or CPU is not really a solution.
Code:
/etc/systemd/oomd.conf
[OOM]
DefaultMemoryPressureLimit=80%
DefaultMemoryPressureDurationSec=10s
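For reference, the php8.2-fpm override is a systemd drop-in roughly along these lines (the values here are illustrative, not my exact file):

```ini
# /etc/systemd/system/php8.2-fpm.service.d/override.conf (illustrative)
[Service]
# Ask systemd-oomd to prefer other candidates over this service
ManagedOOMPreference=avoid
# Bias the kernel OOM killer away from the php-fpm master process
OOMScoreAdjust=-500
```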
If it's important: I use PVE 8.3.5 on a bare-metal installation, and my LXC containers run Ubuntu 22.04 LTS templates.
If I filter for OOMs on my PVE host, I get the following output:
journalctl -k | grep -i oom
Code:
Apr 02 12:27:29 c-003 kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=ns,mems_allowed=0-1,oom_memcg=/lxc/502,task_memcg=/lxc/502/ns/user.slice/user-0.slice/session-6846.scope,task=php,pid=4084601,uid=101000
Apr 02 12:27:29 c-003 kernel: Memory cgroup out of memory: Killed process 4084601 (php) total-vm:301924kB, anon-rss:176916kB, file-rss:192kB, shmem-rss:0kB, UID:101000 pgtables:552kB oom_score_adj:0
Apr 02 12:27:36 c-003 kernel: php8.2 invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Apr 02 12:27:36 c-003 kernel: oom_kill_process+0x110/0x240
Apr 02 12:27:36 c-003 kernel: [ pid ] uid tgid total_vm rss rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
Apr 02 12:27:36 c-003 kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=ns,mems_allowed=0-1,oom_memcg=/lxc/502,task_memcg=/lxc/502/ns/user.slice/user-0.slice/session-6846.scope,task=php,pid=4084611,uid=101000
Apr 02 12:27:36 c-003 kernel: Memory cgroup out of memory: Killed process 4084611 (php) total-vm:301924kB, anon-rss:176492kB, file-rss:384kB, shmem-rss:0kB, UID:101000 pgtables:552kB oom_score_adj:0
Apr 02 12:27:56 c-003 kernel: php8.2 invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Apr 02 12:27:56 c-003 kernel: oom_kill_process+0x110/0x240
Apr 02 12:27:56 c-003 kernel: [ pid ] uid tgid total_vm rss rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
Apr 02 12:27:56 c-003 kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=ns,mems_allowed=0-1,oom_memcg=/lxc/502,task_memcg=/lxc/502/ns/user.slice/user-0.slice/session-6846.scope,task=php,pid=4084603,uid=101000
Apr 02 12:27:56 c-003 kernel: Memory cgroup out of memory: Killed process 4084603 (php) total-vm:301924kB, anon-rss:176240kB, file-rss:576kB, shmem-rss:0kB, UID:101000 pgtables:544kB oom_score_adj:0
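For what it's worth, this is how I summarize which processes the kernel is killing and how much anonymous memory each held at kill time. The heredoc is just the sample lines from the log above; in practice you would pipe `journalctl -k` in instead:

```shell
# Summarize memory-cgroup OOM kills: pid, task name, anon-rss at kill time.
# Sample data below stands in for `journalctl -k | grep 'Killed process'`.
summary=$(awk '/Killed process/ {
    rss = ""
    for (i = 1; i <= NF; i++)
        if ($i ~ /^anon-rss:/) {          # strip key, unit, trailing comma
            rss = $i
            sub(/^anon-rss:/, "", rss)
            sub(/kB,?$/, "", rss)
        }
    # $13 = pid, $14 = (name), given the "Mon DD HH:MM:SS host kernel:" prefix
    print $13, $14, rss " kB anon-rss"
}' <<'EOF'
Apr 02 12:27:29 c-003 kernel: Memory cgroup out of memory: Killed process 4084601 (php) total-vm:301924kB, anon-rss:176916kB, file-rss:192kB, shmem-rss:0kB, UID:101000 pgtables:552kB oom_score_adj:0
Apr 02 12:27:36 c-003 kernel: Memory cgroup out of memory: Killed process 4084611 (php) total-vm:301924kB, anon-rss:176492kB, file-rss:384kB, shmem-rss:0kB, UID:101000 pgtables:552kB oom_score_adj:0
Apr 02 12:27:56 c-003 kernel: Memory cgroup out of memory: Killed process 4084603 (php) total-vm:301924kB, anon-rss:176240kB, file-rss:576kB, shmem-rss:0kB, UID:101000 pgtables:544kB oom_score_adj:0
EOF
)
printf '%s\n' "$summary"
```

Each kill is a ~175 MB php process, i.e. php CLI workers (my crons) rather than one runaway process.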
my lxc.conf:
Code:
arch: amd64
cores: 4
features: nesting=1
hostname: XXX
memory: 4092
mp0: local-lvm:vm-999-disk-1,mp=/mnt/shared,size=102G
nameserver: 8.8.8.8
net0: name=eth0,bridge=dmz,firewall=1,gw=X.X.X.X,hwaddr=BC:24:11:FC:12:F3,ip=X.X.X.X,type=veth
ostype: ubuntu
rootfs: local-lvm:vm-502-disk-0,size=40G
swap: 0
unprivileged: 1
lxc.cgroup2.memory.high: 3500M
lxc.cgroup2.memory.max: 4096M
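With `memory.high: 3500M` the kernel starts throttling and reclaiming the container's cgroup well before the 4096M hard limit, so it's worth checking from the host how often each threshold is actually hit. The counters live in the container cgroup's `memory.events` file; the heredoc below is made-up sample content just to show the fields (on the host, read the real file, e.g. `/sys/fs/cgroup/lxc/502/memory.events`):

```shell
# Extract the interesting counters from a cgroup v2 memory.events file.
# Sample content below is illustrative; read the real file on the host.
events=$(awk '$1 == "high" || $1 == "max" || $1 == "oom_kill" { print $1 "=" $2 }' <<'EOF'
low 0
high 1824
max 57
oom 12
oom_kill 12
oom_group_kill 0
EOF
)
printf '%s\n' "$events"
```

A steadily climbing `high` count with `max`/`oom_kill` following means the container is constantly under memcg pressure, not just hitting one bad spike.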
Additional note: what leads me to suspect it's somehow related to LXC is that, with the exact same setup (both the LXC container and the VM were configured via my Ansible playbooks), I never experienced a freeze/hang/unresponsive state while using a VM.
Thanks for your patience, and I'm happy about every suggestion!