VM killed by OOM killer, I don't understand why

decibel83

Renowned Member
Oct 15, 2008
Hello,
on a PVE node with 256 GB of RAM, a 96 GB QEMU virtual machine is regularly killed by the Out Of Memory (OOM) killer and I don't understand why this is happening:

Code:
[Wed Oct  4 19:07:29 2023] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=qemu.slice,mems_allowed=0,global_oom,task_memcg=/qemu.slice/208.scope,task=kvm,pid=3243931,uid=0
[Wed Oct  4 19:07:29 2023] Out of memory: Killed process 3243931 (kvm) total-vm:103591776kB, anon-rss:100910216kB, file-rss:9876kB, shmem-rss:4kB, UID:0 pgtables:198896kB oom_score_adj:0
[Wed Oct  4 19:07:32 2023] oom_reaper: reaped process 3243931 (kvm), now anon-rss:0kB, file-rss:68kB, shmem-rss:4kB

I'm using ZFS, but the ARC cache is capped at 10 GB:

Code:
root@node01:/# cat /sys/module/zfs/parameters/zfs_arc_max
10737418240
root@node01:/# cat /sys/module/zfs/parameters/zfs_arc_min
10737418239
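
This limit is applied as a ZFS module parameter; for reference, a minimal sketch of how such a cap is usually made persistent (assuming the stock /etc/modprobe.d mechanism; when the root pool is ZFS the initramfs has to be refreshed as well):

Code:
# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=10737418240

# when root is on ZFS, refresh the initramfs so the limit applies at boot
update-initramfs -u -k all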

And it's currently using about 10 GB:

Code:
root@node01:~# cat /proc/spl/kstat/zfs/arcstats|grep size
size                            4    10729720720

The RAM usage on the host node is not high and stays constantly below 50% (the dip you see in the graph is from when the virtual machine was killed):

Screenshot 2023-10-05 at 11.04.18.png

On this node I have three virtual machines with a total of about 106 GB of configured memory, so I'm not overcommitting memory there:

Code:
root@node01:/etc/pve/nodes/node01/qemu-server# grep memory *
101.conf:memory: 8192
108.conf:memory: 2048
208.conf:memory: 98304
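
A quick sanity check summing the configured memory straight from the configs (the one-liner and its output are only for illustration; the total works out to 108544 MiB, i.e. 106 GiB):

Code:
root@node01:~# grep -h '^memory:' /etc/pve/nodes/node01/qemu-server/*.conf | awk '{s+=$2} END {printf "%d MiB (%.0f GiB)\n", s, s/1024}'
108544 MiB (106 GiB)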

This is the PVE version:

Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.108-1-pve)
pve-manager: 7.4-16 (running version: 7.4-16/0f39f621)
pve-kernel-5.15: 7.4-4
pve-kernel-5.15.108-1-pve: 5.15.108-2
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.3-1
proxmox-backup-file-restore: 2.4.3-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-6
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+1
pve-firewall: 4.3-5
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1

This has happened twice in the last four days.

Could you help me understand why this is happening and how to solve it, please?

Thank you very much!
 
Same problem here: in my case there was no swap partition, and just adding one solved the issue.
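
If someone wants to check whether they are in the same situation, the current swap state is easy to verify (plain util-linux commands, nothing Proxmox-specific):

Code:
swapon --show     # no output at all means no swap device or file is active
free -h           # the "Swap:" line should show a non-zero total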
 
Hi, could you attach the full OOM killer output from the journal? It should start with something like <...> invoked oom-killer: <...>. This might give some clues as to why the OOM killer was invoked in the first place.
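
Something along these lines should pull it out of the kernel log (the timestamps are only an example; adjust them to around the time of the kill):

Code:
journalctl -k --since "2023-10-04 18:30" --until "2023-10-04 19:15" | grep -i -B 5 -A 60 "invoked oom-killer"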
 
You can find the journalctl log attached here.
 

Attachments

  • journalctl.txt
    23.8 KB
I could imagine that the problem is caused by a process on the host, leading to the VM being killed by the OOM killer.
Could you paste the full journalctl of the host (or at least the 30 minutes before the VM was killed) from the day the VM was killed by the OOM killer?
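
For example (the time window below is only an illustration, adjust it to the day and time of the kill):

Code:
journalctl --since "2023-10-04 18:30" --until "2023-10-04 19:30" > node01-journalctl.txt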
 

Attached you can find the entire journalctl for the whole day.

Thank you!
 

Attachments

  • node1-journalctl-20231005.txt
    963.3 KB
I could not find a distinct cause, but it looks like your system ran out of large contiguous memory pages while the VM was running and was not able to defragment them in the given time.
Code:
Oct 04 19:06:40 node01 kernel: Node 0 Normal: 761989*4kB (UMEH) 680*8kB (UH) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3053396kB
Oct 04 19:06:40 node01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Oct 04 19:06:40 node01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
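
If it happens again, the buddy allocator state on the host at that moment would be interesting; in the log above every order larger than 8 kB is already at zero. These are standard kernel interfaces (manual compaction is only a diagnostic aid, not a fix):

Code:
cat /proc/buddyinfo                   # free blocks per order: 4kB, 8kB, 16kB, ...
echo 1 > /proc/sys/vm/compact_memory  # ask the kernel to compact memory right now
cat /proc/buddyinfo                   # compare: did any higher orders come back?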

Did it happen again? And have you created swap since?
 
Please create swap space, preferably not on ZFS (it's an annoying default of Proxmox not to provision swap when installing with root on ZFS, but swap has been known to be unreliable on ZFS for a long time), and please don't set swappiness to 0.

Swap allows the kernel to migrate pages between memory zones. It mitigates memory fragmentation, allowing the kernel to allocate larger pages on demand, instead of returning ENOMEM and exercising unreliably tested parts of the code. Some good background here:

https://www.reddit.com/r/linuxadmin/comments/14g0buv/whats_your_take_on_swap_for_servers/jp31ula/
https://unix.stackexchange.com/ques...pace-if-i-have-more-than-enough-amount-of-ram
etc

If you see a lot of CPU activity from "kswapd" in `top`, you are in particular suffering from the kernel trying to work out how to free contiguous blocks of memory so it can keep a safe reserve of each page size available. If it can't free things up quickly enough, processes will get killed by the OOM killer (and the only process it's going to rate highly as killable is the giant kvm process running your VM), even when there is seemingly a large amount of free memory, just not enough pages of the right sizes for whatever the kernel needs at that particular moment.
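
To make that concrete, a minimal sketch of what this could look like on a host installed with root on ZFS (device name, swappiness value and sysctl file name are placeholders; put the swap on a raw partition or a non-ZFS filesystem, not on a zvol or dataset):

Code:
# assuming /dev/sdX2 is a spare, non-ZFS partition -- adapt to your disk layout
mkswap /dev/sdX2
swapon /dev/sdX2
echo '/dev/sdX2 none swap sw 0 0' >> /etc/fstab

# leave some swappiness so the kernel can actually move cold pages out
sysctl -w vm.swappiness=10
echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf   # hypothetical file name

A swap file on an ext4 filesystem (created with fallocate or dd, then mkswap and swapon) works just as well; the point is only that the kernel has somewhere to move cold anonymous pages so it can defragment the rest of memory.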
 
