[SOLVED] VM stops at night, don't know why?

krystianen

New Member
Jun 6, 2021
Hi,

Thank you for reading my post :).
I have 2 VMs. One of them (100WinDerv2019) stops at night (not every night). When I start it again it works fine. This started 4 days ago; before that everything was fine.

Can you please help me locate the problem? Starting the VM manually every morning is a pain.

Here is my syslog from the last 48 hours to analyze.

https://pastebin.com/JBTXFr1p

 
I skimmed through your syslog, and it looks like the OOM (out-of-memory) killer stops your VM because your host reaches critically low levels of free RAM.

This can be seen in the syslog entries from around half past one at night:
Code:
Jun 22 01:31:33 BRGKDC1 kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/qemu.slice/100.scope,task=kvm,pid=6062,uid=0
Jun 22 01:31:33 BRGKDC1 kernel: Out of memory: Killed process 6062 (kvm) total-vm:16053380kB, anon-rss:12628920kB, file-rss:3360kB, shmem-rss:4kB, UID:0 pgtables:26224kB oom_score_adj:0
Jun 22 01:31:33 BRGKDC1 vzdump[10747]: VM 100 qmp command failed - VM 100 not running
Jun 22 01:31:33 BRGKDC1 kernel: oom_reaper: reaped process 6062 (kvm), now anon-rss:0kB, file-rss:112kB, shmem-rss:4kB
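If you want to check for such events yourself, you can search the kernel log. A minimal sketch (standard journalctl and grep; /var/log/syslog only exists if your system writes a plain syslog file):

Code:
# Search the kernel log / journal for OOM-killer activity:
journalctl -k | grep -i 'out of memory'

# Or grep the plain syslog file, if your system has one:
grep -i 'oom' /var/log/syslog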

The solution here would be to either increase the total RAM your system has, or decrease how much RAM your VMs have. You should generally not over-provision memory, or else things like this start to happen ;)
I'd suggest decreasing the RAM of your VMs until their combined maximum values are less than your total RAM, with some to spare for the host system, of course!
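As a quick sanity check of the memory budget, you can compare the host's total RAM with what each VM is configured to use. A small sketch (free is standard Linux; qm ships with PVE; the VM IDs are assumed from this thread):

Code:
# Total, used, and available RAM on the host:
free -h

# Configured memory (in MiB) per VM:
qm config 100 | grep -i memory
qm config 101 | grep -i memory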
 
Thank you very much for the reply.

My PVE server has 32 GB:
- VM 100: 12 GB
- VM 101: 4 GB

So I have 16 GB free, shouldn't that be enough?

PVE server overall (screenshot):
Here is something... almost 30 GB allocated, but I don't know for what.


VM 100 (screenshots)
 
So I have 16 GB free, shouldn't that be enough?
Alright, it seems something else is hogging quite a lot of RAM then.

You can look at what processes are using what resources in a very tidy way with e.g. htop.
That program also allows you to sort by column, in this case memory. This should clear up what is taking up so much RAM when your system is near capacity.
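If you prefer a one-shot command over an interactive view, plain ps can do the same sorting (standard procps, nothing PVE-specific):

Code:
# Top 10 processes by memory usage, header included:
ps aux --sort=-%mem | head -n 11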
 
It seems your problem comes down to using ZFS. ZFS reserves 50% of the system memory by default for use in the ARC (Adaptive Replacement Cache). However, the ARC might not release memory quickly enough when the system is at maximum capacity, which can lead to VMs sometimes being killed by the OOM killer.

The solution in this case would be to either decrease the VM memory, or to limit the maximum memory usage of the ARC. The PVE admin guide has a chapter on how to set the maximum memory available to the ARC.
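To see how large the ARC currently is and what its limit is set to, you can read the ZFS kernel statistics directly. A minimal sketch (the path and tools are standard for ZFS on Linux; arc_summary ships with the ZFS utilities):

Code:
# Current ARC size ("size") and configured maximum ("c_max"), in bytes:
awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Or the more readable summary tool:
arc_summary | head -n 30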

(Also, thanks @dcsapak and @fabian for the very helpful input ;) )
 
It seems your problem comes down to using ZFS. ZFS reserves 50% of the system memory by default for use in the ARC (Adaptive Replacement Cache). However, the ARC might not release memory quickly enough when the system is at maximum capacity, which can lead to VMs sometimes being killed by the OOM killer.
I'm not good with Proxmox; ZFS refers to storage, but I thought I had a RAM problem. Can you explain for a newbie how to figure it out? Thx
 
By default ZFS will use 50% of your RAM as a read cache called the ARC, so ZFS is using up to 16 GB of RAM too. You have 12 + 4 GB of RAM allocated to your VMs, but they will use more than that (maybe 18 GB instead of 16 GB) because of virtualization overhead. Then PVE itself needs about 2 GB of RAM. And your ZFS will also use up to 16 GB of RAM. So all together you might need 36 GB of RAM, and you only have 32 GB.
You could fix that by, for example, limiting your ARC size to 8 GB. That way ZFS will only use up to 8 GB for its ARC. How to do that is described in the link posted above.
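For reference, zfs_arc_max is specified in bytes, so an 8 GiB limit works out like this (plain shell arithmetic, nothing ZFS-specific):

Code:
# 8 GiB in bytes: 8 * 1024 * 1024 * 1024
echo $((8 * 1024 * 1024 * 1024))
# -> 8589934592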
 
By default ZFS will use 50% of your RAM as a read cache called the ARC, so ZFS is using up to 16 GB of RAM too. You have 12 + 4 GB of RAM allocated to your VMs, but they will use more than that (maybe 18 GB instead of 16 GB) because of virtualization overhead. Then PVE itself needs about 2 GB of RAM. And your ZFS will also use up to 16 GB of RAM. So all together you might need 36 GB of RAM, and you only have 32 GB.
You could fix that by, for example, limiting your ARC size to 8 GB. That way ZFS will only use up to 8 GB for its ARC. How to do that is described in the link posted above.
Nice explanation, I will try it and report back here.
 
So adding the following line to /etc/modprobe.d/zfs.conf will fix the problem by limiting the ARC?

Code:
options zfs zfs_arc_max=8589934592
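Note that editing the file alone is not enough; the option still has to be applied. A short sketch (the runtime parameter is standard ZFS on Linux, the initramfs step is what the PVE admin guide recommends, and the value is taken from the config line above):

Code:
# Apply the new limit at runtime, without a reboot:
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

# If your root filesystem is on ZFS, also refresh the initramfs
# so the modprobe option takes effect at boot:
update-initramfs -u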
 
Hello,

I'm having a similar issue (I guess), so I hope it is fine to revive this thread (if not, please let me know).
I have 48 GB of RAM in my host and 1 VM with 24 GB RAM running, plus a small 240 GB ZFS array and one large 3.78 TB (net) one.

According to my VM logs, the VM crashed yesterday at 19:47:
Code:
Das System wurde zuvor am ‎13.‎12.‎2023 um 19:47:30 unerwartet heruntergefahren.

(Translated: "The system previously shut down unexpectedly on 13.12.2023 at 19:47:30.") For that time I have nothing in my syslog, but I found some OOM messages 10 minutes later, which is strange, as both host and VM are set to the correct CET time.

Code:
Dec 13 19:56:18 pve01 kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init.scope,mems_allowed=0,global_oom,task_memcg=/qemu.slice/102.scope,task=kvm,pid=1986,uid=0
Dec 13 19:56:18 pve01 kernel: Out of memory: Killed process 1986 (kvm) total-vm:31217304kB, anon-rss:25235120kB, file-rss:2560kB, shmem-rss:0kB, UID:0 pgtables:51672kB oom_score_adj:0
Dec 13 19:56:18 pve01 systemd: Starting systemd-journald.service - Journal Service...
Dec 13 19:56:18 pve01 systemd: 102.scope:  process of this unit has been killed by the OOM killer.
Dec 13 19:56:18 pve01 systemd: 102.scope: Failed with result 'oom-kill'.

Here is the full syslog for this timeframe:
https://pastebin.com/mni4BVL2
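To narrow down what happened in that window, the journal can be filtered to the exact timeframe. A minimal sketch (standard journalctl; the timestamps are taken from the logs above):

Code:
# Everything the host logged around the crash:
journalctl --since "2023-12-13 19:40:00" --until "2023-12-13 20:00:00"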
 
