I hope this hasn't been answered before. I found a lot of threads about VMs being killed because of memory, but in all the cases I found, the assigned memory was simply greater than the total memory available. That is not the case here.
I have 3 VMs on that machine:
>machine 1: 14 GB RAM, 12 cores
>machine 2: 2 GB RAM, 2 cores
>machine 3: 4 GB RAM, 2 cores
According to the Resources tab the hardware has 27.19 GB RAM and 16 cores, so with 20 GB assigned in total there should be plenty of memory available.
Now, from time to time (we're talking a few months of running without issues in between), Proxmox shuts down my VMs. The journal has the following entries:
>Jun 04 06:11:32 pve kernel: Out of memory: Killed process 936779 (kvm) total-vm:9303024kB, anon-rss:4395844kB, file-rss:640kB, shmem->
>Jun 04 06:11:30 pve systemd[1]: 160.scope: A process of this unit has been killed by the OOM killer.
>Jun 04 06:11:30 pve systemd[1]: 160.scope: Failed with result 'oom-kill'.
>Jun 04 06:11:31 pve systemd[1]: 160.scope: Consumed 3d 6h 15min 39.054s CPU time.
>Jun 04 06:11:32 pve systemd[1]: 100.scope: A process of this unit has been killed by the OOM killer.
>Jun 04 06:11:32 pve systemd[1]: 100.scope: Failed with result 'oom-kill'.
>Jun 04 06:11:32 pve systemd[1]: 100.scope: Consumed 5d 6h 10min 33.114s CPU time.
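In case it helps, this is roughly how I pull the full OOM report and the host's memory state from the shell (a minimal sketch; the VMIDs 100 and 160 are taken from the scope names above):
># full kernel OOM report, including the per-process memory table
>journalctl -k | grep -i -A 40 'out of memory'
># current host memory usage as the kernel sees it
>free -h
># configured memory / ballooning for the killed VMs
>qm config 100 | grep -iE 'memory|balloon'
>qm config 160 | grep -iE 'memory|balloon'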
Any idea what could cause this? And especially, what can I do to prevent it? I could reduce the assigned memory, but since there is already a lot of buffer, I would assume that only lengthens the interval until it happens again?
Thanks
(I am aware that the number of cores is maybe too high, but as memory is the reason for the shutdown, I don't expect that to be the issue.)