Proxmox runs out of memory if not restarted once a month

Jun 5, 2025
Hi, can someone help me with this strange issue?

So, for some reason Proxmox's RAM usage steadily increases while the system is running, until it is completely full and then things start to happen, as expected. The strange thing is that the RAM reserved for the VMs (combined) does not exceed the total physical memory of the server (64 GB). The last time memory started to run out (>99% in use, swap full), I calculated that the combined actual RAM usage of the VMs was only around 40 GB, so I wonder what caching (or other) activity Proxmox does in the background that uses the remaining 24 GB of RAM. I am not using ballooning for any of the VMs, if that could affect things.

I have read that storage caching could cause this issue, but since I am not using ZFS, that should not be the cause? The VMs use qcow2 as the disk format (I am not sure whether this matters).
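I guess the first thing to check next time is whether the "missing" ~24 GB is actually reclaimable page cache or really held by something. These are just the standard tools I plan to run on the host, nothing Proxmox-specific:

Code:
# rough picture of where host RAM goes (reclaimable cache vs. anonymous memory)
free -h
grep -E 'MemTotal|MemAvailable|Buffers|^Cached|AnonPages|Slab|KernelStack' /proc/meminfo

If MemAvailable stays large while "used" grows, it is mostly cache the kernel can drop; if AnonPages or Slab keep growing, something is really holding the memory.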

More information about the system in the screenshot.

The server is a rented dedicated server from OVH and it came with Proxmox preinstalled with 2 GiB of swap. I am not sure why, because I have understood that Proxmox should work better without swap at all. I was thinking the issue could probably be worked around by increasing swap to 64 GB, but that would be bad for performance... right?

As a workaround, I restart the server once a month, which is fine since some updates require a reboot anyway, but I would like to understand what is causing this issue at the root.

Any ideas?
 

Attachments

  • proxmox-ram-issue.png
You should not overcommit RAM. It just does not work well.

If you need to mitigate that problem a little, you may look at zram; I prefer this over a static swap file:

Code:
~# apt show zram-tools

Description: utilities for working with zram
 zram is a Linux kernel module that allows you to set up compressed
 filesystems in RAM.
 .
 zram-tools uses this module to set up compressed swap space.
 This is useful on systems with low memory or servers
 running a large amount of services with data that's easily swappable
 but that you may wish to swap back fast without sacrificing disk
 bandwidth.
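For reference, a minimal sketch of how it could be set up; the defaults file and variable names (/etc/default/zramswap with ALGO/PERCENT/PRIORITY) are what the Debian package currently ships, so double-check against the package documentation on your host, and treat the values as examples only:

Code:
apt install zram-tools

# /etc/default/zramswap -- example values, adjust to your workload
ALGO=zstd       # compression algorithm
PERCENT=25      # zram device size as a percentage of physical RAM
PRIORITY=100    # prefer zram over any disk-backed swap

systemctl restart zramswap
swapon --show   # verify the compressed swap device is active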

The recommendation is a clearly prioritized list:
  1. max out the technically possible RAM
  2. if #1 is already done --> reduce your (RAM-) load to a sane level, e.g. give each VM the amount of RAM it needs to have - not more!
  3. only if #1 and #2 are at their limits: think about involving zram
 
21 running VMs on a 64 GB RAM machine seems to be too much. I would not even try this. Don't know if this behaviour can really be called strange...
 
@ness1602
Unfortunately the RAM cannot be increased on OVH dedicated servers unless I order a new server and transfer all settings and VMs from the current one. I am not sure whether the latency is small enough to cluster the servers for an easier migration, even if I get the new server from the same datacenter as the current one. About a swap disk: this could be a "fail-safe" solution for sure, but I still think it would not be good for performance.

@UdoB @aderumier @LnxBil
I don't think I am overcommitting / overprovisioning memory: the allocated memory for all running VMs combined is 58.00 GiB, so there should be about 5 GiB left for Proxmox itself, even if all the VMs used 100% of their allocated memory. I might test zram in my dev environment, although I think that is unnecessary if I find the real cause of this issue.
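If it helps, the running-VM allocation can also be double-checked from the CLI; this assumes the usual qm list column order (STATUS in column 3, MEM(MB) in column 4), so check the header on your version first:

Code:
# sum the configured memory of running VMs only
qm list | awk '$3 == "running" { sum += $4 } END { print sum " MB allocated to running VMs" }'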

@GMBauer
Well, I called it strange because I don't understand the reason. I still don't understand it, and I have not seen any reference or document that says how much RAM you should have at minimum per VM, so that one could calculate a theoretical maximum VM count for a server. If you have better options (for example using containers instead?), I am open to suggestions. However, the reason for this many VMs on this server is purely cost-efficiency: I could get a server with 128 GB of RAM, but that is more expensive to run (since I don't own a real datacenter). Also, because my VMs don't really need more than 2-6 GB of RAM (depending on the use case), I don't see a point in allocating more memory to the VMs, nor in leaving half of the RAM unused (or "reserved" for Proxmox). If someone knows how much "overhead" Proxmox needs per VM, I would like to know that.


After all, I am still wondering: what actually uses so much RAM in the system? When I took the picture attached above, the VMs' actual combined RAM usage was not even half of what is allocated, which means Proxmox itself was using much more than the ~5 GiB mentioned above.
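To quantify this, I suppose I could compare the host-side resident size of each KVM process against the configured guest memory; something along these lines (the grep pattern is just to catch the kvm/qemu processes without matching grep itself):

Code:
# resident memory of each QEMU/KVM process, largest first (RSS is in KiB)
ps -eo pid,rss,comm,args --sort=-rss | grep -E '[k]vm|[q]emu' | head -n 25
# the gap between a process's RSS and the VM's "memory:" setting is the per-VM
# QEMU overhead (virtio queues, qcow2 metadata cache, display buffers, etc.)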


edit.
Oh, and the disk settings for all VMs (cache is not used):
proxmox-disk.png
And I am not using PCIe passthrough nor any GPU-related software, except the Proxmox console.
 
Last edited:
Output of grep -r memory /etc/pve/qemu-server/
Code:
/etc/pve/qemu-server/220.conf:memory: 2048
/etc/pve/qemu-server/120.conf:memory: 2048
/etc/pve/qemu-server/110.conf:memory: 4096
/etc/pve/qemu-server/210.conf:memory: 4096
/etc/pve/qemu-server/310.conf:memory: 2048
/etc/pve/qemu-server/350.conf:memory: 2048
/etc/pve/qemu-server/450.conf:memory: 2048
/etc/pve/qemu-server/351.conf:memory: 2048
/etc/pve/qemu-server/250.conf:memory: 2048
/etc/pve/qemu-server/950.conf:memory: 3072
/etc/pve/qemu-server/311.conf:memory: 2048
/etc/pve/qemu-server/312.conf:memory: 2048
/etc/pve/qemu-server/301.conf:memory: 2048
/etc/pve/qemu-server/302.conf:memory: 2048
/etc/pve/qemu-server/302.conf:memory: 3072
/etc/pve/qemu-server/900.conf:memory: 2048
/etc/pve/qemu-server/901.conf:memory: 3072
/etc/pve/qemu-server/902.conf:memory: 3072
/etc/pve/qemu-server/303.conf:memory: 2048
/etc/pve/qemu-server/303.conf:memory: 3072
/etc/pve/qemu-server/404.conf:memory: 2048
/etc/pve/qemu-server/402.conf:memory: 2048
/etc/pve/qemu-server/405.conf:memory: 2048
/etc/pve/qemu-server/406.conf:memory: 2048
/etc/pve/qemu-server/407.conf:memory: 2048
/etc/pve/qemu-server/904.conf:memory: 3072
/etc/pve/qemu-server/304.conf:memory: 2048
/etc/pve/qemu-server/304.conf:memory: 3072
/etc/pve/qemu-server/308.conf:memory: 2048
/etc/pve/qemu-server/309.conf:memory: 2048
/etc/pve/qemu-server/309.conf:memory: 4096
/etc/pve/qemu-server/230.conf:memory: 2048
/etc/pve/qemu-server/408.conf:memory: 2048
/etc/pve/qemu-server/408.conf:memory: 2048
/etc/pve/qemu-server/305.conf:memory: 2048
/etc/pve/qemu-server/305.conf:memory: 3072
/etc/pve/qemu-server/501.conf:memory: 2048
/etc/pve/qemu-server/903.conf:memory: 2048

I don't know why a few virtual machines appear twice in the output, some with an incorrect amount of memory (3072 is not used for any VM at the moment). When I add up the memory allocations from the list above, I get 49152 MB. Unless RAM is used even for VMs that are not running??
 
I don't know why a few virtual machines appear twice in the output, some with an incorrect amount of memory (3072 is not used for any VM at the moment)
Check the relevant .conf files and you will find previous states / snapshots, each containing its own memory setting. Hence grep finds "memory" more than once, with the value from that point in time.
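If you only want the live values and not the snapshot sections, something like this should work; the awk simply stops reading each file at the first [snapshot] header, and the VMID in the qm config example is just one from your list:

Code:
for f in /etc/pve/qemu-server/*.conf; do
  awk -v f="$f" '/^\[/ { exit } /^memory:/ { print f ": " $0 }' "$f"
done

# or per VM, which only prints the current configuration:
qm config 220 | grep ^memory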
 
What version and what kernel?
 
Ok, thank you @gfngfn256! That explains the duplicates.


@SteveITS, the versions are visible in the screenshot in my original post.

Though the Manager Version has been updated since then:

Kernel Version: Linux 6.8.12-10-pve (2025-04-18T07:39Z)
Manager Version: pve-manager/8.4.12/c2ea8261d32a5020

But I am NOT using ZFS, so could this still be a cause of the issue?
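To rule it out on my side, I assume I can simply check whether the ZFS module is loaded at all, and how big the ARC is if it is:

Code:
lsmod | grep zfs || echo "zfs module not loaded"
# if it is loaded, the current ARC size (in bytes) is in arcstats:
awk '$1 == "size" { printf "ARC size: %.1f GiB\n", $3/1024/1024/1024 }' /proc/spl/kstat/zfs/arcstats 2>/dev/null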
 
Last edited: