PVE showing high memory usage but VM is not

oah433

Member
Apr 8, 2021
Hi,
Simply put, I have the following scenario:
I am running PVE 6.3.2 and have a VM with 16 GB of RAM that all of a sudden started consuming all of it, which leads to the machine hanging. So we increased the RAM to 32 GB, but nothing really changed. The problem is that the VM is not running any serious tasks. The images below tell it all.

This is the RAM usage as reported by the PVE admin panel:

[Screenshot: 1660122855670.png]

The VM is CentOS and shows this:
[Screenshot: 1660122788694.png]


So the VM internally is reporting 1.59 GB of RAM usage while the PVE panel is showing 17 GB. How can I find out what is hogging the RAM, and any ideas on how to start debugging it?


Thx.
 

Please look at the yellow bar in htop. There are a ton of threads here about supposedly wrong RAM usage.
It's just the cache. The guest caches, shows it as cached, and the host doesn't know about it. Drop the caches and check the usage afterwards.
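For example, inside the guest you can confirm how much of that "used" memory is really just page cache (a quick sketch; the column name is from a typical CentOS/EL install):

Code:
# Inside the guest: the buff/cache column is memory the kernel gives back
# under pressure, even though the host counts it as used.
free -h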
 
I just dropped the caches using:

Code:
echo 3 > /proc/sys/vm/drop_caches

and this is what I got (both images below are after dropping the caches):

[Screenshot: 1660124772671.png]

I can see a few gigabytes have been shaved off the RAM, but it is still a lot. Any ideas on where to go next? Update to PVE 7 or something similar?

[Screenshot: 1660124815799.png]
 
That your VM is using a lot of RAM for "no reason" could be indicative of another problem with your VM setup, but it is not the root cause here.

By default, through the ballooning driver, the VM will continually map new RAM whenever it needs it, and internally it will "release" the memory again when it is no longer in use. However, your host might not actually reclaim that memory until it needs it; by default, that threshold is 80%.
So your host should only start reclaiming memory once it has less than 20% of its total memory free.
Constantly releasing and reassigning memory to the VM wouldn't make much sense anyway when the host simply doesn't need it at the moment.
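If you want to see what the ballooning driver itself reports for a given guest, you can ask QEMU through the monitor (a sketch; VMID 100 is just a placeholder):

Code:
# On the host; replace 100 with your VM's ID.
qm monitor 100
# then, at the qm> prompt, query the balloon device:
info balloon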

Solely going off the impressions I got from more recent posts on the forum here, I don't think this reporting problem shows up in newer versions. The newer PVE 7+ versions might have changed to take the "true" usage data reported by the ballooning driver into account.
However, please take this with a grain of salt; to be sure, one would have to test it.

Still, updating has a number of other benefits and improves the security of your system. So you should definitely consider doing that, and if you do, feel free to report back whether the problem persists.
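If you go that route, the checklist script that ships with PVE 6.4 can flag known upgrade blockers beforehand; a sketch (you would first have to update within the 6.x line to get it):

Code:
# On the host, after updating to the latest 6.4 packages:
pve6to7 --full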
 
And as far as I understand it, the host won't reclaim RAM the guest isn't using when ballooning is enabled but the minimum RAM is not set lower than the maximum RAM, right?
 
I just tested this a little, and the "Minimum Memory" setting does not interfere with memory reclaiming by the host when it is equal to the assigned RAM. When the host uses more than 80% of its memory and the VM does not use it, the memory will still be reclaimed, independent of the minimum.

From what I've seen, the minimum setting allows the host (or QEMU, perhaps) to dynamically change the amount of RAM the machine currently has available. This means that when the host is at capacity, it will slowly decrease the amount, approaching the minimum setting, and when the host is mostly free, it will increase the amount again up to the maximum.
The magical 80% threshold seems to apply here as well.
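For reference, that "Minimum Memory" value is the balloon setting in the VM config; a sketch of how it would look on the CLI for a hypothetical VMID 100 with 32 GiB assigned and a 16 GiB minimum:

Code:
# Hypothetical VMID and values; memory sizes are in MiB.
qm set 100 --memory 32768 --balloon 16384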
 
Yes, but the problem is that in two years I have never seen the KVM process release its reserved RAM. Not even with the node's RAM usage above 90%, no matter whether the guest is using that RAM or not, and with ballooning enabled. I'm referring to the "RES" column in htop, where the value of a KVM process can only grow but never shrink without stopping the VM.
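(For what it's worth, a sketch of how to watch that from the host without htop; it assumes the PVE guest processes are named kvm:)

Code:
# Resident set size (RSS, in KiB) of every kvm process; the -id argument
# in the command line identifies the VM.
ps -C kvm -o pid,rss,args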
 
I got your point. I will go with the update option and report back the progress.

Thx a ton.
 
I gave it yet another quick spin while monitoring the memory usage, and here the memory usage drops, both in the virtual machine and on the host, as reported for the QEMU process in the RES column in htop.

One thing which took me off guard, though, was that the calculation for "total" RAM also takes the swap space into account. So if the RAM is at nearly 100%, but the overall usage is still less than 80% of RAM size + swap size, then the QEMU process, and subsequently the VM, will not free any of its reserved memory, even though in the GUI it kind of looks like it should.
However, there still might be other things influencing this, so there might be another reason or unknown interaction for why VM processes do not release memory in your case.
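A rough sketch of checking that ratio on the host, assuming the threshold really is computed against RAM + swap as described (field positions follow the output of free -b on a Debian/PVE host):

Code:
# Print host usage as a percentage of RAM + swap.
free -b | awk '
  /^Mem:/  { mem_total=$2;  mem_used=$3 }
  /^Swap:/ { swap_total=$2; swap_used=$3 }
  END { printf "%.1f%% of RAM+swap used\n", 100*(mem_used+swap_used)/(mem_total+swap_total) }'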
 
Ah, that's interesting and could be a problem. The node has 64 GB RAM + 64 GB swap. Let's say I'm at 59 of 64 GB RAM and 1 of 64 GB swap usage, so for PVE only about 60 of 128 GB memory is used, and PVE won't reclaim the unused RAM because for PVE it's just around 50% and not at 95%?
Because the RAM stealing by ballooning still starts when RAM utilization gets above 80%, not just when RAM+swap utilization is over 80%.
 
