99% host memory in use on one kvm process

creamers

84% of memory is in use by a kvm process on the host, while the two VMs only use around 17G of RAM combined.
Who knows what is going on?

The two VMs show 10GB and 6GB in use.
(screenshots: vm1.png, vm2.png)


The host itself shows 61GB in use?
(screenshot: host.png)

free -h on the host shows the same:
(screenshot: free on host.png)


Here is the strange part: a kvm process that uses 84% of the memory.
(screenshot: top.png)

On host:

Code:
# zpool status
no pools available

# zfs list
no datasets available
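A quick way to see which processes actually hold the resident memory on the host is to sort by RSS (standard procps ps, nothing Proxmox-specific - the kvm processes should show up at the top):

Code:
# ps -eo pid,rss,cmd --sort=-rss | head -n 5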
 
I think it's the balloon function reserving the RAM that COULD be used by the VMs... except, does it also release RAM back to the host itself?
In other words, can assigning too much RAM to the VMs (without the VMs really using it) become a problem for the host?
Or will Proxmox always 'protect' itself when it needs resources?
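To see what is actually configured for a VM, the config can be dumped with qm (100 is just a placeholder VMID here); the memory: line is the assigned maximum and the balloon: line, if present, shows the ballooning setting (balloon: 0 disables the balloon device entirely):

Code:
# qm config 100 | grep -Ei '^(memory|balloon)'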
 
I had noticed the same problem with my server too - one Windows VM's KVM process is using all of the available host RAM + swap as well. A VM restart won't help, a full VM shutdown is required. Attaching the daily RAM usage - you can see the huge RAM usage drop after stopping/starting that VM. I'm not sure how to diagnose this further...

(attachment: daily RAM usage graph)
 
No, I don't use ballooning, but the Balloon Service is running among the Windows VM's services.
(attachment: screenshot of Windows services)
 
Have you enabled caching on the virtual disk? That could also be the culprit. If you have caching enabled, then I/O is cached on the host and that needs RAM (and speeds everything up a lot). The VM sees incredible performance.
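The cache mode shows up as a cache= option on the disk line of the VM config (again, 100 is a placeholder VMID; if no cache= option is present, PVE uses its default of no cache):

Code:
# qm config 100 | grep -E '^(ide|sata|scsi|virtio)[0-9]+:'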
 
I also still have the issue on my Debian VM using a lot of buff/cache and not releasing it. Proxmox thinks I use ~80%, but in reality it's all buff/cache.
Some say to actively release some of it (see the sketch below the free output), but I'm looking for a more definitive solution.

Code:
# free -h
              total        used        free      shared  buff/cache   available
Mem:          7.8Gi       2.2Gi       636Mi        80Mi       4.9Gi       6.6Gi
Swap:         8.0Gi       0.0Ki       8.0Gi
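For reference, the "active release" usually means the kernel's drop_caches knob, run as root inside the guest; writing 1 drops only the page cache, 3 also drops dentries and inodes. It only throws away clean cache, which will simply be rebuilt afterwards:

Code:
# sync
# echo 3 > /proc/sys/vm/drop_caches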
 
I also still have the issue on my Debian VM using a lot of buff/cache and not releasing it.

Why should it? Caching is what OSes do to present you with a blazing fast computing experience.

Proxmox thinks I use ~80% but in reality it's all buff/cache.

... and PVE is right, it is indeed used (as your free output shows).

Some say to actively release some of it, but I'm looking for a more definitive solution.

Yes, flushing all caches frees memory, but your machine will be extremely slow after that until everything that is needed is cached again. There is no point in flushing the cache.

The only way to "not use" so much memory is to give your VM less memory. If you have free memory in your machine (VM or not) while it is working (e.g. for a week or so), then you have too much memory. You have to monitor the cache hit ratio. The theoretical optimum is that you serve most, if not all, hits directly from the cache and all entries are "hot" or frequently used (so the cache is not too big).
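If you want to watch this inside the guest over time before cutting its memory, periodic samples are enough; for example (vmstat from procps, sar from the sysstat package, both assumed to be installed) - watch whether the cache column keeps growing while free stays comfortable:

Code:
# vmstat -S M 60
# sar -r 60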
 
I have the exact same setup on another server where it only uses around 2GB/8GB of RAM, and I don't see the buff/cache use.
I mean, there is still 6GB of RAM available, why does the VM not use that instead of the buff/cache stuff?
I don't see why it needs the 4.9Gi in buff/cache if there is still 7.8-2.2=5.6GB available?
And if it's going to use the 6GB of RAM including the cache, what will Proxmox show then? I probably miss a piece of the puzzle ;) Hope you @LnxBil can explain some more ;)

Btw, I see someone else has reported the exact same thing: https://forum.proxmox.com/threads/virtual-machine-ram-used-95.70279/
 
This discussion is no longer related to the original thread title, but to a special case in Linux. I hope it is somehow understandable, but it goes deep into the memory architecture of virtualized systems:

First, the memory usage that is displayed in PVE can, but does not have to, be related to the actual memory usage inside of the VM. It cannot and will not be the same (depending on the guest OS). The VM claims and blocks the memory on start, so all memory is already reserved for the VM. If you do not have enough memory left, the VM will not start and will yield 'failed to initialize KVM: Cannot allocate memory'. So, the memory is already fully used from the point of view of the hypervisor (PVE). Whether the VM actually allocates or uses its memory depends on the guest OS. That said, every non-Windows operating system I've ever seen uses the term "free memory" only for memory that is really free. Free memory is the worst that can happen to your OS, because you don't use it and it is therefore useless. Every OS caches every file it has read until the cache is full. If a program needs memory, the cache is evicted according to its eviction strategy, mostly least recently used (LRU) or some more advanced scheme like in ZFS, but it will be evicted so that the requested memory allocation can be satisfied.

The memory usage as seen from the hypervisor is very similar to the storage usage: if you delete a file on your storage, it is marked as deleted but the blocks themselves are not freed, so the hypervisor still sees the same storage usage, even though you just deleted a 1 GB file. The same is true for the memory itself. Just because you flushed your cache in your guest OS and the memory is technically freed, the hypervisor still sees the allocated memory pages with their former content. Only if there is some communication or freeing inside of your guest (also depending on the guest) will the memory also be seen as free from the outside. For Windows guests with the virtio drivers and ballooning active, the actual free memory value is reported and displayed for the guest, just for the convenience of the viewer, even though the VM actually uses much more hypervisor memory.
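As a side note: if the guest has the virtio balloon device, what the balloon driver reports to the hypervisor can be inspected through the QEMU monitor - a sketch with 100 as a placeholder VMID, where info balloon is typed at the monitor prompt and prints the balloon's current "actual" size:

Code:
# qm monitor 100
info balloon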

Imagine what could be done if guest OSes were 100% virtualization-aware: dynamic memory (ballooning), CPU hotplug, etc., everything on demand...
 
@LnxBil thanks for the detailed explanations.
In my case I don't worry about what memory is reported in the guest (Windows in my case). I'm following the memory usage on the host side, and I see huge usage by the kvm process for this particular VM. I've been running VMs for years on VMware ESXi and since this year on Proxmox, and I have never seen this behavior before.
So, it's been almost two weeks since I restarted (a full power off, to completely stop/start the particular kvm process) that Windows VM with the issue, and I'm not seeing the issue anymore. So it looks like some edge case where there is a memory leak in this kvm process. I will report back if I see it once more...
 
