Proxmox reports high memory usage where VM reports low usage

shalak

Hi!

I'm using Proxmox 7.0. It reports that my openmediavault VM uses 95.32% (7.63 GiB of 8.00 GiB) of RAM, while OMV itself claims to be using only 229M of 7.77G.

OMV reports:

[Screenshot: OMV dashboard showing low memory usage]
This is confirmed by:
Code:
# free -h
              total        used        free      shared  buff/cache   available
Mem:          7.8Gi       213Mi       151Mi        13Mi       7.4Gi       7.3Gi
Swap:           9Gi       0.0Ki         9Gi

However, Proxmox claims that memory is running out:
[Screenshot: Proxmox VM summary showing high memory usage]


With the other VM, running Ubuntu Server, there's also a mismatch: Proxmox reports 20.38% (3.26 GiB of 16.00 GiB), while the OS claims 2.1Gi used; still worrying, albeit not as much.

What's going on? How can I diagnose the issue?
 
From my observation, the usage on the node summary tallies all allocated RAM regardless of whether the VM is actually utilising it, even with the balloon driver enabled. This is a good thing in my opinion.
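If you want to compare the node's memory usage against what the VMs are allowed to use, something like this gives a rough picture (a sketch; the awk column position assumes qm list's default output):

Code:
# Configured (maximum) memory per VM, in MB -- roughly what gets tallied as allocated
qm list

# Sum the configured memory across all VMs on this node (MEM(MB) is the 4th column)
qm list | awk 'NR>1 {sum += $4} END {print sum " MB allocated"}'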
 

Why doesn't it happen for the other VM then? This is weird. How would I know if I need to allocate more RAM for this particular VM? Doesn't sound right :/
 
Inside the VM, the filesystem cache is not counted as used because it can be released at a moment's notice to make room for other stuff. However, from outside the VM, this cache memory is counted as used because it actually is in use and cannot be released/reused by the host.
The 7.4Gi of buff/cache that you showed is therefore not counted as used by OMV but is counted as used by Proxmox.
If you do not use PCI passthrough for that VM, you could use ballooning to allow the host to reuse such memory from the VM.
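You can see the difference directly from inside the guest; a sketch (the drop_caches line is for testing only):

Code:
# Inside the guest: most of the "used" memory is page cache the guest can drop at any time
grep -E 'MemTotal|MemFree|MemAvailable|^Cached' /proc/meminfo

# For testing only: drop the page cache. The guest's free memory goes up, but the host
# keeps counting those pages as used until the balloon driver hands them back.
sync && echo 3 > /proc/sys/vm/drop_caches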
 
The memory is already set to use Ballooning:

[Screenshot: VM memory settings with ballooning enabled]

I don't use PCI passthrough, although I do have a VirtIO hard disk there, which is a volume from my RAID controller that I have passed directly to the VM. Is that the reason? The other VM does not have such a hard disk...
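(Side note, as a hedged aside: ballooning only works if the guest has the virtio balloon driver loaded in addition to the Proxmox-side setting. VMID 100 is an example.)

Code:
# Inside the OMV guest: the virtio balloon driver must be loaded
lsmod | grep virtio_balloon

# On the Proxmox host: confirm the VM's memory/balloon configuration
qm config 100 | grep -E '^(memory|balloon|shares):'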
 
Ballooning being enabled isn't enough: if you don't set a "Minimum memory" value that is lower than your "Memory" value, the host won't lower the amount of RAM the guest is able to use. I'm not totally sure, but I think the host will only free up the guest's cache by forcing the guest to free it. So let's say you've got 500MB of apps running and 7500MB of cache. If you set "Minimum memory" to, for example, 4GB and your host's RAM usage exceeds 80%, it will slowly start to reduce the guest's RAM from 8 to 4GB. Because the guest now wants to use more RAM than it actually has, it is forced to free things up and will throw away (or swap out) the cache first. But make sure you don't set that "Minimum memory" too low: if there is no cached RAM left to free up, the host won't stop lowering the guest's RAM, and the guest will start to OOM and kill processes.
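For reference, the same combination can be set from the CLI; a sketch with example values (8 GiB maximum, 4 GiB minimum, VMID 100 assumed):

Code:
# Maximum memory 8192 MB, balloon target / minimum memory 4096 MB
qm set 100 --memory 8192 --balloon 4096

# Setting --balloon 0 would disable ballooning for the VM entirely
# qm set 100 --balloon 0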
As for the disk: then your RAM shouldn't be pinned. If you don't use PCI passthrough but used the "qm set" command for pseudo-passthrough, that's still normal virtualization and not "directly passed to the VM". It's just like a normal virtual disk, except that this virtual disk isn't stored as a file or LV/zvol but is read from and written to the actual disk itself as a block device. That's, for example, why SMART doesn't work inside the guest and why your passed-through disk uses a 512B LBA even if the physical disk uses a 4K LBA. So you still get the additional abstraction and virtualization overhead.
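For context, this is roughly what that pseudo-passthrough looks like on the CLI (the VMID and disk path are examples):

Code:
# Attach a whole block device to VMID 100 as a virtual SCSI disk (not PCI passthrough)
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL

# SMART only works against the real device on the host, not inside the guest
smartctl -a /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL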
 

The other VM has a similar config, with Minimum memory equal to the Memory value:
[Screenshot: the other VM's memory settings]

But it still shows proper memory usage :(

Huh, this opens a new can of worms. I did use the "qm set" approach. So it would be better for me to use PCI passthrough for this config? I want the VM to handle all the storage...
 
Is that maybe a Win VM? This is what I have observed:
For Linux, PVE will count "cached" RAM of the guest as used on the host.
For Windows, it will count "cached" RAM of the guest as "free" on the host.
For Unix (or at least my OPNsense, which is based on FreeBSD), the host won't show any useful RAM data at all, because the FreeBSD qemu-guest-agent implementation doesn't report detailed RAM usage, so PVE just shows the RAM usage of the KVM process. But that KVM process always reserves 100% of the memory, even if the guest has 90% free (yes, really free, not available or somehow cached) memory.
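To see exactly which numbers PVE has for a guest (and whether they come from the balloon driver at all), a sketch assuming VMID 100:

Code:
# Detailed memory and balloon statistics as seen by the host
qm status 100 --verbose

# The same data over the API; replace <nodename> with your actual node name
pvesh get /nodes/<nodename>/qemu/100/status/current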
And regarding passthrough: if you want the VM to have direct physical access to the physical disks, so you don't get that overhead and maybe unwanted abstraction, then yes, PCI passthrough is the only option.
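For completeness, passing a storage controller through would look roughly like this (a sketch; the PCI address is an example and the controller must be in a suitable IOMMU group):

Code:
# Find the controller's PCI address
lspci | grep -i -E 'raid|sas|sata'

# Pass the whole controller through to VMID 100 (example address)
qm set 100 -hostpci0 01:00.0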
 

The one that shows the wrong values is the Debian-based Linux, openmediavault. The one that shows proper values is Ubuntu Server.

What I want most is to be able to access the files on this volume directly on Proxmox in case the openmediavault VM crashes, or even from a live distro in case the whole Proxmox host won't start. I observed that I'm able to mount the volume in Proxmox and access the files. How about the live-distro situation? Will I be able to? On the other hand, how big are the overhead and the abstraction?

I believe PCI passthrough is not possible in my case, because I would have to pass the whole RAID controller to the VM, right? And this controller also serves another RAID volume on which Proxmox itself is installed. Is my understanding correct?
 
Regarding mounting the volume on the host: I observed some strange incompatibilities here. For example, I've got 6 HDDs that I attached with "qm set" as SCSI. On the host, the first 3 HDDs are shown with an existing partition; the last 3 are shown without partitions (using both lsblk and fdisk). All 6 HDDs are pseudo-passed through to one Debian 10 VM. Inside that VM it's the opposite: the first three HDDs are shown without a partition, but I can see the partitions of the last 3 HDDs. So in my case it's not possible to open 3 of the drives directly from the host. If I run fsck on the host, it complains about problems with the filesystem and (if I remember right) wrong inodes and misalignment. From inside the guest everything was fine. So I'm not sure how much you can trust it if you still want to access the drives from outside the VM.
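If you want to check how the host and the guest each see such a disk, comparing both views is a quick sanity check (device names are examples):

Code:
# Run on the Proxmox host and again inside the guest -- the output should match
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdb
blkid /dev/sdb*

# Read-only filesystem check from the host; never run a repairing fsck while the
# VM is running with the filesystem mounted
fsck -n /dev/sdb1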
As for the overhead: my benchmarks didn't show a huge difference, maybe a 10 or 20% performance loss, but I guess that also depends on the type of workload.
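If you want to measure it yourself, a non-destructive read benchmark with fio is one option; a sketch (the device name is an example, and results are only comparable if nothing else is using the disk):

Code:
# Sequential read benchmark: run once against the disk inside the guest and once
# against the raw volume on the host, then compare the reported bandwidth
fio --name=seqread --filename=/dev/sdb --rw=read --bs=1M --direct=1 \
    --runtime=60 --time_based --readonly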
Yes, your understanding is correct; in that case it isn't possible unless you buy an additional RAID controller (or an HBA if you want stuff like ZFS). You can only pass through the complete controller with all drives attached to it, and if that controller is onboard, it is often not possible to pass it through at all, for example if the onboard controller sits in the same IOMMU group as some other onboard device.
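You can check the IOMMU grouping yourself; if the controller shares a group with other onboard devices, it can't be handed over in isolation (a sketch):

Code:
# List every device per IOMMU group; the controller should sit in its own group
# (or only with devices you are also willing to hand over to the VM)
find /sys/kernel/iommu_groups/ -type l | sort -V

# Cross-reference the PCI addresses with their descriptions
lspci -nn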
 
