[SOLVED] Memory usage in host summary keeps increasing but in guest OS stays normal

castielsn

New Member
Oct 9, 2022
Hello Everyone,

A few days ago I installed Proxmox on an old Lenovo laptop (3rd-gen i5, 8 GB RAM, 240 GB SSD) and installed OpenMediaVault 6 as a guest VM, gave it 2 cores and 4 GB of RAM, and set up a simple SMB share with an external HDD passed through to the OMV VM.

Since I'm new to Proxmox, I'm not entirely sure whether this is supposed to be normal behavior, but for me it just doesn't "feel right": when I start the OMV VM, memory usage shows the same amount, about 400 MB, in both the OS's web UI and in the Proxmox web UI under the summary section. However, in the Proxmox web UI the memory usage slowly and quite consistently increases, so that after about 3 days it hits 2 GB (while in the guest VM it stays at 400 MB). As soon as I restart the guest, memory usage goes back to normal but then starts to increase in the same way as before. Everything is up to date. I even set up a second OMV 6 VM and the same thing happens there as well.

I found threads about ballooning and KSM, but I'm not really sure whether this strange behavior has anything to do with either of those two concepts. I also read in a Reddit post that Linux VMs tend to use up all the vMemory they are given, and that it is okay for the Proxmox summary to show different memory usage than the guest OS itself because the Proxmox host is doing some caching as well. Can someone please clarify memory usage for me? Is this really normal behavior? And if yes, is there some built-in threshold at which this increase stops? Like at 80% or something?

Thank you, in advance, for any clarification on this issue!
 
All filesystems will use all RAM available to them for caching if that RAM isn't used for something else. So it's totally normal that an OS (host and/or guest) will utilize nearly all of its RAM, because free RAM is wasted performance.
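A quick way to watch this page caching at work on any Linux box (the file path below is just a placeholder, use any large file that isn't already cached):

Code:
# note the buff/cache column before and after the read
free -h | grep Mem
cat /path/to/some/large/file > /dev/null   # placeholder path
free -h | grep Mem   # buff/cache has grown: that RAM now caches the file's data
# a second read of the same file is served from RAM and finishes much faster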

What do free -h and arc_summary report when run on your host?
 
Thank you, Dunuin, for the explanation. I thought that the memory usage section in the Proxmox web UI was supposed to show the same amount of memory used as what the VM's OS indicates (this seemed logical to me). Isn't it a bit strange, though, that by simply looking at the summary section of a VM one is not able to see how much memory the guest OS is actually using? I just can't seem to wrap my head around this... I mean, sure, if there is some process that uses a lot of memory, then of course I understand that RAM usage goes up, but when a system is basically sitting idle, I don't understand why RAM usage increases. Well, I guess I do now that you explained that it is using up almost all the available RAM for caching. I need to study caching in more detail... Also, I think I'm going to have to rethink how much vMemory I should allocate to VMs. I think I'm just going to give the OMV VM 1 GB, or maybe 1.5, instead of 4.
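For reference, a rough, untested sketch of how that reallocation could be done from the PVE shell, assuming the OMV VM has VMID 100 (the VMID and values are just examples):

Code:
# set the VM to 1.5 GiB of RAM (value in MiB); typically applies on the next full stop/start
qm set 100 --memory 1536
# optional: let the balloon driver shrink the guest down to 1 GiB when the host needs RAM
qm set 100 --balloon 1024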

free -h returned:
Code:
               total        used        free      shared  buff/cache   available
Mem:           7.6Gi       2.6Gi       4.5Gi        46Mi       570Mi       4.8Gi
Swap:          7.0Gi          0B       7.0Gi

and arc_summary:

Code:
------------------------------------------------------------------------
ZFS Subsystem Report                            Sun Oct 09 20:29:32 2022
Linux 5.15.60-1-pve                                           2.1.5-pve1
Machine: proxmox (x86_64)                                     2.1.5-pve1

ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                   < 0.1 %    1.2 KiB
        Target size (adaptive):                         6.2 %  244.2 MiB
        Min size (hard limit):                          6.2 %  244.2 MiB
        Max size (high water):                           16:1    3.8 GiB
        Most Frequently Used (MFU) cache size:            n/a    0 Bytes
        Most Recently Used (MRU) cache size:              n/a    0 Bytes
        Metadata cache size (hard limit):              75.0 %    2.9 GiB
        Metadata cache size (current):                < 0.1 %    1.2 KiB
        Dnode cache size (hard limit):                 10.0 %  293.0 MiB
        Dnode cache size (current):                     0.0 %    0 Bytes

ARC hash breakdown:
        Elements max:                                                  0
        Elements current:                                 n/a          0
        Collisions:                                                    0
        Chain max:                                                     0
        Chains:                                                        0

ARC misc:
        Deleted:                                                       0
        Mutex misses:                                                  0
        Eviction skips:                                                0
        Eviction skips due to L2 writes:                               0
        L2 cached evictions:                                     0 Bytes
        L2 eligible evictions:                                   0 Bytes
        L2 eligible MFU evictions:                        n/a    0 Bytes
        L2 eligible MRU evictions:                        n/a    0 Bytes
        L2 ineligible evictions:                                 0 Bytes

ARC total accesses (hits + misses):                                    0
        Cache hit ratio:                                  n/a          0
        Cache miss ratio:                                 n/a          0
        Actual hit ratio (MFU + MRU hits):                n/a          0
        Data demand efficiency:                           n/a          0
        Data prefetch efficiency:                         n/a          0

Cache hits by cache type:
        Most frequently used (MFU):                       n/a          0
        Most recently used (MRU):                         n/a          0
        Most frequently used (MFU) ghost:                 n/a          0
        Most recently used (MRU) ghost:                   n/a          0
        Anonymously used:                                 n/a          0

Cache hits by data type:
        Demand data:                                      n/a          0
        Demand prefetch data:                             n/a          0
        Demand metadata:                                  n/a          0
        Demand prefetch metadata:                         n/a          0

Cache misses by data type:
        Demand data:                                      n/a          0
        Demand prefetch data:                             n/a          0
        Demand metadata:                                  n/a          0
        Demand prefetch metadata:                         n/a          0

DMU prefetch efficiency:                                               0
        Hit ratio:                                        n/a          0
        Miss ratio:                                       n/a          0

L2ARC not detected, skipping section

VDEV cache disabled, skipping section

ZIL committed transactions:                                            0
        Commit requests:                                               0
        Flushes to stable storage:                                     0
        Transactions to SLOG storage pool:            0 Bytes          0
        Transactions to non-SLOG storage pool:        0 Bytes          0
 
Thank you, Dunuin, for the explanation. I thought that the memory usage section in the Proxmox web UI was supposed to show the same amount of memory used as what the VM's OS indicates (this seemed logical to me). Isn't it a bit strange, though, that by simply looking at the summary section of a VM one is not able to see how much memory the guest OS is actually using?
No, not really. PVE can't know what the guest OS is doing. The only information PVE gets from the guest OS is the total and free memory size, and that only if you set up the QEMU guest agent. So PVE has no idea what that RAM is used for; it can't differentiate between virtual RAM used for caching and virtual RAM used by processes.
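For example, whether the agent is configured and reachable can be checked from the host roughly like this (VMID 100 is only a placeholder):

Code:
# is the QEMU guest agent enabled in the VM config?
qm config 100 | grep agent
# does the agent answer? (needs qemu-guest-agent installed and running inside the guest)
qm agent 100 ping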
I just can't seem to wrap my head around this... I mean, sure, if there is some process that uses a lot of memory, then of course I understand that RAM usage goes up, but when a system is basically sitting idle, I don't understand why RAM usage increases. Well, I guess I do now that you explained that it is using up almost all the available RAM for caching. I need to study caching in more detail... Also, I think I'm going to have to rethink how much vMemory I should allocate to VMs. I think I'm just going to give the OMV VM 1 GB, or maybe 1.5, instead of 4.
Your VM is using virtual RAM, not physical RAM. And the KVM process virtualizing your VM needs RAM too, so there is overhead, and a 4 GB VM might actually use even more, for example 5 GB of physical RAM. Then there could be RAM fragmentation. And in case you choose any disk caching mode other than "none" for your VM, for example "writeback", all writes to the virtual disk will additionally be cached in the PVE node's RAM too. In that case a 4 GB VM could, for example, use something like 7 or 8 GB of physical RAM. And then there is the page cache of Linux itself, where anything read from disk will be cached in otherwise free RAM, so the next read of the same data is faster because it can be served from fast RAM instead of the slow disk.
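To rule out double caching from the disk cache mode, you can check and change it per virtual disk; a sketch, assuming VMID 100 and a scsi0 disk on local-lvm (copy the exact disk string from qm config first, these names are only examples):

Code:
# show the current virtual disk definitions, including any cache= option
qm config 100 | grep -E '^(scsi|virtio|sata|ide)[0-9]'
# re-set the disk with cache=none; keep the rest of the line exactly as qm config printed it
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none
# the new cache mode applies after the VM has been fully stopped and started again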
free -h returned:
Code:
               total        used        free      shared  buff/cache   available
Mem:           7.6Gi       2.6Gi       4.5Gi        46Mi       570Mi       4.8Gi
Swap:          7.0Gi          0B       7.0Gi
So according to this, 2.6 GB are used by processes, 570 MB are used for caching, and 4.5 GB are free. "Available" is 4.8 GB, because PVE could easily drop the cache if that 570 MB of RAM were needed for something else. As long as there is always some "available" RAM and you are not heavily swapping, all is fine.
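If you want to keep an eye on that over time, standard tools on the host are enough, for example:

Code:
# MemAvailable is what free -h reports as "available"
grep -E 'MemTotal|MemAvailable' /proc/meminfo
# print memory and swap activity every 5 seconds; constantly non-zero si/so columns mean heavy swapping
vmstat 5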

You also might want to have a look at: https://www.linuxatemyram.com/
 
Dunuin, thank you for the detailed explanation. I see now that memory usage in a hypervisor context is more complex than I thought. (So this is why people often have 128 GB+ of RAM in their DIY homelab servers; it seems the more RAM there is, the better...) Thank you for the link, too, it's very enlightening! I guess I can mark this thread as solved.
 
Yup, RAM is usually the first thing that runs out, then disk space or PCIe slots. I've got 112 GB of RAM and would still need 64 GB more to be able to run all the VMs I would like to. And the CPU is usually only at 4-7% utilization, because nearly all VMs are idling all the time. Running LXCs would be more resource efficient, as an LXC shares the kernel with the host, but you also run into more problems and get lower security because of the weaker isolation.
 
