RAM reporting issue on multiple hosts

scopic88

Sep 12, 2025
Hi,

I am running into an issue.
Running PVE 9.1.7 on a Supermicro machine with 2x Xeon Silver and 128 GB RAM.

I have 3 guests (all Ubuntu):
1 - 8 GB RAM, using 3.02 GB
2 - 84 GB RAM, using 2.82 GB
3 - 16 GB RAM, using 3.97 GB

The host keeps showing 94.69 GiB used out of 125.05 GiB!

All guest agents are installed and reporting to the host.

Can someone point me in the right direction?

Thanks
 
Hi scopic88,
welcome to the forum. :)

RAM utilization depends a little bit on the guest configuration.
Can you elaborate on what RAM configuration is in use for the guests?
(VM or LXC - KSM enabled or not - ballooning devices in use, and if so, the ballooning configuration - is NUMA in use?)

And maybe point out your exact issue - or is it more a question of general understanding?

BR, Lucas
 
What are you defining as "using" for the 2.82 GB?

PVE will show the total memory used by the VM process. If you've assigned 84 GB, the VM could use all of that for caching, ARC, lots of things. It could use it and then stop using it.
 
Hey,
This is actually pretty normal on Proxmox 9 (especially 9.1.7) and catches a lot of people off guard.

The high host RAM usage (~94GB out of 125) is what QEMU/KVM has reserved/allocated on the host side for your VMs — basically close to the full 108GB you assigned across the three guests plus overhead. The low numbers you see on the VM summaries (3GB, 2.8GB, 4GB) are just the guest agent reporting actual usage inside the VM.
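To make the mismatch concrete, here is the back-of-the-envelope math (a sketch using the rounded numbers from this thread):

```shell
# Rough numbers from this thread (GiB). The host-side figure tracks what
# QEMU has allocated for the VMs, not what the guests report as "used"
# internally via the guest agent.
assigned=$((8 + 84 + 16))   # total RAM assigned across the three guests
echo "assigned to guests: ${assigned} GiB"
echo "host summary shows: ~94 GiB used of 125 GiB"
```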

Ballooning (the part that lets the VMs hand unused RAM back to the host) usually doesn't kick in aggressively until the host is around 78-80% used, so right now it's not doing much.
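A quick sanity check on that threshold, assuming the roughly-80% figure applies here (numbers from this thread, rounded down):

```shell
# Hypothetical check: auto-ballooning is said to engage only once host
# memory usage crosses ~80% of total RAM.
total_gib=125
used_gib=94
threshold_gib=$((total_gib * 80 / 100))
echo "ballooning threshold: ~${threshold_gib} GiB; current usage: ${used_gib} GiB"
```

At ~94 GiB used against a ~100 GiB threshold, the host hasn't crossed the line yet, which is why ballooning looks inactive.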

Quick things to check/fix:
Go into each VM under Hardware, Memory (Advanced) and make sure the Ballooning Device is enabled. On the big 84GB VM especially, drop the Minimum memory to something more reasonable, like 20-32GB, if it doesn't constantly need the full amount.

After that, test ballooning from the Proxmox shell: run `qm monitor <VMID>`, then type `info balloon`.

Also inside each guest, if Linux-based, run `lsmod | grep balloon` just to confirm the virtio_balloon driver is loaded.

Once it's working the host usage should come down a lot.

Sometimes ZFS ARC on the host eats a ton of free RAM too, worth checking with free -h. It shows as buff/cache.

There is a difference between free RAM, Unused RAM, and Available RAM. If ARC is using a lot of RAM, it is speeding up reads and writes. So it is good for the RAM to be used.
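The distinction can be read straight off free(1)'s columns. A minimal sketch, using sample numbers as a stand-in for live output:

```shell
# "free" is memory nobody has touched; "available" adds back the page
# cache the kernel could reclaim on demand. Sample row (hypothetical
# values, same column order as `free -h`):
sample='Mem: 21Gi 10Gi 8.0Gi 54Mi 3.5Gi 10Gi'
echo "$sample" | awk '{print "free:", $4, "| buff/cache:", $6, "| available:", $7}'
```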
 
It shows as buff/cache
Are you sure?
Bash:
# free -h
               total        used        free      shared  buff/cache   available
Mem:            21Gi        10Gi       8.0Gi        54Mi       3.5Gi        10Gi

# arc_summary -s arc | grep "Current size:"
Current size:                                 100.0 %    8.0 GiB
 
Hi Impact,

Here are my results:
Bash:
# free -h
               total        used        free      shared  buff/cache   available
Mem:           125Gi        99Gi        25Gi        74Mi       1.4Gi        26Gi
Swap:             0B          0B          0B

# arc_summary -s arc | grep "Current size:"
Current size:                                 < 0.1 %    11.2 KiB
Hi Lucas,

Here:
Guest 31001 (Ubuntu)
  • 48 (2 sockets, 24 cores), x86-64-v2-AES, NUMA=1, vCPU=2
  • Memory 8194, Minimum 4096, Shares default, Ballooning and KSM = ON
Guest 31002 (Ubuntu)
  • 48 (2 sockets, 24 cores), x86-64-v2-AES, NUMA=1, vCPU=18
  • Memory 86016, Minimum 65536, Shares default, Ballooning and KSM = ON
Guest 31003 (Ubuntu)
  • 48 (2 sockets, 24 cores), x86-64-v2-AES, NUMA=1, vCPU=28
  • Memory 16384, Minimum 8192, Shares default, Ballooning and KSM = ON

Not using any ZFS; all LVM-Thin.

Thanks
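Putting the configs above together (a quick sketch; values in MiB as listed): even if ballooning deflated every guest completely, the host could never reclaim below the sum of the per-VM Minimum memory settings.

```shell
# Memory / Minimum values from the three guest configs in this thread.
assigned=$((8194 + 86016 + 16384))
minimum=$((4096 + 65536 + 8192))
echo "total assigned: ${assigned} MiB (~$((assigned / 1024)) GiB)"
echo "minimum floor:  ${minimum} MiB (~$((minimum / 1024)) GiB)"
```

So the ~94 GiB the host reports sits between the ~76 GiB floor set by the minimums and the ~108 GiB ceiling set by the assignments.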
 
Guest 31002 (Ubuntu)
  • 48 (2 sockets, 24 cores), x86-64-v2-AES, NUMA=1, vCPU=18
  • Memory 86016, Minimum 65536, Shares default, Ballooning and KSM = ON
Virtualization makes the most sense when you use it to break work down into small chunks. Imagine fitting Tetris pieces that are 4 squares each, and then you have a piece that is 48 squares in size. If your use case really is that big, the only reason NOT to run it on bare metal is if you're in a cluster; and if it's not, assigning that many resources is actually detrimental. Which brings up point 2: the reason to use x86-64-v2 is when you have a cluster with heterogeneous CPU features, since v2 only exposes CPU features from the v1/v2 generation. If you don't have a cluster, you're robbing your guests of all the later features your CPU has.

As for actual RAM utilization: you WANT your host to eat all available RAM; otherwise it's sitting idle. Linux is smart enough to free it up when needed.