Windows VM RAM usage

matto

New Member
Dec 19, 2024
Hi All,

I have seen a number of posts regarding Windows VM RAM usage, but none seem to answer my question.

ISSUE:

Whenever I clone or create a new Windows VM (we are using Windows Server 2022), the VM deploys fine, but the reported RAM usage climbs to and stays at approximately 95%.

In image 1 you can see this behaviour: the VM (999 FORUM-TEST-SERV) has been newly cloned, is powered on, and is just running the OS idle. It is currently on node S-797 and the RAM usage is at 93%.

In images 2/3 you can see that I have migrated the same VM to another node (S-1121), and the RAM usage has stabilised to what it should be (and what is shown inside the Windows VM).

Migration details:

Migration is a live migration and the requested state is running.

Code:
root@S-797:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.12-3-pve)
pve-manager: 8.2.7 (running version: 8.2.7/3e0176e6bb2ade3b)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-3
proxmox-kernel-6.8.12-3-pve-signed: 6.8.12-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph: 18.2.4-pve3
ceph-fuse: 18.2.4-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx9
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.8
libpve-cluster-perl: 8.0.8
libpve-common-perl: 8.2.5
libpve-guest-common-perl: 5.1.4
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.10
libpve-storage-perl: 8.2.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-4
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.5.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.2.4
pve-cluster: 8.0.8
pve-container: 5.2.0
pve-docs: 8.2.3
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.0.7
pve-firmware: 3.14-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.4
pve-qemu-kvm: 9.0.2-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.4
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1


Code:
root@S-1121:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.12-3-pve)
pve-manager: 8.2.7 (running version: 8.2.7/3e0176e6bb2ade3b)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-3
proxmox-kernel-6.8.12-3-pve-signed: 6.8.12-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph: 18.2.4-pve3
ceph-fuse: 18.2.4-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx9
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.8
libpve-cluster-perl: 8.0.8
libpve-common-perl: 8.2.5
libpve-guest-common-perl: 5.1.4
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.10
libpve-storage-perl: 8.2.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-4
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.5.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.2.4
pve-cluster: 8.0.8
pve-container: 5.2.0
pve-docs: 8.2.3
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.0.7
pve-firmware: 3.14-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.4
pve-qemu-kvm: 9.0.2-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.4
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1
 

Attachments

  • image 1.png (40.2 KB)
  • image 2.png (40.1 KB)
  • image 3.png (136 KB)
Hey matto, could you please post a screenshot with the task manager in Windows when the RAM usage is high? Windows uses some RAM as cache, so for example Windows might show 5% RAM usage, while 90% is actually the cache. This would mean that 95% of RAM is available in Windows (cache + 5% free), while PVE would show that only 5% is available.
 
Hi l.leahu-vladucu,

As requested:
Image 5 is a screenshot of the Proxmox summary showing approx. 95% RAM usage.
Image 6 is a screenshot of Windows Task Manager at the same time, showing 2-3 GB of usage.

If you migrate the VM to another host, the RAM usage shown is the same in both the Proxmox summary and the OS Task Manager, but as soon as any power-cycle action takes place it goes back up and stays around 95% in the Proxmox summary, whilst the OS is still only using 2-3 GB.

Thanks for taking a look :)
 

Attachments

  • image 5.png (40.9 KB)
  • image 6.png (162.5 KB)
On Windows VMs, to see correct RAM usage, the VM configuration must have ballooning enabled and the QEMU guest agent enabled, and the QEMU guest agent must be installed and running correctly in Windows.
You also need to make sure that the QEMU guest agent service has not crashed when the information shown is incorrect.
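For reference, the two VM options mentioned above can be set from the Proxmox host CLI. A minimal sketch, assuming a hypothetical VMID of 999 and 32 GiB of configured memory (adjust to your VM):

```shell
# Enable the QEMU guest agent device for the VM (hypothetical VMID 999):
qm set 999 --agent enabled=1

# Enable the ballooning device; setting balloon equal to memory keeps
# a fixed allocation while still exposing the device to the guest
# (values are in MiB):
qm set 999 --memory 32768 --balloon 32768
```

The guest agent and virtio balloon drivers still need to be installed inside Windows (e.g. from the virtio-win ISO) for either device to do anything.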
 
You also need to make sure that the QEMU guest agent service has not crashed when the information shown is incorrect.
On Windows, the Ballooning Agent provides the RAM usage information to the Proxmox VE host via the Ballooning device. Even if Memory and Minimum Memory are set to the same value -> no ballooning enabled.

If that information is not available, Proxmox VE will show what the VM process is actually using on the host, which usually differs a lot from the point of view of the guest.
 
On Windows, the Ballooning Agent provides the RAM usage information to the Proxmox VE host via the Ballooning device. Even if Memory and Minimum Memory are set to the same value -> no ballooning enabled.
This seems partially incorrect ("Even if ... no ballooning enabled"): with ballooning disabled, the Windows RAM usage is not shown.
I also tried just now on a Windows VM: with ballooning enabled, it shows the correct Windows RAM usage.
I tried restarting the VM with ballooning disabled and also waited some minutes, but it did not show the Windows RAM usage.
I enabled ballooning again, restarted the VM, and shortly after Windows fully booted it went back to showing the Windows RAM usage.

If that information is not available, Proxmox VE will show what the VM process is actually using on the host, which usually differs a lot from the point of view of the guest.
Yes
 
with ballooning disabled, the Windows RAM usage is not shown.
What do you mean by "ballooning disabled"? Maybe I wasn't precise enough. The ballooning device needs to be enabled, and the ballooning agent must be running in Windows. When both Memory and Minimum Memory are set to the same value, there will be no memory ballooning happening. That's what I meant by "ballooning disabled".
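To illustrate the distinction, here is a sketch of the relevant lines in a VM config file (path and VMID 999 are placeholders; exact values depend on your setup):

```
# /etc/pve/qemu-server/999.conf
# Ballooning device present, but balloon == memory -> no dynamic
# ballooning happens, yet the guest still reports its RAM usage:
memory: 32768
balloon: 32768

# By contrast, "balloon: 0" removes the ballooning device entirely,
# so the guest cannot report RAM usage through it:
# balloon: 0
```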
 
Thanks, now it's clear. From your previous message I understood you to mean ballooning disabled in the configuration (and in that case it doesn't work).

Edit:
I also made an error in my first message: it is not the QEMU guest agent service that must be enabled and running in Windows (for the Windows RAM information to be visible in the Proxmox GUI), but the Ballooning Agent, as @aaron wrote (more exactly, a service named "Balloon Service").

To recap: in the VM configuration the ballooning device must be enabled, and in Windows the virtio drivers must be installed and the "Balloon Service" must be enabled and running.
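A quick way to verify the Windows side of that recap, sketched below. The service name "BalloonService" is what recent virtio-win installers register, but it may vary by virtio-win version, so treat it as an assumption:

```shell
# On the Windows guest (cmd or PowerShell), check the balloon service:
sc query BalloonService

# If it is not installed, it can be registered from the virtio-win
# Balloon driver directory with:
#   blnsvr.exe -i
```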
 
Hi All,

Thanks for your replies, they have been really informative!

I did the following:

Enabled Ballooning on the VM, but kept the min and max the same.

Result:

Proxmox showed the correct value on the summary graph for the windows VM!

Unintended result:

Now the Proxmox host believes the VM is using the full amount of memory provisioned to it, rather than the actual amount being used. Basically the issue has flipped o_O

Originally the host showed what was actually being used and the Windows VM summary did not; now it seems this has reversed XD
 
In fact you only get a single view of the RAM usage: if the ballooning service is configured (in the VM configuration) and active and working (in the OS of the VM), you see the RAM used internally by the guest; otherwise you see the RAM used by the VM process on the host side (which should correspond to what is currently allocated on the host).
It would be useful to have two views showing both values, but as far as I know that is not implemented yet.
 
Hi All,

Sorry it has been so long since I last posted. I hope everyone had a great holiday season!

I have been doing lots of testing and found the following:

If I have a Windows VM with 32 GB or less of RAM and ballooning disabled, I find the stats read as normal for CPU/RAM/HDD etc.

If I have a Windows VM with 33 GB or more of RAM and ballooning disabled, the stats for CPU/HDD are correct, but the RAM stats are shown incorrectly.
-> If I then enable ballooning, all stats are shown correctly on the VM summary page, but the host summary page then shows incorrect RAM stats (effectively the problem moves from the VM to the host).

Regarding the stats being shown differently when ballooning is activated: this might not be an issue, just a misunderstanding on my part of how ballooning works. In the host stats it looks like the upper band of the balloon settings is counted as allocated, even if the VM is not using it at that time. Maybe this is how memory over-provisioning is shown in Proxmox?

If it is not, then maybe it is a bug?
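One way to narrow this down is to ask the balloon device itself what it reports, independent of the summary graphs. A sketch, assuming a hypothetical VMID of 999:

```shell
# Open the QEMU monitor for the VM (hypothetical VMID 999):
qm monitor 999
# At the qm> prompt, query the balloon device's current state:
#   info balloon

# The verbose status output also includes balloon/memory details:
qm status 999 --verbose
```

Comparing the balloon value with what the host summary shows for the VM should help distinguish a reporting quirk from actual over-allocation.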