Balloon driver causes 100% RAM usage on Windows

flako777

New Member
Oct 13, 2025
Hi, I'm trying to get memory ballooning to work on Proxmox 8.4.14 with Windows Server 2022 + VirtIO drivers 1.266 (it also fails on Windows Server 2012).
Basically, I set the VM to a minimum of 2G and a maximum of 6G. Over time, RAM usage reaches 100%, even though no process appears to be consuming it.

I only managed to free the RAM by disabling the VirtIO Balloon driver in Device Manager. (If I re-enable it, RAM usage slowly grows again.)

This happens when I start copying files or running MSSQL.

vm.conf:
agent: 1
balloon: 2048
bios: ovmf
boot: order=ide0;virtio1
cores: 5
cpu: host
efidisk0: zpool_4T:vm-104-disk-2,efitype=4m,pre-enrolled-keys=1,size=1M
ide0: none,media=cdrom
machine: pc-q35-8.1
memory: 6144
meta: creation-qemu=8.1.5,ctime=1735219775
name: DESA
net0: virtio=BC:24:11:3C:70:CA,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win11
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=4bc1976b-1a98-4b51-91cc-16bde73dd072
sockets: 2
vga: std
virtio1: zpool_4T:vm-104-disk-1,aio=threads,size=83G
virtio2: zpool_4T:vm-104-disk-0,iothread=1,size=2T
vmgenid: 0f086493-dddd-480a-b0af-fe8fa244a42d
vmstatestorage: zpool_4T
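
(For reference, a minimal sketch of how the ballooning-related values above could be set from the host CLI, assuming VMID 104 as in this config:)

# memory = maximum RAM in MiB, balloon = minimum the guest can be shrunk to
qm set 104 --memory 6144 --balloon 2048
# setting "balloon" to 0 disables ballooning for the VM entirely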


Some images:

1Ram_ini.PNG: RAM status at OS startup.

2Ram_Fin-Full.PNG and 2Ram_Fin-Full-processes.PNG:

RAM usage is at 93%. If I leave it for another 24 hours, it reaches 100% (but I couldn't capture a screenshot of that).

3Ram_Fin-Full-RamMap.PNG:
You can see that the consumption shows up as driver-locked memory, which appears to be the Balloon driver.

Honestly, I've already tried disabling KSM, different versions of the VirtIO drivers, and various other AI-suggested tweaks, but I can't find a way to fix it.

Should ballooning be working? Or is there some kind of bug?
Thanks for reading this far.
 

Attachments

  • 1Ram_ini.PNG
  • 2Ram_Fin-Full.PNG
  • 2Ram_Fin-Full-procesos.PNG
  • 3Ram_Fin-Full-RamMap.PNG
As far back as I can remember, memory ballooning has always been a little 'buggy' with Windows; the general consensus is to leave it disabled for Windows guests.
 
The problem with Microsoft Windows is that I don't believe keeping memory ballooning working for QEMU/KVM across their update cycles is anywhere on their list of priorities; for them it's all about Hyper-V (their own virtualisation layer).

So basically, any 'ballooning' breakage resulting from an update tends to be left to the community to deal with and to raise enough noise upstream until someone eventually gets around to updating the balloon driver and/or the QEMU codebase, or even Microsoft itself if the noise is loud enough.

My two pence: based on that alone, it really wouldn't be wise to use QEMU/KVM ballooning with a Microsoft OS in a production environment.
 
When your host reaches 80% memory usage, the balloon driver in each running VM starts to inflate. The memory taken by the balloon drivers is given back to the hypervisor. On Linux you don't notice it, because the balloon driver is well integrated and it simply looks like the VM's memory usage is decreasing.
On Windows it looks like more memory is being used, but that is correct: the balloon driver is taking memory from the VM (down to the configured minimum) to give it back to the hypervisor.
 
By default, if the host is using less than 80% of its memory, the balloon driver uses 0 memory (so the VM gets its maximum memory).

When the host goes above 80% memory usage, the pvestatd daemon tries to inflate the balloon a little on each VM to bring host memory back down to 80%, with each VM's configured minimum (balloon) as the lower limit.
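
If you want to watch this from the host side, here is a minimal sketch (assuming VMID 104 as in the config above; the exact output wording may vary):

# On the PVE host, open the QEMU monitor of the VM
qm monitor 104
# At the monitor prompt, query the balloon device:
info balloon
# It reports the memory currently assigned to the guest (in MiB).
# When pvestatd inflates the balloon, this value drops towards the configured minimum (2048 here).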
 
Hi, I think I found the root cause.
The ZFS ARC cache was using 12GB of the host's 32GB.
This seems to push the host over the threshold, so PVE activates ballooning, and the balloon driver then inflates inside the guest because of the RAM consumed by the ZFS ARC.
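
For anyone who wants to verify this on their own host, one way to read the current ARC size and its limits (standard OpenZFS kstats, nothing specific to my setup):

# Current ARC size plus its min/max targets, in bytes
awk '$1 == "size" || $1 == "c_min" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats
# arc_summary (shipped with the ZFS utilities) prints a more readable report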


With these settings:
# Minimum 1GB
echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_min
# Maximum 3GB
echo 3221225472 > /sys/module/zfs/parameters/zfs_arc_max

My issue is fixed (I can't reproduce it anymore).
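
Note that those echo commands only last until the next reboot. A minimal sketch of how the same limits are usually made persistent on a Proxmox host (standard OpenZFS module options, same values as above):

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=1073741824
options zfs zfs_arc_max=3221225472

# Rebuild the initramfs so the options are applied at boot, then reboot:
update-initramfs -u -k all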

Otherwise, I would have to force the host's RAM usage above 80% to reproduce it.
But I still don't see why it's an optimal solution for the Balloon driver to consume all of the guest's RAM. (I don't recall having similar issues with ballooning on VMware or Xen.)

Thanks everyone for the responses.
 
I had better luck by throttling the ZFS ARC, especially after upgrading to Proxmox v9, where it went out of control. I set mine to around 2GB and so far everything has been fine; both Proxmox and Windows show sensible memory values with ballooning enabled.

I always thought the issue was somewhere in my Proxmox home-lab setup, because working daily with Hyper-V, VMware and Nutanix, none of them ever showed issues with their equivalent feature on any Windows server.
 
The behavior remains the same (because it's the intended behavior).
The driver-locked memory grows to 1.5G when the host is using 80% of its RAM.
In 4Ram_arc3G.PNG you can see the dips and spikes produced by disabling the VirtIO Balloon driver:

I find the Balloon implementation unintuitive.
I'm just writing this up in case it helps someone in the future.

I understand that ZFS ARC vs. ballooning is a known issue: https://bugzilla.proxmox.com/show_bug.cgi?id=4482

Thanks
 

Attachments

  • 4Ram_arc3G.PNG
  • 4Ram_arc3G_pve Sumary.PNG
  • 4Ram_arc3G_ramMaps.PNG