[SOLVED] Memory Ballooning - events_freezable update_balloon_size_func [virtio_balloon]

Mr.BlueBear
Apr 3, 2018
I have a PVE cluster where one host is currently running at 80-82% physical memory usage, with swap 99% full.
VMs are configured with memory ballooning (minimum and maximum values). However, when a VM requests additional memory, the allocation fails:
Code:
kernel: kworker/0:1: page allocation failure: order:0, mode:0x310da
kernel: CPU: 0 PID: 23148 Comm: kworker/0:1 Kdump: loaded Not tainted 3.10.0-1127.el7.x86_64 #1
kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
kernel: Workqueue: events_freezable update_balloon_size_func [virtio_balloon]
kernel: Call Trace:
kernel: [<ffffffff9b77ff85>] dump_stack+0x19/0x1b
kernel: [<ffffffff9b1c4ac0>] warn_alloc_failed+0x110/0x180
kernel: [<ffffffff9b77b4a0>] __alloc_pages_slowpath+0x6bb/0x729
kernel: [<ffffffff9b1c9146>] __alloc_pages_nodemask+0x436/0x450
kernel: [<ffffffff9b218e18>] alloc_pages_current+0x98/0x110
kernel: [<ffffffff9b2496a5>] balloon_page_alloc+0x15/0x20
kernel: [<ffffffffc0381811>] update_balloon_size_func+0xb1/0x290 [virtio_balloon]
kernel: [<ffffffff9b0be6bf>] process_one_work+0x17f/0x440
kernel: [<ffffffff9b0bf7d6>] worker_thread+0x126/0x3c0
kernel: [<ffffffff9b0bf6b0>] ? manage_workers.isra.26+0x2a0/0x2a0
kernel: [<ffffffff9b0c6691>] kthread+0xd1/0xe0
kernel: [<ffffffff9b0c65c0>] ? insert_kthread_work+0x40/0x40
kernel: [<ffffffff9b792d37>] ret_from_fork_nospec_begin+0x21/0x21
kernel: [<ffffffff9b0c65c0>] ? insert_kthread_work+0x40/0x40

Code:
kernel: virtio_balloon virtio0: Out of puff! Can't get 1 pages

According to the docs "When setting the minimum memory lower than memory, Proxmox VE will make sure that the minimum amount you specified is always available to the VM, and if RAM usage on the host is below 80%, will dynamically add memory to the guest up to the maximum memory specified."

Question: when host memory usage is above 80%, what is supposed to happen when a VM requests additional memory?
In my case, it fails. Is that the intended behavior?
 
As far as I understand, the VM won't request anything. If the host's RAM usage goes above 80%, PVE will slowly remove allocated RAM from all the VMs at a fixed ratio (which you can tune via the "Shares" setting), regardless of whether a VM actually needs that RAM. It keeps reclaiming as long as the host's RAM usage stays above 80%, or until every VM is already down to its defined minimum. If the host's RAM usage drops below 80%, the host gives RAM back to the VMs, again at a fixed ratio.
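To make the shares-proportional reclaim described above concrete, here is a rough sketch in Python. This is NOT the actual pvestatd algorithm; the function name, the dict layout, and the simple proportional split are all illustrative assumptions about how such a reclaim could be distributed:

```python
# Illustrative sketch of shares-proportional balloon reclaim.
# All names (balloon_targets, the vm dict keys) are hypothetical,
# not Proxmox VE internals.

def balloon_targets(vms, reclaim_bytes):
    """Distribute reclaim_bytes across VMs in proportion to 'shares',
    never shrinking a VM below its configured minimum."""
    total_shares = sum(vm["shares"] for vm in vms) or 1
    targets = {}
    for vm in vms:
        want = reclaim_bytes * vm["shares"] / total_shares
        # A VM can only give back what it holds above its minimum.
        give = min(want, vm["current"] - vm["min"])
        targets[vm["name"]] = vm["current"] - max(give, 0)
    return targets

vms = [
    {"name": "vm100", "shares": 1000, "min": 2 << 30, "current": 4 << 30},
    {"name": "vm101", "shares": 1000, "min": 1 << 30, "current": 3 << 30},
]
# With equal shares, a 2 GiB reclaim takes ~1 GiB from each VM.
print(balloon_targets(vms, 2 << 30))
```

Note how the minimum caps the reclaim: once a VM is at its configured minimum, further pressure has to come from the other VMs or from the host itself.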
 
In the end, the issue was with swap. Since this cluster was running PVE 6.4, I disabled swap on all nodes and the issue hasn't returned.
In PVE 7.x, swap is disabled by default, which could explain why I don't see this issue on the PVE 7.1 cluster.
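For anyone wanting to do the same, a session like the following could be used to inspect and then disable swap (commands assumed to run as root on a Debian-based PVE node; the `sed` pattern is a common idiom, so review your `/etc/fstab` before editing it):

```shell
# Inspect swap state first.
swapon --show                 # lists active swap devices; empty if none
grep -i swap /proc/meminfo    # SwapTotal / SwapFree counters

# To actually disable swap at runtime and keep it off across reboots
# (destructive -- review /etc/fstab before running):
#   swapoff -a
#   sed -i '/\sswap\s/s/^/#/' /etc/fstab
```

Disabling swap only at runtime with `swapoff -a` is not persistent; the fstab edit (or removing the swap logical volume) is what keeps it off after a reboot.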
 
