Balloon memory behaviour in version 5.3 changed compared to 5.1

Pablo Alcaraz

Hello,

The balloon memory behavior in version 5.3 changed compared to 5.1, and not for the better.
I have a VM with a memory configuration like this:

min memory: 448 MB
max memory: 16 GB
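
For reference, this corresponds to the following lines in the VM's config file (/etc/pve/qemu-server/<vmid>.conf), or the equivalent qm command; the VMID 100 below is just an example:

  balloon: 448
  memory: 16384

  # same thing from the CLI:
  qm set 100 --balloon 448 --memory 16384

Here memory is the maximum in MB and balloon is the minimum; automatic ballooning applies when the minimum is set lower than the maximum.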

In Proxmox VE 5.1, the host provided memory to the guest machine as it was needed. Usually the VM had 3 GB, and when a process requested more RAM from the guest OS (Ubuntu 16.04), the host grew the guest VM's memory and all was good.

This was convenient because memory was dynamically assigned to guest VMs that ran memory-intensive apps for a limited time. Afterwards, in version 5.1, the host would reclaim the extra memory once it was freed and reassign it to other VMs.

My only concern was being able to run memory-hungry processes in the VMs, up to the limit of the host's memory.

The problem is that in Proxmox VE 5.3, this behavior changed. Now the host refuses to provide memory up to the 16 GB maximum. For example, it gives 4 GB more (so the guest VM's memory jumps from 3 GB to 7 GB), and if the running process needs more than that, the guest OS cannot provide it; I get an OOM error.

And there is around 10 GB of RAM available on the host!

I need the same memory behavior as in Proxmox 5.1, that is: the Proxmox host gives the guest VM all the memory it requires, up to the assigned maximum.

How can I make Proxmox 5.3 assign the required memory the way Proxmox 5.1 used to?
 
Below is the top output of the host machine with Proxmox VE 5.3 installed.

As you can observe, it has plenty of memory available, but it refuses to provide an extra 8 GB to one of the Linux guest VMs.

How could I reconfigure the host so it behaves like Proxmox VE 5.1?

top - 15:31:16 up 20:42, 1 user, load average: 2.42, 2.40, 2.32
Tasks: 235 total, 1 running, 136 sleeping, 0 stopped, 0 zombie
%Cpu(s): 26.4 us, 0.8 sy, 0.0 ni, 72.0 id, 0.5 wa, 0.0 hi, 0.3 si, 0.0 st
KiB Mem : 32361196 total, 14608380 free, 13024264 used, 4728552 buff/cache
KiB Swap: 48828412 total, 48617980 free, 210432 used. 18833060 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2785 root 20 0 5276544 266664 3456 S 100.3 0.8 1230:07 kvm
3179 root 20 0 4726636 293932 2192 S 100.3 0.9 1232:33 kvm
2389 root 20 0 4124504 1.862g 3624 S 3.3 6.0 38:37.99 kvm
32474 root 20 0 16.889g 913992 9444 S 3.3 2.8 2:55.48 kvm
2312 root 20 0 3178328 1.998g 3688 S 2.3 6.5 96:49.37 kvm
2314 root 20 0 3161880 1.802g 3500 S 2.0 5.8 34:08.70 kvm
2298 root 20 0 1989444 1.019g 3784 S 1.7 3.3 30:47.88 kvm
16674 root 20 0 9342700 851168 3732 S 1.7 2.6 21:13.30 kvm
1494 root rt 0 201636 70672 51624 S 0.7 0.2 8:43.06 corosync
285 root 20 0 0 0 0 S 0.3 0.0 0:04.30 btrfs-transacti
2912 root 20 0 1208668 405888 3664 S 0.3 1.3 2:29.81 kvm
3440 root 20 0 1339224 413788 3652 S 0.3 1.3 1:56.48 kvm
3628 root 20 0 1081176 151212 2824 S 0.3 0.5 2:25.31 kvm
7686 root 20 0 557300 64140 10984 S 0.3 0.2 0:02.58 pvedaemon worke
1 root 20 0 57400 7104 5248 S 0.0 0.0 0:04.16 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 kthreadd
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H
6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_wq
7 root 20 0 0 0 0 S 0.0 0.0 0:00.24 ksoftirqd/0
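
For anyone reproducing this: the guest's actual balloon size can also be checked from the host through the QEMU monitor (VMID 100 below is just an example; the output line is illustrative, the value is in MB):

  qm monitor 100
  qm> info balloon
  balloon: actual=7168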
 
I dug deeper into the problem. It happens because pvestatd reclaims memory from the guest VMs using criteria that, at least, do not fit my scenario.

It decides, first, that a Proxmox node must keep 20% of the RAM for itself. That alone is too generic: 20% of what? It is not the same whether the node hosts a ZFS volume or not, and it is not the same whether the node has 1 TB of RAM or 4 GB. The amount of RAM the host actually requires is different in each case.
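
To make the heuristic concrete, here is a minimal Python sketch of the goal computation as I understand it from pvestatd's auto-ballooning code (the real code is Perl; the names and the exact formula here are my reading of it, not a verified copy):

  # How much memory pvestatd is still willing to hand to guests. If the
  # result is negative, it starts reclaiming from guests, even when the
  # host still has free RAM.
  RESERVED_FRACTION = 0.20  # the fixed 20% the node keeps for itself

  def balloon_goal(host_total: int, host_used: int) -> int:
      # goal = 80% of total memory, minus what is already used
      return int(host_total * (1 - RESERVED_FRACTION)) - host_used

The problem is that RESERVED_FRACTION is the same whether host_total is 4 GB or 1 TB, and regardless of whether ZFS needs a large cache on the node.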

The other point is: when should a Proxmox node decide to ask for the memory back? Excluding extreme cases of memory exhaustion, guest VMs should "enjoy" the memory they just received until other guest VMs or the host actually need it.

I am not sure yet what is best. Right now, memory is taken back from the guest VMs as soon as possible. Any feedback is welcome.

Unless there is another way, perhaps my only option is to modify pvestatd or AutoBalloon.pm. If I find some free time, I may create a patch for this. But before that:
  1. I would like to know if there is another way to tune memory assignment to guest VMs without coding.
  2. Any feedback on what criteria would make sense is welcome.
Right now I am leaning toward replacing the 20% of RAM reserved for the host with a fixed 3 GB, since I use neither ZFS nor Ceph; see the sketch below.
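
The change I have in mind would look roughly like this (again just a sketch in Python, not the actual Perl; the 3 GB figure is my own choice for a node without ZFS or Ceph):

  FIXED_RESERVE = 3 * 1024**3  # keep a fixed 3 GiB for the host, in bytes

  def balloon_goal_fixed(host_total: int, host_used: int) -> int:
      # Same goal computation, but with a fixed host reserve instead of
      # a percentage that scales badly between small and huge nodes.
      return (host_total - FIXED_RESERVE) - host_used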
 
