Proxmox min/max memory, ballooning, KSM sharing and RAM overprovisioning

Pavel Hruška

Member
May 1, 2018
Hello all, I'd like to get someone's view on using min/max memory values, ballooning device and KSM sharing, all those things related to memory management (KVM and Windows Server guests especially).

I do understand how they work and what they do, at least I think so :)...

But my point is: WHY should anyone use those things? It looks like there is no way to manage RAM overprovisioning with Proxmox. To start all VMs, the MAX memory always has to be allocatable for every guest.

So why would I bother to use ballooning with min/max values, when I just cannot overprovision the host memory? Every VM will always have all that MAX memory available... which one would then ask for more RAM at runtime? And why?

The same goes for KSM sharing. It can save memory, but for whom? How can it improve guest density, and how can that density persist across host reboots?
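(As a side note, I can at least watch what KSM is currently merging on a host via sysfs - the path below is the standard kernel KSM interface, not Proxmox-specific:)

```shell
# pages currently deduplicated by KSM (multiply by the page size for bytes);
# prints a fallback message when the KSM interface is absent
cat /sys/kernel/mm/ksm/pages_sharing 2>/dev/null || echo "KSM not available"
```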

In theory overprovisioning is probably possible, but then I cannot start all those VMs in bulk on host reboot; I would need to ensure delayed guest startups...
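(I know delayed startups can at least be configured per VM with the startup option - the VMIDs and delays below are just an illustration:)

```shell
# hypothetical ordering: VM 100 boots first, then the host waits 60s
# before starting the next VM in the sequence
qm set 100 -startup order=1,up=60
qm set 101 -startup order=2,up=60
```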

Am I missing something? Any ideas?

Thanx!
 
but that is not true at all, here:

Code:
root@host:~# free -h
              total        used        free      shared  buff/cache   available
Mem:            30G        9.0G         20G        470M        1.7G         21G
Swap:          7.5G          0B        7.5G
root@host:~# qm config 100
bootdisk: scsi0
cores: 4
cpu: host
memory: 4096
name: test
net0: virtio=02:5F:AA:13:DE:7D,bridge=vmbr0
numa: 0
ostype: l26
scsi0: local:100/vm-100-disk-1.qcow2,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=23de5e89-9614-4947-a80e-9c3f879c9fc9
sockets: 1
root@host:~# qm set 100 -memory 25000
update VM 100: -memory 25000
root@host:~# qm start 100
root@host:~# free -h
              total        used        free      shared  buff/cache   available
Mem:            30G        9.1G         19G        469M        1.7G         21G
Swap:          7.5G          0B        7.5G
root@host:~# qm config 101
bootdisk: scsi0
cores: 2
memory: 4096
name: test2
net0: virtio=62:60:A8:B0:AE:53,bridge=vmbr0
numa: 0
ostype: l26
scsi0: local:101/vm-101-disk-1.qcow2,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=0f327e04-d2c5-402c-838c-87ddbc46e9bd
sockets: 1
root@host:~# qm set 101 -memory 25000
update VM 101: -memory 25000
root@host:~# qm start 101
root@host:~# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
      100 test               running    25000             32.00 1406
      101 test2              running    25000             32.00 1776
root@host:~# free
              total        used        free      shared  buff/cache   available
Mem:       32266996    10384348    20134732      481960     1747916    21244152
Swap:       7812092           0     7812092
root@host:~#

as you can see, I can create multiple VMs which have more memory configured than I have free
granted, you can only start a VM if there is enough free+swap available at that moment, but the VMs don't use all of it, so you can start multiple such VMs = overcommitting
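Roughly, the start-time check against the numbers above looks like this (my sketch of the free+swap comparison, not the exact code Proxmox runs):

```shell
# values in kB, taken from the `free` output above
free_kb=20134732
swap_free_kb=7812092
vm_mem_kb=$((25000 * 1024))   # VM 101 is configured with 25000 MB

# the VM can start because free + swap still covers its full memory size
if [ $((free_kb + swap_free_kb)) -ge "$vm_mem_kb" ]; then
    echo "enough free+swap to start"
fi
```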
 
Ah, you are right, I've tested it now and was successful.

So to start a VM, there must be at least the VM's configured memory available in total RAM + swap?

And I need to clarify: the min/max RAM values in the VM configuration - is that the range where the balloon operates? Or does it affect something else?
 
So to start a VM, there must be at least the VM's configured memory available in total RAM + swap?
yes, though there is a kernel setting where you can overcommit even more (with vm.overcommit_memory = 1 an allocation never fails, so you could even have a VM with more memory than the host has), but use it at your own risk
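For example, on a plain Linux level that would look like this (standard sysctl usage, not Proxmox-specific):

```shell
# allow every allocation to succeed regardless of available RAM+swap
# (the default, 0, means heuristic overcommit)
sysctl vm.overcommit_memory=1

# persist the setting across reboots
echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
```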

And I need to clarify: the min/max RAM values in the VM configuration - is that the range where the balloon operates? Or does it affect something else?
yes exactly, though ballooning only kicks in at >= 80% host memory usage and then only gradually; more information is in our reference documentation, see https://pve.proxmox.com/wiki/Qemu/KVM_Virtual_Machines#qm_virtual_machines_settings under Memory
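In qm terms, memory is the maximum and balloon is the minimum the balloon driver may shrink the guest down to (VMID 100 here is just an example):

```shell
# guest sees up to 8 GiB; under host memory pressure the balloon
# can reclaim memory down to 4 GiB
qm set 100 -memory 8192 -balloon 4096

# setting balloon to 0 disables the ballooning device entirely
qm set 100 -balloon 0
```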
 
Thank you very much for the answers!

One more thing closely related to this: I've been playing with the "vm.swappiness" setting (https://en.wikipedia.org/wiki/Swappiness), which defaults to 60. I had a bad experience on one host where only a single VM was virtualized - a host with 64GB RAM and a VM with 48GB RAM, where on VM start all of the 8GB swap was suddenly used, with a really bad impact on performance. Setting vm.swappiness = 1 to avoid swapping was enough in this case, but for other scenarios, is leaving the default value (60) preferred for generic usage?
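For reference, this is what I set on that host (standard sysctl usage):

```shell
# prefer reclaiming page cache over swapping out anonymous (VM) memory
sysctl vm.swappiness=1

# persist the setting across reboots
echo 'vm.swappiness = 1' >> /etc/sysctl.conf
```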
 
