Ballooning issues

Nov 17, 2019
Hi, one of my VMs seems to run out of memory: the balloon device never expands the VM's memory even with 6 GB free on the server, nor does swap get used.

my Proxmox server:
4 x Intel(R) Xeon(R) CPU E3-1220 v5 @ 3.00GHz (1 Socket)
Linux 5.3.18-3-pve #1 SMP PVE 5.3.18-3 (Tue, 17 Mar 2020 16:33:19 +0100)
pve-manager/6.1-8/806edfe1

memory on server:
Code:
root@pve:~# free -h
              total        used        free      shared  buff/cache   available
Mem:           62Gi        55Gi       489Mi        67Mi       6.7Gi       6.4Gi
Swap:         3.6Gi       1.5Gi       2.1Gi

the VM in question is Ubuntu 18.04.3 64-bit
its config:
Code:
root@pve:~# cat /etc/pve/local/qemu-server/102.conf
agent: 1
audio0: device=AC97,driver=spice
balloon: 3072
boot: c
bootdisk: scsi0
cores: 4
cpu: host
hostpci0: 06:00.0
hotplug: disk,network,usb
machine: q35
memory: 6144
name: turbolxle
numa: 0
onboot: 1
ostype: l26
protection: 1
scsi0: vm-storage:102/vm-102-disk-0.qcow2,discard=on,size=64G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=e98b2a07-13c4-466f-92e4-cd789d48e15b
sockets: 1
startup: order=99,down=120
vga: virtio
vmgenid: 310e14c0-436d-4ff2-9ff2-710c412f0fde
memory on the VM when the OOM happens:
Code:
sammael@turbolxle:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:           2.8G        1.8G        190M         14M        854M        786M
Swap:          4.9G        1.0M        4.9G
relevant dmesg output from the VM:
Code:
[  287.827175] kworker/2:1: page allocation failure: order:0, mode:0x14310ca(GFP_HIGHUSER_MOVABLE|__GFP_NORETRY|__GFP_NOMEMALLOC), nodemask=(null)
[  287.827176] kworker/2:1 cpuset=/ mems_allowed=0
[  287.827179] CPU: 2 PID: 37 Comm: kworker/2:1 Not tainted 4.15.0-96-generic #97-Ubuntu
[  287.827180] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
[  287.827184] Workqueue: events_freezable update_balloon_size_func
[  287.827185] Call Trace:
[  287.827200]  dump_stack+0x6d/0x8e
[  287.827202]  warn_alloc+0xff/0x1a0
[  287.827204]  __alloc_pages_slowpath+0xdc5/0xe00
[  287.827206]  ? detach_buf+0x71/0x120
[  287.827208]  __alloc_pages_nodemask+0x29a/0x2c0
[  287.827210]  alloc_pages_current+0x6a/0xe0
[  287.827213]  balloon_page_alloc+0x15/0x20
[  287.827214]  update_balloon_size_func+0xdc/0x290
[  287.827216]  process_one_work+0x1de/0x420
[  287.827217]  worker_thread+0x32/0x410
[  287.827218]  kthread+0x121/0x140
[  287.827219]  ? process_one_work+0x420/0x420
[  287.827220]  ? kthread_create_worker_on_cpu+0x70/0x70
[  287.827225]  ret_from_fork+0x35/0x40
[  287.827227] Mem-Info:
[  287.827229] active_anon:85197 inactive_anon:95456 isolated_anon:0
                active_file:91662 inactive_file:429896 isolated_file:0
                unevictable:16 dirty:3121 writeback:0 unstable:1043
                slab_reclaimable:14176 slab_unreclaimable:8374
                mapped:66512 shmem:4891 pagetables:7687 bounce:0
                free:24728 free_pcp:0 free_cma:0
[  287.827232] Node 0 active_anon:340788kB inactive_anon:381824kB active_file:366648kB inactive_file:1719584kB unevictable:64kB isolated(anon):0kB isolated(file):0kB mapped:266048kB dirty:12484kB writeback:0kB shmem:19564kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:4172kB all_unreclaimable? no
[  287.827232] Node 0 DMA free:15908kB min:176kB low:220kB high:264kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  287.827234] lowmem_reserve[]: 0 1913 5884 5884 5884
[  287.827236] Node 0 DMA32 free:37780kB min:21916kB low:27392kB high:32868kB active_anon:102744kB inactive_anon:564kB active_file:8108kB inactive_file:1306680kB unevictable:0kB writepending:9588kB present:2080592kB managed:1487360kB mlocked:0kB kernel_stack:512kB pagetables:7196kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  287.827238] lowmem_reserve[]: 0 0 3970 3970 3970
[  287.827239] Node 0 Normal free:45224kB min:45484kB low:56852kB high:68220kB active_anon:238036kB inactive_anon:381260kB active_file:358540kB inactive_file:412704kB unevictable:64kB writepending:2160kB present:4194304kB managed:1602644kB mlocked:64kB kernel_stack:5856kB pagetables:23552kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  287.827241] lowmem_reserve[]: 0 0 0 0 0
[  287.827242] Node 0 DMA: 1*4kB (U) 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15908kB
[  287.827246] Node 0 DMA32: 187*4kB (UME) 118*8kB (UME) 53*16kB (UME) 29*32kB (UME) 9*64kB (UM) 6*128kB (UM) 1*256kB (M) 2*512kB (UE) 3*1024kB (UM) 8*2048kB (M) 3*4096kB (M) = 37836kB
[  287.827250] Node 0 Normal: 392*4kB (UME) 151*8kB (UME) 82*16kB (UM) 23*32kB (UME) 18*64kB (UME) 8*128kB (UM) 5*256kB (UME) 2*512kB (UM) 3*1024kB (UME) 4*2048kB (UM) 6*4096kB (U) = 45144kB
[  287.827258] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  287.827259] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  287.827259] 526459 total pagecache pages
[  287.827261] 0 pages in swap cache
[  287.827261] Swap cache stats: add 52, delete 52, find 0/0
[  287.827262] Free swap  = 5096748kB
[  287.827262] Total swap = 5097516kB
[  287.827263] 1572722 pages RAM
[  287.827263] 0 pages HighMem/MovableOnly
[  287.827263] 796244 pages reserved
[  287.827264] 0 pages cma reserved
[  287.827264] 0 pages hwpoisoned
[  287.827266] virtio_balloon virtio1: Out of puff! Can't get 1 pages
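
For reference, the balloon's current size can also be checked from the host side (a sketch; I believe qm monitor accepts a piped HMP command like "info balloon", which reports the guest's current balloon size in MiB):
Code:
root@pve:~# echo "info balloon" | qm monitor 102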

Is there anything I can do to make this work, or am I better off just allocating more memory with balloon=0?

Thanks!
 
Hi,

it's not about an arbitrary absolute amount:

"never inflates the memory even with 6Gb free on server"

but (according to the reference documentation) about 80% of your total memory:

"if RAM usage on the host is below 80%, [Proxmox VE] will dynamically add memory to the guest up to the maximum memory specified."
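
In your free -h output above, roughly 55 GiB of the 62 GiB total is in use (about 89%), so auto-ballooning never triggers. A quick way to check where the host sits relative to that threshold (just a sketch, using MemAvailable from /proc/meminfo as the measure):
Code:
root@pve:~# awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2} END {printf "host RAM in use: %.1f%%\n", 100*(t-a)/t}' /proc/meminfo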
 
Thanks, I see how it works now. I'll admit I wasn't looking into the documentation too much; I just assumed ballooning plays with the free RAM. Is this 80% a hardcoded limit, or is it adjustable? All things considered, I'm likely better off provisioning the RAM to the VMs manually so as to use all of it.
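
For anyone else wondering: the threshold does not seem to be exposed as a setting; it appears to be hardcoded in pvestatd's auto-ballooning code. One way to look for it (both the path and the 0.8 literal are assumptions for PVE 6.x):
Code:
root@pve:~# grep -n '0\.8' /usr/share/perl5/PVE/Service/pvestatd.pm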
 
If VM ballooning only kicks in when RAM usage on the host is below 80%, then I don't think a Proxmox host using ZFS and ballooning are compatible.

ZFS gobbles up RAM - sure, it will release it quickly enough if the host OS needs it, but from the above it sounds like Proxmox will never make the request in the first place?
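
One workaround (a sketch of a well-known ZFS knob, not something Proxmox does for you) is to cap the ARC so host usage can actually drop below that 80% threshold; 8 GiB here is just an example value:
Code:
root@pve:~# echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf   # persist across reboots
root@pve:~# update-initramfs -u
root@pve:~# echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max   # apply immediately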
 
Your VM uses PCI passthrough (hostpci0: 06:00.0) and therefore cannot use ballooning. The PCI(e) device can do Direct Memory Access (DMA), so all of the VM's memory must be pinned in actual RAM, because the device can read or write it at any time. Giving parts of that memory back to the host or to other VMs (ballooning) is therefore not possible. Please let me know if I'm mistaken; I would like ballooning with PCI passthrough as well.
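
To answer the original question: with passthrough in the mix, a fixed allocation is the way to go, e.g. (a sketch; 6144 MiB just matches the VM's existing maximum):
Code:
root@pve:~# qm set 102 --balloon 0 --memory 6144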
 
