Ballooning problem - "Out of puff!" but host has plenty of free memory

masgo

I am having some problems with one of my VMs. Its dmesg is full of "Out of puff!" errors, while the host never came close to running out of memory.

Any idea why this happens?



Code:
[Mi Mär 15 01:10:59 2023] kworker/2:0: page allocation failure: order:0, mode:0x6310ca(GFP_HIGHUSER_MOVABLE|__GFP_NORETRY|__GFP_NOMEMALLOC), nodemask=(null)
[Mi Mär 15 01:10:59 2023] kworker/2:0 cpuset=/ mems_allowed=0
[Mi Mär 15 01:10:59 2023] CPU: 2 PID: 1507 Comm: kworker/2:0 Not tainted 4.19.0-23-amd64 #1 Debian 4.19.269-1
[Mi Mär 15 01:10:59 2023] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014
[Mi Mär 15 01:10:59 2023] Workqueue: events_freezable update_balloon_size_func [virtio_balloon]
[Mi Mär 15 01:10:59 2023] Call Trace:
[Mi Mär 15 01:10:59 2023]  dump_stack+0x66/0x81
[Mi Mär 15 01:10:59 2023]  warn_alloc.cold.122+0x6c/0xec
[Mi Mär 15 01:10:59 2023]  __alloc_pages_slowpath+0xd1d/0xd30
[Mi Mär 15 01:10:59 2023]  ? check_preempt_curr+0x7a/0x90
[Mi Mär 15 01:10:59 2023]  __alloc_pages_nodemask+0x28b/0x2b0
[Mi Mär 15 01:10:59 2023]  update_balloon_size_func+0x109/0x2c0 [virtio_balloon]
[Mi Mär 15 01:10:59 2023]  process_one_work+0x1a7/0x3a0
[Mi Mär 15 01:10:59 2023]  worker_thread+0x30/0x390
[Mi Mär 15 01:10:59 2023]  ? create_worker+0x1a0/0x1a0
[Mi Mär 15 01:10:59 2023]  kthread+0x112/0x130
[Mi Mär 15 01:10:59 2023]  ? kthread_bind+0x30/0x30
[Mi Mär 15 01:10:59 2023]  ret_from_fork+0x35/0x40
[Mi Mär 15 01:10:59 2023] Mem-Info:
[Mi Mär 15 01:10:59 2023] active_anon:1207333 inactive_anon:194526 isolated_anon:0
                            active_file:426371 inactive_file:849347 isolated_file:0
                            unevictable:0 dirty:99172 writeback:0 unstable:0
                            slab_reclaimable:1334597 slab_unreclaimable:15158
                            mapped:26833 shmem:11414 pagetables:4879 bounce:0
                            free:33726 free_pcp:314 free_cma:0
[Mi Mär 15 01:10:59 2023] Node 0 active_anon:4829332kB inactive_anon:778104kB active_file:1705484kB inactive_file:3397388kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:107332kB dirty:396688kB writeback:0kB shmem:45656kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 5347328kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[Mi Mär 15 01:10:59 2023] Node 0 DMA free:15908kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[Mi Mär 15 01:10:59 2023] lowmem_reserve[]: 0 2963 15994 15994 15994
[Mi Mär 15 01:10:59 2023] Node 0 DMA32 free:64452kB min:12508kB low:15632kB high:18756kB active_anon:38844kB inactive_anon:1136kB active_file:176188kB inactive_file:817932kB unevictable:0kB writepending:74828kB present:3129192kB managed:3034328kB mlocked:0kB kernel_stack:60kB pagetables:124kB bounce:0kB free_pcp:408kB local_pcp:320kB free_cma:0kB
[Mi Mär 15 01:10:59 2023] lowmem_reserve[]: 0 0 13030 13030 13030
[Mi Mär 15 01:10:59 2023] Node 0 Normal free:54544kB min:55004kB low:68752kB high:82500kB active_anon:4790488kB inactive_anon:777188kB active_file:1529684kB inactive_file:2579636kB unevictable:0kB writepending:320848kB present:13631488kB managed:13280764kB mlocked:0kB kernel_stack:4596kB pagetables:19392kB bounce:0kB free_pcp:848kB local_pcp:592kB free_cma:0kB
[Mi Mär 15 01:10:59 2023] lowmem_reserve[]: 0 0 0 0 0
[Mi Mär 15 01:10:59 2023] Node 0 DMA: 1*4kB (U) 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15908kB
[Mi Mär 15 01:10:59 2023] Node 0 DMA32: 19*4kB (UME) 102*8kB (UME) 1873*16kB (UME) 1052*32kB (UME) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 64524kB
[Mi Mär 15 01:10:59 2023] Node 0 Normal: 21*4kB (UME) 25*8kB (UME) 14*16kB (UMH) 1702*32kB (UEH) 1*64kB (H) 1*128kB (H) 1*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 55420kB
[Mi Mär 15 01:10:59 2023] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[Mi Mär 15 01:10:59 2023] 1303219 total pagecache pages
[Mi Mär 15 01:10:59 2023] 16017 pages in swap cache
[Mi Mär 15 01:10:59 2023] Swap cache stats: add 421020, delete 405003, find 17749/20908
[Mi Mär 15 01:10:59 2023] Free swap  = 6230268kB
[Mi Mär 15 01:10:59 2023] Total swap = 7811068kB
[Mi Mär 15 01:10:59 2023] 4194168 pages RAM
[Mi Mär 15 01:10:59 2023] 0 pages HighMem/MovableOnly
[Mi Mär 15 01:10:59 2023] 111418 pages reserved
[Mi Mär 15 01:10:59 2023] 0 pages hwpoisoned
[Mi Mär 15 01:10:59 2023] virtio_balloon virtio0: Out of puff! Can't get 1 pages
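
The failing allocation comes from the guest's virtio_balloon driver while it tries to inflate the balloon. For reference, the current balloon state can be checked from the host with qm monitor (a sketch; 100 is a placeholder for the VM's ID):
Code:
# on the Proxmox host; 100 is a placeholder VM ID
qm monitor 100
# then, at the qm> prompt, query the current balloon size:
info balloon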
 
Hi,
do you have memory hotplug enabled for the VM? Otherwise, I'd guess the host having lots of available memory is not that relevant. What does the memory usage/load within the VM look like?

Please share the output of pveversion -v and qm config <ID>, where <ID> is the ID of the VM.
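
For reference, a sketch of the commands (the first two run on the Proxmox host, with <ID> staying a placeholder for the VM's numeric ID; the memory check runs inside the guest):
Code:
# on the Proxmox host
pveversion -v
qm config <ID>
# inside the guest, to see memory usage and swap pressure
free -m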
 
The secret is the KSM_THRES_COEF setting in /etc/ksmtuned.conf:
Code:
KSM_THRES_COEF=20
This effectively reserves 20% of RAM for the host OS: ksmtuned starts KSM page merging once free host memory drops below that threshold. The change requires restarting ksmtuned.service.
I'm not sure if that's always safe while VMs are running.
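
For completeness, a sketch of applying and verifying the change on the host (assuming the stock ksmtuned service and the standard KSM sysfs path):
Code:
# after editing /etc/ksmtuned.conf, restart the tuning daemon
systemctl restart ksmtuned.service
# optional: check how many pages KSM is currently sharing
cat /sys/kernel/mm/ksm/pages_sharing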
 
