How to enable 1G hugepages?

huggy3233

New Member
May 29, 2023
I've been trying to enable 1G hugepages on my Proxmox server (R7 5700X, 48GB RAM, Proxmox 7.4, kernel 6.2.11-2-pve) but haven't been able to get it to work. I am aiming to allocate sixteen 1 GiB hugepages to a VM, for a total of 16 GiB. So far I have added hugepagesz=1G hugepages=16 default_hugepagesz=1G to my GRUB boot parameters and added hugepages: 1024 to my VM's config. When running hugeadm --explain it appears that the pages have been created successfully. But when I run this command to check which processes are using hugepages, my VM still shows up as using 2M pages: grep -e AnonHugePages /proc/*/smaps | awk '{ if($2>4) print $0} ' | awk -F "/" '{print $0; system("ps -fp " $3)} '
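For reference, the relevant pieces of my setup look like this (the VMID and the memory line are just placeholders):

Code:
# /etc/default/grub -- hugepage parameters appended to the existing line, then update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet hugepagesz=1G hugepages=16 default_hugepagesz=1G"

# /etc/pve/qemu-server/<vmid>.conf (excerpt)
memory: 16384
hugepages: 1024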

What am I doing wrong here? Or is that check command not the proper way to check hugepage allocation?
 
But when I run this command to check which processes are using hugepages, my VM still shows up as using 2M pages: grep -e AnonHugePages /proc/*/smaps | awk '{ if($2>4) print $0} ' | awk -F "/" '{print $0; system("ps -fp " $3)} '
You grep for anonymous hugepages. If you configure hugepages explicitly, they are not anonymous anymore.
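A quick way to see the distinction is /proc/meminfo: explicitly configured hugepages are counted in the HugePages_* fields, while transparent ones only show up under AnonHugePages. For example:

Code:
grep -E 'HugePages_|Hugepagesize|AnonHugePages' /proc/meminfo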
 
What is the difference between anonymous and normal hugepages?
How do I check for normal hugepage usage?
 
What is the difference between anonymous and normal hugepages?
Normal hugepages are allocated manually, while anonymous (transparent) hugepages are allocated in the background and on demand. They are a bit unpredictable, and people often advise deactivating them completely. Once allocated, explicit hugepages cannot be used by normal processes, and only a fraction of all programs can use hugepages at all. One of them is KVM, so there may be an advantage to using them.
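If you want to go that route, a minimal sketch for checking and disabling transparent hugepages at runtime (making it permanent needs the transparent_hugepage=never boot parameter):

Code:
# The value in brackets is the active mode (always / madvise / never)
cat /sys/kernel/mm/transparent_hugepage/enabled
# Disable THP until the next reboot
echo never > /sys/kernel/mm/transparent_hugepage/enabled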

I personally work a lot with hugepages, though with databases rather than VMs. Hugepages can often only be allocated before memory is filled and fragmented, so you may need a reboot instead of an online/live hugepage increase.
How do I check for normal hugepage usage?
Normally via /proc/meminfo (general) or via smaps, similar to what you did:

Code:
 grep -B 11 -E 'KernelPageSize:\s+2048 kB' /proc/$PID/smaps | grep "^Size:" | awk 'BEGIN{sum=0}{sum+=$2}END{print sum/2048}'
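For 1 GiB pages the same idea should work with the 1 GiB kernel page size (1048576 kB); a sketch, with $PID being the VM's QEMU process:

Code:
grep -B 11 -E 'KernelPageSize:\s+1048576 kB' /proc/$PID/smaps | grep "^Size:" | awk 'BEGIN{sum=0}{sum+=$2}END{print sum/1048576}'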
 
  • Like
Reactions: gseeley
/proc/meminfo shows that I have 16 1 GiB hugepages, and in my VM's .conf file I have added hugepages: 1024.
 
That flag is enabled. I think the problem here is that I am unsure how to check whether hugepages are working properly.
 
There are no free hugepages, so I guess they are being used. If I restart the VM, will the hugepages still work?
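For reference, the relevant /proc/meminfo lines currently look roughly like this (Total and Free as described above, the other values illustrative):

Code:
HugePages_Total:      16
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB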
 
Changing the command to the correct sizes returns 16, which I assume is the correct result. Hugepages are also working after a restart. Thanks for your help.
 
Changing the command to the correct sizes returns 16, which I assume is the correct result. Hugepages are also working after a restart. Thanks for your help.
In my case, I configured a Proxmox server (v8.2.7, two Xeon Gold 6152 CPUs, 384GB memory, 3.4TB SSD) with GRUB_CMDLINE_LINUX_DEFAULT="quiet default_hugepagesz=2M hugepages=165888", and on each VM I set hugepages: 2 and memory: 32768. When I restart Proxmox and bulk-start the 10 VMs for the first time, everything is fine. But when I stop and start any single VM, I get the error "start failed: hugepage allocation failed at /usr/share/perl5/PVE/QemuServer/Memory.pm line 663". /proc/meminfo shows HugePages_Free: 18432, which is still enough for 1 VM. I don't know why. Do you have any idea?
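For what it's worth, the pool is split across both sockets, so the per-NUMA-node free counts can be compared with the global one when the start fails (an uneven spread across the nodes is only a guess at the cause):

Code:
grep HugePages_Free /proc/meminfo
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages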
 