How to enable 1G hugepages?

I did exactly what is described above, with 8 hugepages of 1 GB each.

Code:
cat /proc/cmdline
initrd=\EFI\proxmox\6.17.4-2-pve\initrd.img-6.17.4-2-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt transparent_hugepage=never default_hugepagesz=1G hugepagesz=1G hugepages=8
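
On this ZFS/systemd-boot install the parameters are kept in /etc/kernel/cmdline (everything on a single line), roughly like this:

Code:
cat /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt transparent_hugepage=never default_hugepagesz=1G hugepagesz=1G hugepages=8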

Code:
nano /etc/pve/qemu-server/xxx.conf

hugepages: 1024 # added line (1024 = 1 GiB hugepages for this VM)
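
(Setting it via qm instead of editing the file by hand should also work; VMID 100 is just a placeholder:)

Code:
qm set 100 --hugepages 1024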

Code:
proxmox-boot-tool refresh

then reboot

# Before running my VM

Code:
grep -i huge /proc/meminfo
AnonHugePages:         0 kB
ShmemHugePages:     8192 kB
FileHugePages:         0 kB
HugePages_Total:       8
HugePages_Free:        8
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:         8388608 kB

# After running my VM
Code:
grep -i huge /proc/meminfo
AnonHugePages:         0 kB
ShmemHugePages:     8192 kB
FileHugePages:         0 kB
HugePages_Total:       8
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:         8388608 kB

HugePages_Free: 8 => HugePages_Free: 0
It is working
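
You can also cross-check the 1 GB pool directly in sysfs (standard kernel paths, only present once a 1 GB pool is configured):

Code:
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages     # 8
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages   # 0 while the VM is running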
 
It is working
Glad to hear it.

Are you seeing any performance improvement in your use case? If not, using hugepages may not be the best thing to do. In my tests I saw roughly a 10% improvement in memory allocation performance in benchmarks, yet you pay for that improvement with a lot of inflexibility.
 
"..you pay with a lot of unflexability for this improvement."

i agree, then i haved removed it
 
In my case it wasn't working at all.

I used to have this in /etc/default/grub.d/hugepages.cfg:
Code:
GRUB_CMDLINE_LINUX="${GRUB_CMDLINE_LINUX} default_hugepagesz=2M hugepagesz=1G hugepages=64 transparent_hugepage=never"
GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT} default_hugepagesz=2M hugepagesz=1G hugepages=64 transparent_hugepage=never"

Confirmed in /proc/cmdline:
Code:
BOOT_IMAGE=/vmlinuz-6.17.9-1-pve root=ZFS=/ROOT/debian ro amdgpu.ppfeaturemask=0xffffbfff systemd.unified_cgroup_hierarchy=1 initcall_blacklist=sysfb_init default_hugepagesz=2M hugepagesz=1G hugepages=64 transparent_hugepage=never iommu=pt amd_iommu=on pcie_port_pm=off root=ZFS=rpool/ROOT/debian zfs_force=1 boot=zfs amdgpu.ppfeaturemask=0xffffbfff systemd.unified_cgroup_hierarchy=1 initcall_blacklist=sysfb_init default_hugepagesz=2M hugepagesz=1G hugepages=64 transparent_hugepage=never iommu=pt amd_iommu=on pcie_port_pm=off root=ZFS=rpool/ROOT/debian zfs_force=1 boot=zfs

However, grep -i huge /proc/meminfo showed:
Code:
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:        67108864 kB

So it seems that we need BOTH default_hugepagesz=1G AND hugepagesz=1G, right?

Funnily enough, it seemed to be working "well enough", because VM startup times (for VMs with lots of RAM) improved greatly, yet apparently the hugepages were not actually being used. Makes me wonder :rolleyes:.

But if we set default_hugepagesz=1G, won't that basically mean that ALL VMs always get memory allocated in multiples of 1 GB (or 1 GiB), even if a VM only uses e.g. 512 MB or 1.5 GiB?

Or will that "only" occur until all hugepages are exhausted (i.e. up to default_hugepagesz * hugepages), after which the "default" of 2 MB is used?

So on a 128 GB RAM system with default_hugepagesz=1G hugepagesz=1G hugepages=64, will "only" the first 64 GB of RAM be used as hugepages, while the rest can still be allocated in smaller-than-1-GB chunks?
 
So it seems that we need BOTH default_hugepagesz=1G AND hugepagesz=1G, right?
You need to use either 2 MB pages or 1 GB pages, not mix them like you're trying to do.
Hugepages will only be used by VMs that you have configured to use them; everything else will keep using standard 4k pages.
Hugepage allocation also needs to take NUMA into account: there is a hugepage pool per NUMA node, so make sure you account for that. You can preallocate across NUMA nodes; read the Linux kernel docs for how to do it correctly.
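
A minimal sketch of a consistent 1 GB setup with per-node preallocation (the sysfs paths are the standard kernel interface; node numbers and page counts are just examples):

Code:
# kernel command line: a single page size, which is also the default
default_hugepagesz=1G hugepagesz=1G hugepages=64

# check how the boot-time pool was spread across NUMA nodes
cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages

# redistribute at runtime, e.g. 32 pages per node (may fail if memory is fragmented)
echo 32 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
echo 32 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages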
 
You need to use either 2 MB pages or 1 GB pages, not mix them like you're trying to do.
Thanks for the explanation.

Yeah, I guess the variable names are just very confusing. Why have two settings if they have to be the same :) ?

I should have read the description on kernel.org, but I just thought it was something like: use 1 GB pages if requested, otherwise use 2 MB.

Obviously that wasn't the case.