Enable 1GB HugePages for VM Guest

tinfever

Member
Jun 30, 2019
Hello all,

I've spent the last couple of days trying to enable 1GB hugepages on one of my Proxmox nodes and I'm afraid I'm getting nowhere. The gist of my current roadblock is this:

Code:
kvm: -object memory-backend-file,id=ram-node0,size=65536M,mem-path=/run/hugepages/kvm/1048576kB,share=on,prealloc=yes: can't open backing store /run/hugepages/kvm/1048576kB for guest RAM: No such file or directory

It seems like the 1GB hugepages aren't being mounted automatically for some reason. I've tried manually creating the /run/hugepages/kvm mount point and then mounting it myself with a command like 'mount hugetlbfs-kvm /run/hugepages/kvm -t hugetlbfs -o rw,relatime,gid=130,mode=775,pagesize=1G', but that really doesn't feel like the proper way to do it. With that in place I was able to start the VM, but then the Windows guest started bootlooping. That could be unrelated, though.
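For reference, here is roughly what that manual workaround looks like as a sketch (the 1048576kB subdirectory is the path QEMU's error message asks for, and gid=130 is the kvm group's GID on my node, so check yours with 'getent group kvm'):

```shell
# Create the mount point QEMU's error message is asking for
mkdir -p /run/hugepages/kvm/1048576kB

# Mount a hugetlbfs instance with a 1G page size
# (gid=130 assumes the 'kvm' group; verify with: getent group kvm)
mount -t hugetlbfs -o rw,relatime,gid=130,mode=775,pagesize=1G \
      hugetlbfs-kvm /run/hugepages/kvm/1048576kB
```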

Here is my grub configuration:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on hugepagesz=1G default_hugepagesz=1G"
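In case it matters, I am regenerating the grub config and rebooting after each change, roughly like this:

```shell
# Apply the new kernel command line (Debian/Proxmox)
update-grub
reboot

# After reboot, confirm the parameters actually took effect
cat /proc/cmdline
grep -i huge /proc/meminfo
```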

and here is the VM configuration:

Code:
agent: 1
#args: -device 'vfio-pci,host=01:00.0,multifunction=on'
balloon: 0
bios: ovmf
bootdisk: scsi0
cores: 12
hugepages: 1024
cpu: host,flags=+pdpe1gb
efidisk0: local-zfs:vm-100-disk-1,size=128K
ide0: local:iso/virtio-win-0.1.171.iso,media=cdrom,size=363020K
ide2: local:iso/en_windows_10_enterprise_ltsc_2019_x64_dvd_5795bb03.iso,media=cdrom,size=4228938K
machine: q35
memory: 131072
name: [redacted]
net0: virtio=[redacted],bridge=vmbr0,firewall=1
numa: 1
ostype: win10
scsi0: local-zfs:vm-100-disk-0,size=128G
scsihw: virtio-scsi-pci
smbios1: uuid=[redacted]
sockets: 2
vmgenid: [redacted]

I've also tried adding 'hugepages=176' to the grub command line to pre-allocate 176 of the 1GB hugepages at boot, but that still didn't create the /run/hugepages/kvm mount point.
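For reference, the same pool can also be allocated at runtime through sysfs (a sketch; this can fail or come up short if memory is already fragmented, which is why boot-time allocation via GRUB is usually preferred for 1G pages):

```shell
# Ask the kernel for 176 x 1G hugepages at runtime
echo 176 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

# See how many were actually allocated
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
```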

This thread sounds like almost exactly what I'm dealing with, but the outcome there was that 1GB hugepages suddenly started working for no apparent reason, which doesn't really help: https://forum.proxmox.com/threads/vm-start-timeout-pci-pass-through-related.28876

I believe I've gotten 2M hugepages working without much issue, and even if there isn't much benefit to 1G over 2M hugepages... now it's a challenge!
 

tinfever

Now this is interesting... if I change the grub config to this, the VM starts working just fine:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on hugepagesz=1G default_hugepagesz=2M"

hugeadm --explain shows that the VM is properly using the 1GB hugepages:

Code:
Mount Point                  Options
/dev/hugepages               rw,relatime,pagesize=2M
/run/hugepages/kvm/2048kB    rw,relatime,pagesize=2M
/run/hugepages/kvm/1048576kB rw,relatime,pagesize=1024M

Huge page pools:
      Size  Minimum  Current  Maximum  Default
   2097152        0        0        0        *
1073741824        0      128        0

As soon as I change the grub config back to this, the 1GB hugepages stop working and the "can't open backing store /run/hugepages/kvm/1048576kB" error returns:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on hugepagesz=1G default_hugepagesz=1G"


Code:
Mount Point          Options
/dev/hugepages       rw,relatime,pagesize=1024M

Huge page pools:
      Size  Minimum  Current  Maximum  Default
1073741824        0        0        0        *

It seems like unless the grub config has default_hugepagesz set to 2M, the 1GB hugepages aren't mounted and can't be used. Also, even if you manually create the /run/hugepages/kvm mount point and mount it yourself, the VM will start but something is definitely still broken. When I manually mounted the hugepages, started the VM with 128GB of RAM (and thus 128 x 1GB hugepages), and watched "cat /proc/meminfo | grep -i huge", it looked like it was working at first:

Code:
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:     128
HugePages_Free:      114
HugePages_Rsvd:       50
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:        134217728 kB

but then this happens:

Code:
AnonHugePages:      6144 kB
ShmemHugePages:        0 kB
HugePages_Total:      64
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:       64
Hugepagesize:    1048576 kB
Hugetlb:        67108864 kB

Code:
numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 96707 MB
node 0 free: 72614 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 96765 MB
node 1 free: 51640 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10

It's almost like NUMA isn't being handled properly when the hugepages are manually mounted like this: only 64 x 1GB hugepages end up backing the VM, but the guest still thinks it has 128GB of RAM, so when Windows tries to start it just bootloops. Weird all around.
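If NUMA placement really is the problem, the per-node pools can in principle be set explicitly through sysfs (a sketch based on the two nodes in the numactl output above; I don't know whether Proxmox would respect a manual split like this):

```shell
# Reserve 64 x 1G pages on each NUMA node so both halves of the
# 128GB guest can be backed locally
echo 64 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
echo 64 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages

# Check what each node actually holds
grep . /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages
```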

This almost seems like a bug where the hugepages aren't set up automatically if you specify "hugepagesz=1G default_hugepagesz=1G", but they are set up just fine if you specify "hugepagesz=1G default_hugepagesz=2M". This could also be normal operation for all I know, though.

Either way, specifying "hugepagesz=1G default_hugepagesz=2M" in GRUB fixes the issue, and the VM correctly uses the 1GB hugepages at that point. Soooo... problem solved?
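For anyone hitting the same thing, a quick way to confirm the VM really landed on the 1G pool after it starts:

```shell
# HugePages_Free should drop by the VM's size (128 pages here)
grep -i huge /proc/meminfo

# hugeadm also shows the mounts and pool usage at a glance
hugeadm --explain
```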
 
