Hello all,
I've spent the last couple of days trying to enable 1GB hugepages on one of my Proxmox nodes and I'm afraid I'm getting nowhere. The gist of my current roadblock is this:
Code:
kvm: -object memory-backend-file,id=ram-node0,size=65536M,mem-path=/run/hugepages/kvm/1048576kB,share=on,prealloc=yes: can't open backing store /run/hugepages/kvm/1048576kB for guest RAM: No such file or directory
It seems that, for some reason, the 1GB hugepages aren't being mounted automatically. I've tried creating the /run/hugepages/kvm mount point by hand and then mounting it with a command like 'mount hugetlbfs-kvm /run/hugepages/kvm -t hugetlbfs -o rw,relatime,gid=130,mode=775,pagesize=1G', but that really doesn't feel like the proper way to do it. With that workaround the VM does start, but the Windows guest then boot-loops. That could be unrelated, though.
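In case it matters, the closest I could get to making that mount survive a reboot was an fstab entry mirroring the manual command above. The device name and options are just copied from my command, so this is a guess at what qemu-server expects, not a known-good config:

```
# /etc/fstab -- hypothetical entry, mirroring my manual mount;
# gid=130 is the kvm group on my node, yours may differ
hugetlbfs-kvm /run/hugepages/kvm hugetlbfs rw,relatime,gid=130,mode=775,pagesize=1G 0 0
```

(Since /run is a tmpfs that's recreated on every boot, I'm also not sure the mount point will exist early enough for this to work reliably.)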
Here is my grub configuration:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on hugepagesz=1G default_hugepagesz=1G"
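After editing that file I ran update-grub and rebooted, then checked whether the kernel actually reserved any 1G pages with the standard proc/sysfs interfaces:

```shell
# Overall hugepage counters (Hugepagesize shows the default page size)
grep Huge /proc/meminfo

# The 1G pool specifically; the directory only exists if the CPU/kernel
# support 1G pages, so don't fail hard if it's missing
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages 2>/dev/null \
  || echo "no 1G hugepage pool present"
```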
and here is the VM configuration:
Code:
agent: 1
#args: -device 'vfio-pci,host=01:00.0,multifunction=on'
balloon: 0
bios: ovmf
bootdisk: scsi0
cores: 12
hugepages: 1024
cpu: host,flags=+pdpe1gb
efidisk0: local-zfs:vm-100-disk-1,size=128K
ide0: local:iso/virtio-win-0.1.171.iso,media=cdrom,size=363020K
ide2: local:iso/en_windows_10_enterprise_ltsc_2019_x64_dvd_5795bb03.iso,media=cdrom,size=4228938K
machine: q35
memory: 131072
name: [redacted]
net0: virtio=[redacted],bridge=vmbr0,firewall=1
numa: 1
ostype: win10
scsi0: local-zfs:vm-100-disk-0,size=128G
scsihw: virtio-scsi-pci
smbios1: uuid=[redacted]
sockets: 2
vmgenid: [redacted]
I've also tried adding 'hugepages=176' to the grub command line so that 176 of the 1GB hugepages are reserved at boot, but that still didn't create the /run/hugepages/kvm mount point.
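For completeness, the full line with that attempt looked like this (176 is just the number I picked to cover the 128G VM with some headroom):

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on hugepagesz=1G default_hugepagesz=1G hugepages=176"
```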
This thread sounds almost exactly like what I'm dealing with, but it ended with 1GB hugepages suddenly starting to work for no apparent reason, which doesn't really help: https://forum.proxmox.com/threads/vm-start-timeout-pci-pass-through-related.28876
I believe I've been able to get 2M hugepages working without much issue, and even if there wouldn't be much benefit from 1G over 2M hugepages...at this point it's a challenge!