memory hotplug for CentOS/Red Hat VMs

May 12, 2022
Hi,

Can somebody explain to me why memory hotplug for CentOS/Red Hat VMs works out of the box, even though their kernel doesn't set CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE?
Debian kernels also don't set CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE, and for Debian VMs I therefore have to use kernel parameter "memhp_default_state=online".
See https://pve.proxmox.com/wiki/Hotplug_(qemu_disk,nic,cpu,memory)
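For anyone comparing guests: this is roughly what I check inside a VM (paths assume a standard sysfs layout; memory32 is just an example block number, yours will differ):

    # is the config option set in the running kernel?
    grep CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE /boot/config-$(uname -r)

    # default onlining policy for newly hotplugged memory blocks
    cat /sys/devices/system/memory/auto_online_blocks

    # state of an individual memory block; an offline block can be
    # onlined manually by writing "online" to its state file (as root)
    cat /sys/devices/system/memory/memory32/state
    echo online > /sys/devices/system/memory/memory32/state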

I finally want to understand this memory hotplug behaviour in Proxmox (coming from oVirt, where this was never a topic at all...). Is this because of the famous Red Hat frankenkernel?

thx
Matthias
 
My question is strictly speaking not about PVE, but about QEMU guest behaviour, especially Red Hat guest behaviour. If this doesn't belong here, please excuse me. Still, Proxmox is the first place I've had to deal with this: with oVirt (also a QEMU management solution) I never had to think about memory hotplug, it just "works" (without any special configuration in the VM definition or the guest OS). The Proxmox docs tell me to use "memhp_default_state=online", and I'm trying to find out why I sometimes have to use it and sometimes don't. That's why I'm asking.
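For reference, this is roughly how I make the parameter persistent on a Debian guest (assuming a standard GRUB setup; the existing "quiet" entry is just an example):

    # /etc/default/grub -- append the parameter to the kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet memhp_default_state=online"

    # then regenerate the GRUB config and reboot the guest
    update-grub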
 
Ah, now I get it. Unfortunately, I cannot comment on that; I've never tried hotplugging before. There is usually a pending kernel update that requires a reboot anyway, so I just reconfigure the memory and wait for that.