Memory hotplug: 1G RAM in VM

Rico29

Hello,
I'm trying to get "memory hotplug" working correctly with Linux VMs.
I'm running the latest Proxmox (6.2-6) and added memhp_default_state=online to my GRUB configuration.
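I added it roughly like this, on a Debian-style setup (the "quiet" option just stands in for whatever options were already there, and the exact file may differ on other systems):

# grep CMDLINE /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet memhp_default_state=online"
# update-grub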

My VM .conf file looks like this:

# cat qemu-server/107.conf
balloon: 2048
bootdisk: virtio0
cores: 2
hotplug: disk,network,usb,memory,cpu
ide2: local:iso/debian-10.1.0-amd64-netinst.iso,media=cdrom
memory: 4096
name: cluster1
net0: virtio=3E:65:6E:F3:2C:7D,bridge=vmbr0,firewall=1
net1: virtio=6A:3F:65:89:5D:47,bridge=vmbr999,firewall=1
numa: 1
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=cc205bb2-7195-412a-8910-2cf9e4cde60d
sockets: 1
virtio0: local:107/vm-107-disk-0.qcow2,size=15G
vmgenid: 4e484f09-a1b7-4ae0-84cf-70aa4639a488

So I've set up my VM with a minimum of 2G of RAM, ballooning enabled (I have the same problem without ballooning), and memory hotplug enabled.

When starting the VM, the kvm process looks like this:

root 2126 1 65 16:01 ? 00:12:39 /usr/bin/kvm -id 107 -name cluster1 -chardev socket,id=qmp,path=/var/run/qemu-server/107.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control -pidfile /var/run/qemu-server/107.pid -daemonize -smbios type=1,uuid=cc205bb2-7195-412a-8910-2cf9e4cde60d -smp 1,sockets=1,cores=2,maxcpus=2 -device kvm64-x86_64-cpu,id=cpu2,socket-id=0,core-id=1,thread-id=0 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vnc unix:/var/run/qemu-server/107.vnc,password -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep -m size=1024,slots=255,maxmem=4194304M -object memory-backend-ram,id=ram-node0,size=1024M -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 -object memory-backend-ram,id=mem-dimm0,size=512M -device pc-dimm,id=dimm0,memdev=mem-dimm0,node=0 -object memory-backend-ram,id=mem-dimm1,size=512M -device pc-dimm,id=dimm1,memdev=mem-dimm1,node=0 -object memory-backend-ram,id=mem-dimm2,size=512M -device pc-dimm,id=dimm2,memdev=mem-dimm2,node=0 -object memory-backend-ram,id=mem-dimm3,size=512M -device pc-dimm,id=dimm3,memdev=mem-dimm3,node=0 -object memory-backend-ram,id=mem-dimm4,size=512M -device pc-dimm,id=dimm4,memdev=mem-dimm4,node=0 -object memory-backend-ram,id=mem-dimm5,size=512M -device pc-dimm,id=dimm5,memdev=mem-dimm5,node=0 -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device vmgenid,guid=4e484f09-a1b7-4ae0-84cf-70aa4639a488 -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device VGA,id=vga,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:e4e67abc4154 -drive file=/var/lib/vz/template/iso/debian-10.1.0-amd64-netinst.iso,if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=/var/lib/vz/images/107/vm-107-disk-0.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=none,aio=native,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap107i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=3E:65:6E:F3:2C:7D,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 -netdev type=tap,id=net1,ifname=tap107i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=6A:3F:65:89:5D:47,netdev=net1,bus=pci.0,addr=0x13,id=net1,bootindex=301 -machine type=pc+pve0
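Breaking down the memory arguments of that command line: the static part is -m size=1024, on top of which six hotpluggable 512M DIMMs are defined, so the total does match the configured memory: 4096:

1024M (static, ram-node0) + 6 x 512M (dimm0 to dimm5) = 4096M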

So the static memory is set to 1024M, while the minimum memory set in the VM configuration is 2048.

Is that a bug?
I see the same behavior without ballooning.
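A way to check from inside the guest whether the hotplugged DIMMs arrived but are still offline would be to inspect the memory blocks (standard Linux tooling, I didn't run these at the time):

# lsmem
# grep . /sys/devices/system/memory/memory*/state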
 
Did you add memhp_default_state=online to your guest's kernel command line, as described in the wiki? You can check with cat /proc/cmdline.
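It should look roughly like this (the kernel version and root device below are made up for illustration, only the last parameter matters):

# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.19.0-16-amd64 root=/dev/vda1 ro quiet memhp_default_state=online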
Hi Fabian,
I misunderstood: I had added it on the Proxmox host, not in the guest.
Adding it in the guest fixes my problem.
Sorry for the mistake!
 
Hi,
Thanks for pointing that out, @Fabian_E. I ran into the same issue and the OOM killer knocked a process out ;).
I ended up using the kernel command line parameter; I tried the udev rule as in the PVE wiki, but somehow it didn't have any effect.
Perhaps this could be included in the PVE admin guide, same as the already documented udev rule for CPU hotplug.
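For reference, the two rules I mean, copied here from memory (please double-check against the wiki; the file names are just conventional examples):

# cat /lib/udev/rules.d/80-hotplug-cpu.rules
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
# cat /lib/udev/rules.d/80-hotplug-memory.rules
SUBSYSTEM=="memory", ACTION=="add", TEST=="state", ATTR{state}=="offline", ATTR{state}="online"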
 
What is your distro?
 
The guest I'm running is Debian 11 with the latest updates.

So I ran some extra tests. My earlier conclusion about the udev rule for memory hotplug was premature.
The udev rule takes effect immediately after it is set: the guest memory responds to changes of the memory value in the VM config while the VM is running.
But to get the VM to start with the same amount of memory as configured, it appears necessary to reboot the VM after applying the udev rule.
So the info from the wiki is still correct.
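The test was, roughly (using VM 107 from this thread, 3072 being just an example value): change the memory of the running VM on the PVE host, then check inside the guest that the change is visible right away:

# qm set 107 --memory 3072
# free -m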
 
