Second VM not starting (failed 12: cannot allocate memory)

jaceqp

Well-Known Member
May 28, 2018
95
7
48
43
Hi there...
I'm running Proxmox VE on a pretty cheap server (4-core Xeon, 16GB ECC, 2x 1TB SATA on ZFS).
For now there is only one VM running, with 2 cores and 4GB RAM.
I've limited the ZFS ARC RAM usage (min. 2GB, max. 4GB).
Currently ZFS ARC usage shows about 2.5GB.
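For reference, a minimal sketch of how an ARC limit like that is usually set on PVE, assuming the 2GB min / 4GB max above (values are in bytes; adjust to your setup):

Code:
# /etc/modprobe.d/zfs.conf -- sketch: 2 GiB min, 4 GiB max
options zfs zfs_arc_min=2147483648
options zfs zfs_arc_max=4294967296

# apply immediately without a reboot (as root)
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_min
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

# with root on ZFS, refresh the initramfs so the limit also applies at boot
update-initramfs -u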

Thing is, I have to replace an old Win2003 VM (yes... 2003) with 2012R2, but for data migration etc. I need to run both simultaneously for a while.
So I've created a 2nd VM for 2012R2, also with 4GB RAM (that will do for the basic installation + setup).
The problem is that after creating it I can basically run it once. After a shutdown/poweroff I can no longer start it.
One thing (if it matters): the 2012R2 virtual disk is currently stored on NFS storage attached to PVE (through Datacenter -> Storage), just to spare PVE's SATA drives, and since performance is not that important on the new VM at the moment.

Anyway, here's the output of the VM start task:

Code:
ioctl(KVM_CREATE_VM) failed: 12 Cannot allocate memory
kvm: failed to initialize KVM: Cannot allocate memory
TASK ERROR: start failed: command '/usr/bin/kvm -id 101 -name Win2k12 -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/101.pid -daemonize -smbios 'type=1,uuid=47259c47-2532-45fc-84d3-3f1fdfc6c225' -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/101.vnc,x509,password -no-hpet -cpu 'kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,enforce' -m 4096 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'vmgenid,guid=b582f7d1-7489-4d8e-b241-e459a636cf3b' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -chardev 'socket,path=/var/run/qemu-server/101.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:68e28772b34' -drive 'if=none,id=drive-ide0,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200' -drive 'file=/mnt/pve/zyxelnas/template/iso/Win2k12r2_ess_pl.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=201' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/mnt/pve/zyxelnas/images/101/vm-101-disk-0.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=32:15:2D:B3:5E:A4,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -machine 'type=pc' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1

BTW: the node's total RAM usage never exceeds 8GB.
Now, if I reduce the new VM's RAM to 2GB it runs... So how do I determine the REAL available RAM for VMs?
PS. With both VMs running (4GB + 2GB) the node's RAM utilization currently shows 8.16GB of 15.70GB (~53%). So even if that doesn't include ZFS usage (or maybe it does) there should be some GBs left unused, eh?

Code:
root@t?????:~# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 Win2003              running    4096             464.73 2760
       101 Win2k12              running    2048              60.00 9086

root@?????:~# arcstat
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
10:16:48     0     0      0     0    0     0    0     0    0   889M  894M

PS. The PVE install is 'a bit' outdated since it's: pve-manager/5.3-5/97ae681d (running kernel: 4.15.18-9-pve)
 
Hi,
please run free -h or check the output of top in order to see the memory utilization of your host.
For the data migration you might consider creating a new empty disk on the new VM and cloning the content of the old one via dd (see the sketch below). That way you could avoid starting both at the same time.
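A rough sketch of that approach, assuming the old disk lives as a zvol on the local ZFS pool and a target disk of at least the same size is created on VM 101 first. The storage name and zvol paths below are placeholders, so check /dev/zvol/ for the actual names, and run this only with both VMs powered off:

Code:
# both VMs must be off before copying
qm shutdown 100
qm shutdown 101

# add an empty 465GB disk to the new VM, e.g. on a local ZFS storage (name is a placeholder)
qm set 101 --scsi1 local-zfs:465

# copy the old disk block-for-block onto the new one (paths are examples)
dd if=/dev/zvol/rpool/data/vm-100-disk-0 of=/dev/zvol/rpool/data/vm-101-disk-1 bs=1M status=progress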
PS. The PVE install is 'a bit' outdated since it's: pve-manager/5.3-5/97ae681d (running kernel: 4.15.18-9-pve)
You should definitely consider an upgrade to PVE 6.1; even 5.4 will be EOL soon.
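If you go that route, a rough outline of the preparation (the backup storage name is just the NFS storage from your log and assumes it allows backups; the pve5to6 checklist script only ships with the 5.4 packages, so update within 5.x first):

Code:
# bring the node to the latest 5.4 first
apt update && apt dist-upgrade

# back up both VMs before touching the release
vzdump 100 101 --storage zyxelnas --mode stop --compress lzo

# run the built-in 5-to-6 upgrade checklist, then follow the official upgrade guide
pve5to6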
 
Here's the output:
Code:
root@??????:~# free -h
              total        used        free      shared  buff/cache   available
Mem:            15G        8.9G        5.8G         54M        1.0G        6.5G
Swap:            0B          0B          0B

It actually doesn't look suspicious, BUT (forgot to mention) earlier I saw buff/cache usage of around 5-6GB. Not sure what made it nearly empty at the moment. Perhaps the VM installation process eats a decent chunk of RAM for its own caching.
The weird thing is that now both VMs are up and running (4GB + 3GB assigned to the VMs atm) and the cache is only at about 1GB...
Not sure what exactly affects buff/cache. Are there any tweaks/RAM limits for the cache itself? Is ZFS relevant to the node's cache usage, or does it only affect the ARC?

PS. Obviously I'd like to upgrade to 6.1, but I want to be well prepared (backups etc.), since I had some issues during a recent upgrade that caused long downtime to recover.
 
Caches might be used by a lot of things on your system, for example the filesystem's inode caches; you should not limit the memory for these.
You can check what is currently using your cache with the command slabtop -s c.
To free some space from the caches, you can run echo 3 > /proc/sys/vm/drop_caches.
To flush the buffers, you can run sync.
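Putting those together, roughly (as root; dropping caches is safe but will briefly cost some read performance while the caches warm up again):

Code:
# show which kernel slab caches are using the most memory
slabtop -s c

# flush dirty buffers to disk, then drop page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches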
 
