Hi everyone,
First off, I am well aware of the famous "Linux ate my RAM" page; I am posting here because my issue makes no sense to me even with that in mind.
I am also aware of the usual ZFS rule of thumb of 1GB of RAM per TB of storage, but again, it seems a bit odd in my situation.
I have 5 VMs on my server, of which only ONE is running (I named them below for a simpler explanation); all the others are off.
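For what it's worth, this is how I have been checking what the ARC actually uses versus its configured cap (a quick sketch using the standard OpenZFS-on-Linux paths, which I assume apply to Proxmox as well):
Code:
# Current ARC size ("size") and its configured ceiling ("c_max"), in GiB
awk '$1 == "size" || $1 == "c_max" {printf "%-6s %.1f GiB\n", $1, $3/1024/1024/1024}' /proc/spl/kstat/zfs/arcstats
# Overall memory picture on the host
free -h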
2 macOS VMs:
(off) Template macOS, 8GB RAM, 60GB disk
(on) Work macOS, 32GB RAM, 900GB disk
3 Windows VMs:
(off) Template Win10, 8GB RAM, 60GB disk
(off) Test Win10, 2GB RAM, 60GB disk
(off) Gaming Win10, 32GB RAM, 60GB disk
When trying to start the 2GB "Test" VM, I get the classic "Cannot allocate memory" error:
Code:
kvm: cannot set up guest memory 'pc.ram': Cannot allocate memory
The complete error is at the bottom of this post.
I have a total of 64GB of RAM in this system, with a RAIDZ1 pool of 3x 1TB hard drives. Most of the storage for my Mac VM is on an NVMe drive that I passed through, so there is not much on the node's pool so far: about 400GB allocated in total across the node.
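In case the numbers matter, this is how I would double-check the pool usage on the node (standard zfs/zpool commands; rpool is the pool name that appears in the task error below):
Code:
# Raw pool size and how much of it is allocated
zpool list rpool
# Per-dataset breakdown, including space reserved by zvols (the USEDREFRESERV column)
zfs list -o space -r rpool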
How is all my RAM being used, and why (or how) can't I recover some of it to start a tiny 2GB VM on the fly? Is there something I am missing here? My rough accounting:
3GB for ZFS storage (1GB per TB)
8GB for the hypervisor
32GB for the running VM
----------
That's 43GB at most, so I should have at least 10 to 15GB left.
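If it helps, this is roughly how I would try to reconcile that estimate with what the host actually reports (a sketch; I am assuming the VM process shows up under the name "kvm", as the command line in the task error suggests):
Code:
# What the kernel thinks is used/available overall
free -h
# How much of that is ZFS ARC
awk '$1 == "size" {printf "ARC: %.1f GiB\n", $3/1024/1024/1024}' /proc/spl/kstat/zfs/arcstats
# Resident memory (RSS, in KiB) of the running VM process(es)
ps -C kvm -o pid,rss,args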
Should I just throw in a spare 120GB consumer SSD and add it as an L2ARC/ZIL drive? From what I understand, I shouldn't really need one.
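For completeness, my understanding is that adding such a device would just be the following (the device name is a placeholder, and I have not run this):
Code:
# Add the SSD as an L2ARC (cache) device -- /dev/sdX is a placeholder
zpool add rpool cache /dev/sdX
# Or as a separate log device (SLOG, i.e. the "ZIL" case) instead
# zpool add rpool log /dev/sdX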
Or should I assign only very small amounts of RAM to the other VMs while they are off, and bump their RAM back up right before starting them when I need them? Is that why all the RAM is considered used in this situation? I might have completely overlooked that parameter, but I don't understand how I could run out of RAM, or why the system would consider it used.
I do understand the principle of overcommitting, though, so if this is normal behavior, fine by me: I can shrink the RAM of my unused VMs and bump it up before starting them whenever needed.
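If shrinking the idle VMs is the way to go, I assume it would be something like this with the Proxmox CLI (using the Test VM's ID 101 from the task error as an example):
Code:
# Lower the assigned RAM (and optionally the ballooning minimum) while the VM is off
qm set 101 --memory 2048 --balloon 1024
# Double-check the config, then start it
qm config 101 | grep -E 'memory|balloon'
qm start 101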
Sorry in advance if that's a stupid question.
Complete Task error below:
Code:
TASK ERROR: start failed: command '/usr/bin/kvm -id 101 -name testwin -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/101.pid -daemonize -smbios 'type=1,uuid=2c337003-cde9-4508-9326-38c660d9e99d' -drive 'if=pflash,unit=0,format=raw,readonly,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,file=/dev/zvol/rpool/data/vm-101-disk-0' -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/101.vnc,x509,password -no-hpet -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer' -m 2048 -device 'vmgenid,guid=c262f99d-01d7-4312-bdd2-0ab82ec0f654' -readconfig /usr/share/qemu-server/pve-q35.cfg -device 'usb-host,vendorid=0x1b1c,productid=0x0c19,id=usb0' -device 'usb-host,vendorid=0x1b1c,productid=0x0c0b,id=usb1' -device 'qxl-vga,id=vga,bus=pcie.0,addr=0x1' -chardev 'socket,path=/var/run/qemu-server/101.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -spice 'tls-port=61000,addr=127.0.0.1,tls-ciphers=HIGH,seamless-migration=on' -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev 'spicevmc,id=vdagent,name=vdagent' -device 'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:8b9e5279db' -drive 'file=/dev/zvol/rpool/data/vm-101-disk-1,if=none,id=drive-ide0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,rotation_rate=1,bootindex=100' -drive 'file=/rpool/iso/template/iso/Win10_1903_V1_EnglishInternational_x64.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'e1000,mac=FA:78:25:78:9F:B6,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -machine 'type=q35' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1