VM memory allocation unit size

chrone

Hi Proxmoxers,

How do I properly allocate a VM's memory? Should I use mebibyte (MiB) or megabyte (MB) units?

I often find that Proxmox allocates more memory to a VM than configured, and the host gets rebooted when it runs out of memory.

The host has 32GB and the VM memory is set to 28672MB, leaving a healthy 4GB of RAM for the host. I also limited the ZFS ARC size to 128MB. Often, when there is heavy IO on the VM, such as when it runs Microsoft SQL Server, which uses lots of VM RAM, the host gets rebooted.

Should I set the VM to 26GiB to stay under 28GB, due to the gibibyte to gigabyte unit conversion?
 
How do I properly allocate a VM's memory? Should I use mebibyte (MiB) or megabyte (MB) units?
Mebibyte

I often find that Proxmox allocates more memory to a VM than configured, and the host gets rebooted when it runs out of memory.
The configured amount specifies how much memory the VM can use, not how much is used on the host. In general, QEMU needs to allocate a little more, because it also has to keep things like the VM state, the state of the virtual hardware, etc.
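
For example, roughly how this looks in practice (VMID 100 here is just a hypothetical example; the memory value is interpreted in MiB):

# qm set 100 --memory 26624
(26624 MiB = 26 GiB; on the host, the kvm process will use slightly more than this because of the overhead described above)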

The host has 32GB and the VM memory is set to 28672MB, leaving a healthy 4GB of RAM for the host. I also limited the ZFS ARC size to 128MB. Often, when there is heavy IO on the VM, such as when it runs Microsoft SQL Server, which uses lots of VM RAM, the host gets rebooted.
If you use ZFS, 128MB is definitely too little; the recommendation is at least 4GB plus 1GB per TB of disk.
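
To set the limit explicitly, this is roughly how it is done on a Proxmox host with ZFS on Linux (the 4 GiB value below is only an example following the rule of thumb above; adjust it to your pool size):

# echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
# update-initramfs -u
(4294967296 bytes = 4 GiB; the update-initramfs step matters if the root filesystem is on ZFS, and the new limit applies after a reboot or after reloading the zfs module)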

the host gets rebooted.
What exactly happens? Can you post logs? And what is the output of 'pveversion -v'?
 

Thanks for the explanation; that's why setting 28GiB consumes more than 30GB of RAM. So I guess I have to allocate 26GiB to the VM so it won't use more than 28GB, leaving 4GB for the host (32GB total RAM).
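
For reference, the conversion works out roughly like this (plain arithmetic, nothing Proxmox-specific):

28 GiB = 28 x 1024^3 bytes = 30,064,771,072 bytes ≈ 30.06 GB
26 GiB = 26 x 1024^3 bytes = 27,917,287,424 bytes ≈ 27.92 GB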

I limited the max ZFS ARC size to 128MB, and when the VM used almost all of the 28GiB, the host, which has a total of 32GB, got rebooted. We often find that Proxmox with ZFS gets rebooted if it doesn't have enough RAM (below 768MB left for the host).

Decreasing the amount of memory allocated to the VM does stabilize the Proxmox host. Unfortunately, we haven't found the sweet spot for calculating how much memory we should leave for Proxmox due to the unit size difference.


The log and pveversion output are attached. The Windows VM was allocated 28GB, has 4 virtual drives on ZFS zvols, and was running MS SQL Server 2008 R2 and Windows Backup simultaneously when the host suddenly got rebooted. After we reduced the VM memory allocation from 28GB to 24GB, the host has been stable.
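
One way to see how much the VM really takes on the host (assuming the guest runs as the usual 'kvm' process on Proxmox) is something like:

# ps -C kvm -o rss,args
(rss is reported in KiB; comparing it with the configured memory value shows the QEMU overhead mentioned above)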
 

Attachments

What is the output of 'arcstat'?

Again, running ZFS with 128MB is too little, and you will have performance issues.
 

Hi dcsapak,

The arcstat output is as follows:

# arcstat.py
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz      c
20:00:06     0     0      0     0    0     0    0     0    0   133M   128M


Luckily, the hosts we set with zfs_arc_max=134217728 (128 MiB) have been running pretty stable for the last two years, from PVE 3.x to PVE 4.4.
 
