Hi there,
I'm looking for general advice on how to handle a VM with sporadic, very large memory usage. In my case it's an Ubuntu VM that needs a lot of memory for PIV calculations (or post-processing, or something similar) in Python.
The host has 256 GiB available, and - first problem - when I started with something like minimum 32 GiB and maximum 180 GiB, the calculation would fail (with some memory-related error) without ever getting close to the 180 GiB, according to the Proxmox GUI and to free -h, which didn't even show all memory as available or free - only up to 96 GiB or so. Why? Could this literally have been two giant chunks of ...?

At that point I read up on ballooning again and misunderstood it (I thought the host would offer the guest more memory when the guest's usage was below 80% - when it's actually the other way around: the host tells the balloon driver to take memory away from the guest when the host's memory usage goes above 80%). I tried filling the memory with stress-ng, and finally just set the minimum equal to the maximum memory - which worked, but I obviously don't want this one VM to block that much memory permanently.
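For concreteness, this is how I understand the two values map to the CLI - a sketch, assuming VM ID 100 (qm takes the values in MiB):

```
# "memory"  = the maximum the guest can ever see (here 180 GiB)
# "balloon" = the floor the balloon driver may shrink the guest down to (here 32 GiB)
qm set 100 --memory 184320 --balloon 32768

# What I ended up doing: minimum equal to maximum, i.e. effectively pinning it all
qm set 100 --memory 184320 --balloon 184320
```

The memory-filling test looked roughly like this (flags from memory, so double-check them):

```
# One worker allocating and holding ~150 GiB for two minutes
stress-ng --vm 1 --vm-bytes 150G --vm-keep --timeout 120s
```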
This is question 2: how is this handled properly? How do you allow multiple VMs temporary access to a lot of memory (overprovisioning) without the host killing them because it gets scared?
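To make the question concrete, the kind of setup I have in mind looks like this (VM 101 is hypothetical, just for illustration):

```
# Maximums overcommit the host, but the balloon minimums sum to well
# under physical RAM, so the host can squeeze both back down under pressure:
qm set 100 --memory 184320 --balloon 32768   # max 180 GiB, min 32 GiB
qm set 101 --memory 131072 --balloon 32768   # max 128 GiB, min 32 GiB (hypothetical VM)
# minimums:  64 GiB << 256 GiB physical
# maximums: 308 GiB -> overprovisioned; relies on the peaks not coinciding
```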
The scientist on the VM wanted even more, so we tried 230 GiB maximum and 180 GiB minimum memory. At 14:46 the VM got OOM-killed (I found it in the host's syslog); weirdly, this doesn't show up in the guest's memory graph - only in the host's.
[Screenshot: memory usage of the guest]
[Screenshot: memory usage of the host]
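In case it helps anyone reproduce the check: this is roughly how I found the kill on the host (the exact message wording varies between kernel versions):

```
# Search the host's kernel log for OOM killer activity
journalctl -k | grep -iE 'out of memory|oom-killer|killed process'

# Or without journald, with human-readable timestamps
dmesg -T | grep -i oom
```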
Question 3:
What percentage of RAM is safe to "allocate away" from the host? It doesn't run anything special at the moment (only Proxmox itself), so even 5% should be plenty for it. What are the best practices for this, and where can I find them? And why did nothing get killed earlier, when memory usage on the host was even higher?
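My current back-of-the-envelope budget, in case someone can correct it - the overhead numbers below are my assumptions, not measured values:

```
# Host: 256 GiB total. Rough budget (assumptions, not measurements):
#   ~2-4 GiB  Proxmox services (pveproxy, pvedaemon, pvestatd, ...)
#   ~1-2 GiB  QEMU overhead per VM on top of guest RAM (page tables, virtio buffers)
#   + ZFS ARC, if ZFS is in use (its default cap can be a large fraction of RAM)
# Without ZFS that would leave roughly 250 GiB for guests.

# What the host actually has left right now:
free -h
```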
PS: I'm also urging the scientist to check whether his calculation could be refactored, split up, or use storage for caching or chunks. I'm definitely going to look into that as well, but the questions above are something I want to understand regardless.