Reserving HW resources for Hypervisor (physical machine) itself

mipsH

Renowned Member
Hello.

I have a question related to the Proxmox VE nodes (hardware/bare metal).
Is there any possibility to reserve some hardware resources for the Hypervisor itself, like:
  • Reserved amount of RAM
  • Reserved CPU cores
  • Reserved disk space (if needed)

Similar to what is possible on OpenStack:
https://access.redhat.com/documenta...e_compute_service_for_instance_creation/index

I am aware that we have some recommendations like:
https://pve.proxmox.com/wiki/System_Requirements

--> for example, there are recommendations like:
"
Memory: Minimum 2 GB for the OS and Proxmox VE services, plus designated memory for guests.
For Ceph and ZFS, additional memory is required; approximately 1GB of memory for every TB of used storage.

"

But there is no limit/reservation that you can configure on each specific Proxmox VE (physical) server.

For example, in an environment with three Proxmox VE servers like:
  1. Server A: 32 CPU cores, 128 GB RAM.
  2. Server B: 64 CPU cores, 64 GB RAM.
  3. Server C: 72 CPU cores, 256 GB RAM.

I want to reserve for the system:
  1. Server A: reservation for the system/hypervisor: 2 CPU cores, 8 GB RAM.
  2. Server B: reservation for the system/hypervisor: 4 CPU cores, 4 GB RAM.
  3. Server C: reservation for the system/hypervisor: 4 CPU cores, 16 GB RAM.

Is this possible?

If not, it would be very helpful to see this in the future, since it would be much easier to work with a system that reserves resources for itself which cannot be shared/used by the guests (VMs).
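What I am after would be roughly equivalent to something like this (just a sketch, not something I have verified; it assumes a recent Proxmox VE with cgroup v2 where the VM processes run under qemu.slice, and uses the Server A numbers from above):

Code:
# Cap all VMs together at 120G, leaving ~8G of RAM for the host
systemctl set-property qemu.slice MemoryMax=120G
# Keep VM processes off cores 0-1 so the host always has them
systemctl set-property qemu.slice AllowedCPUs=2-31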



BR,
Hrvoje.
 
For example, OpenStack has on every Compute node (the one with the Hypervisor) a config file for Hypervisor/Host-specific configuration/tuning.

Specifically, this file is named: /etc/nova/nova.conf

And it has (among other things) three options like the following, with which you can reserve system resources for the Hypervisor/Host itself:

"
#
# Amount of disk resources in MB to make them always available to host. The
# disk usage gets reported back to the scheduler from nova-compute running
# on the compute nodes. To prevent the disk resources from being considered
# as available, this option can be used to reserve disk space for that host. For
# more information, refer to the documentation. (integer value)
# Minimum value: 0
reserved_host_disk_mb=XX

#
# Amount of memory in MB to reserve for the host so that it is always available
# to host processes. The host resources usage is reported back to the scheduler
# continuously from nova-compute running on the compute node. To prevent the
# host memory from being considered as available, this option is used to reserve
# memory for the host. For more information, refer to the documentation.
# (integer value)
# Minimum value: 0
#reserved_host_memory_mb=8192
reserved_host_memory_mb=YY

#
# Number of host CPUs to reserve for host processes. For more information, refer
# to the documentation. (integer value)
# Minimum value: 0
reserved_host_cpus=ZZ

"
--> The full documentation for the nova.conf file can be found at:
https://docs.openstack.org/nova/latest/configuration/config.html
 
I really don't see the point in doing such a thing. Normally it's bad if your host swaps constantly (in and out), so avoid that at all costs. Until then, everything is stored in memory and therefore fast. Reserving resources goes against everything virtualization stands for. Every KVM VM runs as a Linux process, as does anything that runs in an LX(C) container, and they all share everything the host has.
 
I really don't see the point in doing such a thing. Normally it's bad if your host swaps constantly (in and out), so avoid that at all costs. Until then, everything is stored in memory and therefore fast. Reserving resources goes against everything virtualization stands for. Every KVM VM runs as a Linux process, as does anything that runs in an LX(C) container, and they all share everything the host has.
Hello.

It is possible on OpenStack and VMware (ESXi), primarily for the system itself, or even for specific Hosts/Hypervisors/physical machines on which you do not want to overprovision VMs/LXCs.
In practice, you sometimes need to host guests (VMs or LXC containers) that must not be overprovisioned, because overprovisioning can "kill" the host if all of the VMs/LXCs have high utilization all the time or during some period. In such a high-utilization period you will have disk I/O problems, packet drops, problems in communication between VMs/LXCs, and similar issues.
One of the cases is using VMs/LXCs in mission-critical, telco, or HA-system scenarios that must run without the disruptions caused by overprovisioning.
That is why it is important, as in the OpenStack example, to have the ability to set a resource reservation for every specific Host/physical machine (the Hypervisor itself), because it needs some resources of its own for handling the VMs/LXCs, Disk I/O, network, etc.
In my field of work, this is more important than having a few more VMs/LXCs per host.

So, performance/stability (quality) over quantity, let's say.

I believe that others also have such a need/environment.
I did not test on OpenStack, for example, what happens if only a few Hosts/Hypervisors/physical machines have a resource reservation and the others don't; I have only worked with setups where all of them have a resource reservation defined. But there could probably be some algorithm or mechanism that is aware of that, or let's say a finer grain of HA behaviour depending on the importance of the VMs/LXCs (a rough Proxmox-side sketch of this follows the list):
  • The most important ones go to the Hosts/Hypervisors/physical machines with a specified resource reservation
  • The less important ones go to the Hosts/Hypervisors/physical machines without a specified resource reservation
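On the Proxmox side, the placement half of this can already be approximated with HA groups. A minimal sketch, assuming two nodes named pveA and pveB are the ones with "reserved" headroom (the group name, node names and VM ID are made up for the example):

Code:
# HA group restricted to the nodes that have resources held back for the host
ha-manager groupadd reserved-nodes --nodes pveA,pveB --restricted 1
# Pin an important VM to that group
ha-manager add vm:100 --group reserved-nodes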
 
Disk I/O, network, etc.
Yes, I totally agree with you here, only those cannot be limited so easily.

Reserving memory and CPUs, as the OP wanted, does not make sense. Processes are scheduled according to their priority, so that is covered, and memory is fine as long as it does not swap. If you have heavy swapping (a lot in and out, constantly), there is nothing you can do to speed up your system besides solving the problems that lead to the swapping.
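To illustrate the point: each running VM is an ordinary process on the host, which you can inspect and re-prioritize like any other (the grep pattern assumes the PVE-style kvm process name, and the PID is a placeholder):

Code:
# Every running VM shows up as an ordinary "kvm" process on a PVE host
ps -eo pid,ni,pcpu,pmem,comm | grep -w kvm
# Adjust a VM's scheduling priority like any other process
renice +5 -p <PID-of-the-VM>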
 
Hello.

When changing the /etc/nova/nova.conf file on all Compute nodes:
reserved_host_memory_mb=8192 # This is 8GB
reserved_host_disk_mb=100000 # This is ~100GB
reserved_host_cpus=2

... and restarting the nova service.
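The exact restart command depends on the distribution; on a RHEL-based deployment it is typically the following (on Ubuntu/Debian-based installs the unit is usually called nova-compute instead):

Code:
systemctl restart openstack-nova-compute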

So when we look at OpenStack after the above reservation is done on all Hosts/Hypervisors/physical machines:
  • 2 CPU cores
  • 8GB of RAM
  • 100 GB of disk space
In the Admin GUI we can see the following interesting thing:
  • The number of CPU cores available for all VMs is the total minus 2 per machine (n-2).
  • The amount of RAM available for all VMs is the total minus 8GB per machine (n-8GB).
  • The amount of disk space available for all VMs is the total minus 100GB per machine (n-100GB).
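For example, for a host like Server C from the first post (72 CPU cores, 256 GB RAM), the GUI would report 72 - 2 = 70 cores and 256 - 8 = 248 GB RAM as available for VMs.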
Please take a look at the picture below.
In the red boxes there are Hosts/Hypervisors/physical machines without any VM on them, and OpenStack is showing 2 CPUs, 8GB RAM and 96GB disk space as already used (even though nothing is actually used).
So it simply shows a smaller amount of resources available for your VMs, and that difference is exactly the amount of the reservation that we made before. Really KISS, and a great idea for how to reserve the HW resources for the Hosts/Hypervisors/physical machines themselves.

[Screenshot: OpenStack Admin GUI hypervisor list with the reserved 2 CPUs / 8GB RAM / 96GB disk shown as used on empty hosts]
 
Hello.

One easy solution for isolating CPU cores from the task/process scheduler is a kernel option, which must be defined in GRUB(2).
This option is: isolcpus

For example, if you want to reserve CPU cores 0 and 1, just add this option to the kernel command line in the GRUB config (for the kernel that you want):

Code:
isolcpus=0-1

Or in the file /etc/default/grub,
in a line like:
GRUB_CMDLINE_LINUX="... ... ... isolcpus=0-1"
<-- do not forget to regenerate the GRUB(2) config afterwards (grub2-mkconfig, or update-grub on Debian-based systems like Proxmox VE).
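For example (update-grub is the Debian/Proxmox wrapper that ends up calling grub-mkconfig with the right output path):

Code:
# Debian-based systems such as Proxmox VE
update-grub
# RHEL-based systems
grub2-mkconfig -o /boot/grub2/grub.cfg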

Or, for all kernels that you have on the system, with the grubby tool (available on RHEL-based distributions):
Code:
grubby --update-kernel=ALL --args=isolcpus=0-1

Then, after a reboot, you can check whether this option is in use:
Code:
cat /proc/cmdline
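You should see isolcpus=0-1 in the output. On reasonably recent kernels the isolated set is also exposed directly via sysfs:

Code:
cat /sys/devices/system/cpu/isolated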

Info on isolcpus.
 
