Just to close and round up this issue:
The "Basic Scheduler" takes only the number of running resources into account when choosing a suitable migration target. On our very differently sized hosts, that led to problems.
The newer "Static-Load Scheduler" (see...
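For anyone landing here later, a minimal sketch of how the static scheduling mode can be selected, assuming a Proxmox VE version (7.3 or newer) where the CRS option is available:

```
# /etc/pve/datacenter.cfg
# switch HA Cluster Resource Scheduling from the default "basic" to "static"
crs: ha=static
```

In static mode the scheduler weighs the configured CPU and memory of the guests instead of just counting them, which is exactly what heterogeneous clusters like ours need.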
We have a 5-host hyperconverged Proxmox 7.1 cluster with Ceph as VM storage (5 SSD OSDs per host). My understanding is that Ceph I/O depends heavily on available CPU power.
Would it make sense to prioritize the OSD processes over any VM process, so that (near-)full CPU power stays available for Ceph even under...
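Not an authoritative answer, but one way to experiment with this is a systemd drop-in that raises the CPU weight of the OSD units. Everything below (the drop-in path and the value 400) is an assumption to adapt; it presumes systemd-managed `ceph-osd@.service` units on a cgroup-v2 host:

```
# /etc/systemd/system/ceph-osd@.service.d/cpu-priority.conf  (hypothetical drop-in)
[Service]
# default CPUWeight is 100; a higher weight gives the OSD daemons
# proportionally more CPU time when VMs and OSDs compete for the same cores
CPUWeight=400
```

After a `systemctl daemon-reload`, restart the OSDs one at a time so the cluster stays healthy while the new weight takes effect.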
If you have the physical space to add the new SSD _before_ removing the old one, you could even add a new OSD and set the affected OSD out (not down!), then wait for Ceph to rebalance. That saves you the second rebalancing while maintaining full redundancy during the operation.
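For the record, a hedged sketch of that add-then-drain sequence (the device name and OSD id 12 are placeholders, and the exact steps should be checked against the Ceph docs for your release):

```
# 1) create the new OSD on the new SSD
ceph-volume lvm create --data /dev/sdX
# 2) mark the old OSD "out" so data migrates off it while it keeps serving
ceph osd out osd.12
# 3) wait until rebalancing finishes and the cluster reports HEALTH_OK
ceph -s
# 4) only then stop and remove the old OSD
systemctl stop ceph-osd@12
ceph osd purge 12 --yes-i-really-mean-it
```

Doing it in this order means the cluster never drops below full redundancy, at the cost of temporarily needing space for both drives.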
Indeed, NodeB doesn't have more VMs running than the other nodes. But after the migration, NodeB definitely has more VMs running... My practical problem here is that the nodes are sized very differently (in terms of sockets/cores and RAM), so the number of running guests is not a very fitting...
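To illustrate why guest count is a poor metric on unevenly sized nodes, here is a small hypothetical sketch (not actual Proxmox code, and the node sizes are made up) contrasting count-based with capacity-weighted target selection:

```python
# Hypothetical illustration: pick a migration target by running-guest
# count vs. by relative memory utilisation.
nodes = {
    # name: (running_guests, used_mem_gb, total_mem_gb)
    "nodeB": (4, 60, 64),    # small node, nearly full
    "nodeC": (6, 100, 512),  # big node, mostly empty
}

def pick_by_count(nodes):
    # "basic" idea: the node with the fewest running resources wins
    return min(nodes, key=lambda n: nodes[n][0])

def pick_by_load(nodes):
    # "static" idea: the node with the lowest relative memory load wins
    return min(nodes, key=lambda n: nodes[n][1] / nodes[n][2])

print(pick_by_count(nodes))  # nodeB, even though it is almost full
print(pick_by_load(nodes))   # nodeC, the sensible target
```

The count-based pick sends yet another VM to the nearly full small node, which is exactly the behaviour described above.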
Hi Dominic, thanks for looking into this issue.
The simulator behaves as expected, and I am aware of the documentation you mentioned. However, when I hit "reboot" on NodeA, all 6 VMs currently running on that node are migrated to NodeB only.
Wouldn't you expect that at least one VM will be...
I am evaluating a 5-node cluster (nodeA to nodeE) where every VM is bound to run on a specific node via a dedicated HA group I have for each node. Example:
group: prefer-nodeA
    nodes nodeA:1
    nofailback 0
    restricted 0
My datacenter.cfg contains ha: shutdown_policy=migrate...