HA non-strict negative resource affinity

cliffpercent

Mar 1, 2021
HA node affinity offers a strict option to specify whether a resource requires the condition (strict) or only prefers it (non-strict). The new HA resource affinity rules, especially negative ones, should offer it as well.

With an example load of 3 VMs running the same application, there is a preference (not a requirement!) for them to be distributed across 3 nodes (in a 3-node cluster). With the strict policy (the only one implemented), placing a node in maintenance mode still leaves the example VM on it. Node changes/reboots require disabling and re-enabling the affinity rule, running 4 nodes, or manually migrating the resource. I found that manually migrating does nothing (the HA task reports OK, but the actual migration does not happen) until the affinity rule is disabled.
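To illustrate the distinction I mean, here is a minimal Python sketch of how strict vs. non-strict negative affinity could differ when choosing a target node. This is not Proxmox's actual HA scheduler; the function and names are hypothetical, just to show the selection idea:

```python
# Hypothetical sketch of strict vs. non-strict negative affinity --
# not Proxmox's actual HA scheduler, only the selection idea.

def pick_node(candidates, peers_on, strict):
    """Choose a target node for a resource with negative affinity.

    candidates: nodes currently available (e.g. not in maintenance).
    peers_on:   set of nodes already hosting affinity peers.
    strict:     True  -> nodes with peers are forbidden entirely;
                False -> they are only deprioritized, so the resource
                         can still be evacuated for maintenance.
    """
    free = [n for n in candidates if n not in peers_on]
    if free:
        return free[0]
    if strict:
        return None  # strict: no valid placement, the resource stays put
    return candidates[0] if candidates else None  # non-strict: co-locate anyway

# 3-node cluster, node3 enters maintenance; node1 and node2 already
# host the other two VMs of the group:
print(pick_node(["node1", "node2"], {"node1", "node2"}, strict=True))   # None
print(pick_node(["node1", "node2"], {"node1", "node2"}, strict=False))  # node1
```

With the strict rule, the VM on the maintenance node has nowhere to go; the non-strict variant would accept temporary co-location instead.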

An additional feature request: distribute by resource count within a group. With 6 VMs on 3 nodes, that would normally mean 2 VMs per node.
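As a rough sketch of what such count-based spreading could look like (a hypothetical helper, not an existing PVE feature):

```python
# Hypothetical sketch of count-based spreading within a group: assign
# resources round-robin so per-node counts differ by at most one.

def spread(resources, nodes):
    """Map each resource to a node, balancing counts across nodes."""
    return {res: nodes[i % len(nodes)] for i, res in enumerate(resources)}

placement = spread([f"vm:{100 + i}" for i in range(6)],
                   ["node1", "node2", "node3"])
# -> 2 VMs per node: vm:100 and vm:103 on node1, vm:101 and vm:104 on node2, ...
```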

Thank you for the cattle project; I've been able to reduce my custom tooling scripts for everyday tasks one step at a time.
 
Hi!

The new HA resource affinity rules, especially negative ones, should offer it as well.
Thanks for the input! This was and still is planned for both types of resource affinity rules; you can track the status here [0]. It's always helpful to have more views on this, so feel free to add your perspective there as a comment.

Node changes/reboots require disabling and re-enabling the affinity rule, running 4 nodes, or manually migrating the resource. I found that manually migrating does nothing (the HA task reports OK, but the actual migration does not happen) until the affinity rule is disabled.
Hm, can you send an excerpt of the relevant HA rules and the output of the migration? At least for resource affinity rules, it should show users whether any other HA resources are also migrated to the same node (positive resource affinity rule) or whether the migration is not possible because of an HA resource on the target node (negative affinity rule).

An additional feature request: distribute by resource count within a group. With 6 VMs on 3 nodes, that would normally mean 2 VMs per node.
It's best to create a Bugzilla entry for this [1] so it doesn't get lost, as Bugzilla is easily searchable by us and by users ;)

[0] https://bugzilla.proxmox.com/show_bug.cgi?id=6809
[1] https://bugzilla.proxmox.com/enter_bug.cgi?product=pve&component=HA