How to achieve distribution of a cohort of vms across more than one host?

wolfspyre

howdy all!

I've dug thru the forums here, but haven't found much that speaks directly to my challenge.

I have a 6 node proxmox cluster.
4x dell r730xd
2x dell r720xd (in the process of moving to 730's... but been slow in migration)
I have a few pools of HA VMs ...
ie
a pair of webservers...
a triad of ci runners...
a pair of VMs running HA mysql
a pair of VMs running ha postgres
a pair of VMs running ha redis

what I'm trying to determine is how to tell proxmox that I'd like it to avoid scheduling multiple VMs from the same pool on the same physical host, to achieve wider resource distribution.

I understand I can associate a weight with a physical host to bias a pool towards, or away from, that specific hardware...

but I don't see a mechanism to express how densely or loosely to leverage the hardware available to the pool.

there are certainly scenarios where binpacking VMs into a tighter hardware footprint is preferable

likewise there are scenarios where one would prefer the VMs be inclined to live on different hosts, but not be precluded from cohabitation if necessary...

how can I tell the proxmox ha scheduler:

in a perfect world I would like the VM members of pool A to be deployed on divergent hosts.
however, if necessary, they **MAY** cohabitate.
I don't really understand your problem, as you already said you can associate a weight with a physical host to bias the VMs.
We have an HA group for each node, with priority set to the number of nodes, e.g. prio 6 for the node the group is named after. The other hosts (5) get staggered lower priorities for second, maybe third, and last-choice placement. So when you create a VM, just give it a "home host" via its HA group, and your perfect world is reached (which is what we do as well).
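On the CLI, that per-node "home host" scheme can be sketched with `ha-manager`. This is only a sketch: the node names (pve1..pve6) and VMIDs are placeholders, not anything from this thread:

```shell
# One HA group per home host; higher priority = preferred node.
# Node names pve1..pve6 and the VMIDs below are placeholders.
ha-manager groupadd home-pve1 --nodes "pve1:6,pve2:5,pve3:4,pve4:3,pve5:2,pve6:1"
ha-manager groupadd home-pve2 --nodes "pve2:6,pve3:5,pve4:4,pve5:3,pve6:2,pve1:1"
# ...and so on, one group per node, rotating the priorities.

# Give each member of a pool a different home host:
ha-manager add vm:101 --group home-pve1   # e.g. webserver 1
ha-manager add vm:102 --group home-pve2   # e.g. webserver 2
```

Since the groups are not created with `--restricted`, a VM can still run on a lower-priority node when its home host is down, so cohabitation stays possible, just disfavoured.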
Heya @waltar
1) THANK YOU for your perspective and thoughts !!!!
seriously.... I really appreciate it.


2) That idea makes for weird groupings when you have services ...
A (these three VMs are contextually co-related)
B (these two VMs are co-related) ....
C (these four VMs are co-related) ....

A and B are related.
I want A to be as wide as possible, as each VM consumes a lot of resources.
I want B VMs to cohabitate with an A VM when possible

C has nothing to do with A or B..

I do not want more than one A VM on the same node when possible.
I do not want more than one C VM on a node when possible.

I thought (PERHAPS MISTAKENLY) that an HA group was a 'purpose' construct, not a locality construct ...
It felt like the wrong tool to use for conveying physical hardware affinity ....
am I just totally misunderstanding the construct?
 
You are right that the HA grouping tool isn't perfect, but it's normally still a workable starting point.
In a 6-node PVE cluster, up to 2 nodes can fail while quorum stays >50%.
Mapping VMs to HA groups looks easy for the A and B requirements, but impossible for the C VMs.
It would be easy with a 7th node; otherwise you need a script running permanently that, if 2 nodes fail, re-biases the HA group of one of the type-C VMs.
But on the other hand, your desired restrictions are a little bit funny, as the cluster should run all VMs as HA services, so in the end it shouldn't matter to a user which node they're served from.
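A "script permanently running" could look roughly like the sketch below. Everything here is an assumption for illustration: `home-c-fallback` is a hypothetical HA group for the C pool, vm:301 a hypothetical member, and the online-node count is scraped from `pvecm status` output, which may need adjusting for your PVE version:

```shell
#!/bin/sh
# Hypothetical watchdog: when 2+ nodes are down, re-home one type-C VM
# so the C pool doesn't double up on a surviving node.
TOTAL=6
while true; do
    # Count online members from the quorum information (format assumption).
    ONLINE=$(pvecm status | awk '/^Nodes:/ {print $2}')
    if [ -n "$ONLINE" ] && [ "$ONLINE" -le $((TOTAL - 2)) ]; then
        # Re-bias one C VM into a fallback group (names are placeholders).
        ha-manager set vm:301 --group home-c-fallback
    fi
    sleep 60
done
```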