Classify Cluster Node Roles?

jamin.collins
May 16, 2023
Is there some way to restrict which nodes within a cluster can host containers or VMs? I know that I can simply avoid placing containers or VMs on certain nodes, but I would prefer some way to restrict which nodes are shown in the dialogs for hosting containers or VMs, such as the migration target list.
 
Setting HA on each VM seems quite tedious. Is there either a scriptable or mass method of assignment?

NVM: found `ha-manager`
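
In case anyone else lands here, this is roughly the shape of it (a sketch only; the group name and node names are made up, and the options are worth double-checking against your PVE version, since HA group handling has changed over releases):

```
# Create an HA group limited to the virt nodes; --restricted keeps
# HA resources from ever running outside the listed nodes.
ha-manager groupadd virt-only --nodes virt1,virt2,virt3 --restricted 1

# Bulk-add every QEMU VM in the cluster as an HA resource in that
# group (containers would use ct:<vmid> instead of vm:<vmid>).
for vmid in $(pvesh get /cluster/resources --type vm --output-format json \
              | jq -r '.[] | select(.type == "qemu") | .vmid'); do
    ha-manager add "vm:${vmid}" --group virt-only --state started
done
```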
 
So, HA seems to move the VMs to the desired hosts, but it does not appear to alter (or restrict) the migration target drop-down in the UI.
 
That's my observation also. HA will let the VM migrate, immediately realize it shouldn't be on that host, and move it off again to one of the hosts in the group. Unless you use the "extra disk" trick, in which case it will fail to migrate the VM and leave it where it is.
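
(Assuming the "extra disk" trick here means pinning the VM with a disk on node-local storage; that is my reading of it, so treat the details as a guess.)

```
# Hypothetical example: attach a small 1 GiB disk on node-local
# storage (local-lvm). Migration of VM 100 should now fail, because
# the volume cannot follow the VM to another node.
qm set 100 --scsi1 local-lvm:1
```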
 
So, HA seems to move the VMs to the desired hosts, but it does not appear to alter (or restrict) the migration target drop-down in the UI.
That's right, but the HA mechanism would immediately re-migrate the VM to a member of the desired HA host group if it were manually migrated to the "wrong" node, and it would not even move back to the "first" host when you, e.g., enabled maintenance mode on it (as the reason for a manual migration).
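
For the failback side of that, the group options should cover it; a sketch with made-up group and node names:

```
# virt1 is preferred (priority 2) over virt2 (priority 1), but with
# --nofailback the resource stays where it is instead of migrating
# back once the higher-priority node is available again.
ha-manager groupadd virt-prefer --nodes virt1:2,virt2:1 --restricted 1 --nofailback 1
```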
 
You could assign the storage for the VMs only to the nodes where they should run. No storage, no images, no VMs.
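
Something like this, for example (the storage and node names are placeholders):

```
# Restrict an existing storage to the nodes that should run VMs;
# it stops being offered anywhere else.
pvesm set vm-storage --nodes virt1,virt2,virt3
```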
My use case is (currently) to have two different classifications of Proxmox hosts:

  1. CEPH OSD nodes
  2. Virt nodes

So, storage for the VMs is by definition on the CEPH OSD nodes. However, I don't want to have any VMs running on the CEPH OSD nodes.
 
See the thread posted above, such as this post.

Storage in that case is defined at the Datacenter level, not inside Ceph. Just don't add it to the Ceph nodes. One can have multiple RBD storages per Ceph pool.
[screenshot: RBD storage defined at the Datacenter level with a node restriction]
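
For example, sketched with made-up storage, pool, and node names (two RBD storages on the same pool, neither visible on the Ceph OSD nodes):

```
# Define the RBD storages at the Datacenter level, limited to the
# virt nodes; the Ceph OSD nodes never see them.
pvesm add rbd ceph-vm --pool vm-pool --content images  --nodes virt1,virt2,virt3
pvesm add rbd ceph-ct --pool vm-pool --content rootdir --nodes virt1,virt2,virt3
```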
 
Interesting, that brings things one step closer. The UI now gives an error when I try to migrate a VM to one of the CEPH nodes.

It would still be better to not even list the invalid or undesired targets.
 
The challenge is, I think, that the group does not mean "the VM can be here," it means "the VM can run here." If you shut down a VM, you can move it outside the HA group. If you then start it, it will migrate off to a node that is in the group. So, if the VM is off, all the nodes are valid choices.