I am doing some testing on my three-node cluster. I put node 1 into maintenance mode, the VMs migrated to the other nodes, and I then shut down node 1. When I powered it back on, the VMs migrated back to node 1 even though it was still in maintenance mode. After several minutes the VMs migrated over to the other nodes again. I guess it finally realized node 1 is in maintenance mode and shouldn't host those VMs yet?
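For reference, this is roughly how I toggled maintenance mode (I used the CLI here; the UI button should do the same thing):

root@prox03:~# ha-manager crm-command node-maintenance enable prox01
root@prox03:~# ha-manager crm-command node-maintenance disable prox01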
I'm guessing that as soon as the HA manager saw node 1 back online it migrated those VMs back, and then the maintenance process kicked in and moved them off again?
Is this the correct behavior, or should it recognize that maintenance mode is still on before trying to migrate those VMs back to node 1?
I tested it again and so far it has only done this the one time.
All nodes are on version 8.4.14.
Node 1 shows in the UI and on the CLI that it is still in maintenance mode.
root@prox03:~# ha-manager status
quorum OK
master prox03 (active, Thu Nov 6 09:18:12 2025)
lrm prox01 (maintenance mode, Thu Nov 6 09:18:08 2025)
lrm prox02 (active, Thu Nov 6 09:18:11 2025)
lrm prox03 (active, Thu Nov 6 09:18:03 2025)