A) The first problem is the triggering moment.
A1) Monitoring via cron for the moment a node changes status from online to fenced.
Doable with GET /cluster/ha/status/manager_status, either from an external host or from a script running inside the nodes (the script should live on all nodes, but only act when the local node holds the master status in the cluster, to avoid duplicate runs). A minimal polling sketch follows below.
This approach has the advantage of moving all the failover IPs right at the start. Moving a failover IP has a latency of several minutes, so with 50 VMs the first 10 won't have their IP ready yet, but for the last 40 the public failover IPs should already have been routed to the new node.
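Here is a minimal cron-driven sketch of that check in Python. It assumes the pvesh CLI is available on the node and that the manager_status JSON exposes master_node and node_status fields (as in /etc/pve/ha/manager_status); the state file path and the move_failover_ips() helper are hypothetical placeholders.

```python
#!/usr/bin/env python3
# Cron-driven sketch: detect nodes that just changed from "online" to "fence".
# Assumptions: the manager_status JSON has "master_node" and "node_status"
# fields; STATE_FILE and move_failover_ips() are hypothetical.
import json
import socket
import subprocess

STATE_FILE = "/var/tmp/ha_node_states.json"  # hypothetical cache of the last seen states


def manager_status():
    # pvesh talks to the local Proxmox API, so no API token handling is needed on-node
    out = subprocess.check_output(
        ["pvesh", "get", "/cluster/ha/status/manager_status", "--output-format", "json"]
    )
    return json.loads(out)


def main():
    status = manager_status()
    # only the current HA master acts, so the script can live on every node
    # without producing duplicate runs
    if status.get("master_node") != socket.gethostname():
        return

    try:
        with open(STATE_FILE) as fh:
            previous = json.load(fh)
    except (FileNotFoundError, json.JSONDecodeError):
        previous = {}

    current = status.get("node_status", {})
    for node, state in current.items():
        if previous.get(node) == "online" and state == "fence":
            print(f"node {node} was fenced, moving its failover IPs")
            # move_failover_ips(node)  # hypothetical helper, see section B

    with open(STATE_FILE, "w") as fh:
        json.dump(current, fh)


if __name__ == "__main__":
    main()
```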
A2) How can I trigger a script on the event of a node being fenced (without needing cron)? Or on the event of a VM being migrated by the HA manager to a new node? See the hookscript sketch below.
In the above scenario, if the node is only fenced for a short time, I should not move all the IPs right away, because some VMs won't be migrated and would remain on the initial node.
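One event-driven option for the per-VM part is a guest hookscript: Proxmox runs it on whichever node the guest starts on, so after an HA recovery the post-start phase fires on the new node. It does not fire on the fencing event itself, so the cron poll from A1 would still be needed for that. A sketch, assuming the script is registered with something like pct set <vmid> --hookscript local:snippets/failover-hook.py, and with move_ip_for_guest() as a hypothetical helper:

```python
#!/usr/bin/env python3
# Guest hookscript sketch: Proxmox calls it with <vmid> <phase> on the node
# where the guest runs, so "post-start" after an HA recovery acts as a
# per-VM trigger on the new node, without cron.
# move_ip_for_guest() is a hypothetical helper (see the sketch in section B).
import socket
import sys


def main():
    vmid, phase = sys.argv[1], sys.argv[2]
    if phase == "post-start":
        node = socket.gethostname()
        print(f"guest {vmid} started on {node}, re-pointing its failover IP here")
        # move_ip_for_guest(vmid, node)  # hypothetical
    # other phases passed by Proxmox: pre-start, pre-stop, post-stop


if __name__ == "__main__":
    main()
```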
B) Next step: moving the IP
- I obtain the container IP with GET /nodes/{nodefenced}/lxc/{vmid}/config. What if the container has meanwhile been recovered onto the next active node? (A sketch addressing this follows after this list.)
- If I could store the OVH API service (dedicated server) ID in a comment on the Proxmox node, I would just run the OVH API client call to move the IP to the new active server (which should be the next online node with the highest priority in the HA group).
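A sketch of that step, assuming the python-ovh client is configured through its usual ovh.conf/environment credentials and that POST /ip/{ipBlock}/move accepts the target service in the "to" parameter. To sidestep the race in the first point, the container's current node is looked up in /cluster/resources instead of asking the fenced node. The NODE_TO_OVH_SERVICE map is a hypothetical stand-in for the "service ID stored in a node comment" idea, and net0 is assumed to carry the failover IP:

```python
#!/usr/bin/env python3
# Sketch for step B: find where the container lives *now*, read its IP from
# the guest config, and move the OVH failover IP to the server hosting it.
import json
import re
import subprocess
import urllib.parse

import ovh  # pip install ovh

# hypothetical mapping: Proxmox node name -> OVH dedicated server service name
NODE_TO_OVH_SERVICE = {
    "pve1": "ns1234567.ip-1-2-3.eu",
    "pve2": "ns7654321.ip-4-5-6.eu",
}


def pvesh_get(path, *extra):
    out = subprocess.check_output(
        ["pvesh", "get", path, *extra, "--output-format", "json"]
    )
    return json.loads(out)


def current_node_of(vmid):
    # /cluster/resources reflects where the guest runs *after* the HA recovery,
    # which avoids reading the config from the fenced node
    for res in pvesh_get("/cluster/resources", "--type", "vm"):
        if res.get("vmid") == vmid:
            return res["node"]
    return None


def failover_ip_of(node, vmid):
    cfg = pvesh_get(f"/nodes/{node}/lxc/{vmid}/config")
    match = re.search(r"ip=([\d.]+)/\d+", cfg.get("net0", ""))  # assumes net0 holds the failover IP
    return match.group(1) if match else None


def move_failover_ip(vmid):
    node = current_node_of(vmid)
    ip = failover_ip_of(node, vmid)
    target = NODE_TO_OVH_SERVICE[node]
    client = ovh.Client()  # credentials from ovh.conf or environment
    ip_block = urllib.parse.quote(f"{ip}/32", safe="")
    client.post(f"/ip/{ip_block}/move", to=target)  # re-routes the failover IP to the new server


if __name__ == "__main__":
    move_failover_ip(101)  # example container ID
```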