SOLVED: HA cluster shuts down VMs rather than migrating them when rebooting a node?

jlficken

Sep 6, 2022
I figured it out, in case anyone ever stumbles across this:
Datacenter --> Options --> HA Settings
Change the shutdown policy to "migrate" (shutdown_policy=migrate).
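The GUI writes this option to /etc/pve/datacenter.cfg, so it can also be checked or changed from the shell. The output below is just an illustration from my cluster; your file may contain other lines:

Code:
root@S4PVE1:~# grep ^ha /etc/pve/datacenter.cfg
ha: shutdown_policy=migrate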

I'm at a loss, as I have another cluster set up the same way that works correctly, although both of its nodes are Intel systems. Manual live migration of VMs between nodes also works fine on the cluster that isn't behaving as expected.

I'm using x86-64-v3 as the CPU type for all VMs, since one node is AMD and the other is Intel.
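For reference, the CPU type can be set from the CLI as well with qm (VM 100 is just an example VMID from this cluster):

Code:
root@S4PVE1:~# qm set 100 --cpu x86-64-v3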

All VMs have "Start at boot" set to Yes, and all of them were added under Datacenter --> HA with a requested state of "started" when I configured them.
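The same HA resource setup can be done with ha-manager instead of the GUI; a minimal sketch for one VM (vm:100 as an example resource ID) would be:

Code:
root@S4PVE1:~# ha-manager add vm:100 --state started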

ETA: Quorum remains "OK" while the other node is rebooting.
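If it helps with debugging, one way to watch what the HA stack decides while the other node reboots is to follow the CRM/LRM journals on the surviving node (standard PVE service names):

Code:
root@S4PVE1:~# journalctl -f -u pve-ha-crm -u pve-ha-lrm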


Here's the HA status (quorum OK, master S4PVE1, both LRMs active at the current time):

Code:
root@S4PVE1:~# ha-manager status
quorum OK
master S4PVE1 (active, Wed Apr 10 13:53:24 2024)
lrm S4PVE1 (active, Wed Apr 10 13:53:17 2024)
lrm S4PVE2 (active, Wed Apr 10 13:53:15 2024)
service ct:117 (S4PVE1, started)
service vm:100 (S4PVE1, started)
service vm:101 (S4PVE2, started)
service vm:102 (S4PVE1, started)
service vm:103 (S4PVE2, started)
service vm:106 (S4PVE1, started)
service vm:110 (S4PVE1, started)
service vm:111 (S4PVE1, started)
service vm:112 (S4PVE1, started)
service vm:113 (S4PVE1, started)
service vm:114 (S4PVE1, started)
service vm:118 (S4PVE1, started)

And here's pvecm status:
Code:
root@S4PVE1:~# pvecm status
Cluster information
-------------------
Name:             S4PVE
Config Version:   2
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Wed Apr 10 13:54:11 2024
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          1.99
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           1
Flags:            2Node Quorate WaitForAll

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.0.253 (local)
0x00000002          1 192.168.0.252

Here's the corosync.conf:
Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: S4PVE1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.0.253
  }
  node {
    name: S4PVE2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.0.252
  }
}

quorum {
  provider: corosync_votequorum
  two_node: 1
}

totem {
  cluster_name: S4PVE
  config_version: 2
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
 
