Remove old node from HA Status page

proxwolfe

Hi,

I have a small cluster in my home lab. Over time it evolved, and I have replaced all of the original nodes. While the old nodes are no longer shown in the tree on the left-hand side of the GUI under Datacenter, they are still listed on the HA Status page (as type "lrm").

How can I remove them from there as well?

Thanks
 
Hi,
please post the output of pveversion -v and ha-manager status -v.
 
pveversion -v:

proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-5.15: 7.2-14
pve-kernel-helper: 7.2-14
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph: 17.2.5-pve1
ceph-fuse: 17.2.5-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-1
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.7-1
proxmox-backup-file-restore: 2.2.7-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.5-6
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-1
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1

ha-manager status -v:

unable to read file '/etc/pve/nodes/<old-removed-node>/lrm_status'
unable to read file '/etc/pve/nodes/<old-removed-node>/lrm_status'
quorum OK
master <existing-node> (idle, Tue Sep 13 07:51:09 2022)
lrm <old-removed-node> (old timestamp - dead?, Sun Sep 18 13:01:04 2022)
lrm <old-removed-node> (unable to read lrm status)
lrm <old-removed-node> (unable to read lrm status)
full cluster state:
unable to read file '/etc/pve/nodes/<old-removed-node>/lrm_status'
unable to read file '/etc/pve/nodes/<old-removed-node>/lrm_status'
{
   "lrm_status" : {
      "<old-removed-node>" : {
         "mode" : "shutdown",
         "results" : {},
         "state" : "wait_for_agent_lock",
         "timestamp" : 1663498864
      },
      "<old-removed-node>" : {
         "mode" : "unknown"
      },
      "<old-removed-node>" : {
         "mode" : "unknown"
      }
   },
   "manager_status" : {
      "master_node" : "<old-removed-node>",
      "node_status" : {
         "<old-removed-node>" : "online",
         "<old-removed-node>" : "online",
         "<old-removed-node>" : "online"
      },
      "service_status" : {},
      "timestamp" : 1663048269
   },
   "quorum" : {
      "node" : "<new-existing-node>",
      "quorate" : "1"
   }
}
 
I think the HA manager won't do anything as long as no services are configured. Just temporarily configure a service and the gone nodes should be removed after a while (IIRC it can take up to an hour for the manager to consider the gone nodes really dead). See the sketch below.
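
In case it helps, a minimal sketch of that workaround using the ha-manager CLI; vm:100 is just a placeholder for a guest that actually exists in your cluster:

ha-manager add vm:100       # temporarily put an existing guest under HA management
                            # ...wait for the HA stack to cycle (can take an hour or more)...
ha-manager remove vm:100    # then take the guest out of HA again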
 
And... they are gone!

You were right: after configuring a service for HA, the new existing nodes first showed up in addition to the old ones, and then the old, deleted nodes disappeared after a while (an hour at least).
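
For anyone who prefers checking from the shell, re-running the status command from above should reflect the same cleanup (expected shape only, with placeholder node names):

ha-manager status
# quorum OK
# master <existing-node> (active, <timestamp>)
# lrm <new-existing-node> (idle, <timestamp>)
# the stale "lrm <old-removed-node> ..." entries should no longer be listed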

Thanks!
 
