[SOLVED] Deleted nodes still visible in HA

BenDDD

Member
Nov 28, 2019
Hello,

I deleted nodes from my Proxmox cluster following this doc. They are no longer members of the cluster, but they are still visible in the HA menu of the GUI:

cat /etc/pve/.members
{
"nodename": "galaxie5",
"version": 60,
"cluster": { "name": "galaxie", "version": 89, "nodes": 10, "quorate": 1 },
"nodelist": {
"galaxie5": { "id": 5, "online": 1, "ip": "147.215.130.105"},
"galaxie6": { "id": 6, "online": 1, "ip": "147.215.130.106"},
"galaxie7": { "id": 7, "online": 1, "ip": "147.215.130.107"},
"galaxie9": { "id": 9, "online": 1, "ip": "147.215.130.109"},
"galaxie15": { "id": 15, "online": 1, "ip": "147.215.130.115"},
"galaxie21": { "id": 21, "online": 1, "ip": "147.215.130.121"},
"galaxie22": { "id": 22, "online": 1, "ip": "147.215.130.122"},
"galaxie23": { "id": 23, "online": 1, "ip": "147.215.130.123"},
"galaxie27": { "id": 24, "online": 1, "ip": "147.215.130.127"},
"galaxie25": { "id": 25, "online": 1, "ip": "147.215.130.125"}
}
}
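
(For reference, each node was removed from one of the remaining nodes roughly like this, per that doc; the node name is just an example:)
Code:
pvecm delnode galaxie12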

[Screenshot: HA status view in the GUI still listing the removed nodes]

Did I forget to do something?

Thank you in advance for your help.
 
Hi,
are there still directories for the deleted nodes in /etc/pve/nodes? IIRC we don't remove those directories automatically.
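You can check with something like:
Code:
ls /etc/pve/nodes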
 
Hello,

Yes, there are still directories corresponding to deleted nodes:

ls -al /etc/pve/nodes
total 0
drwxr-xr-x 2 root www-data 0 Aug 3 2017 .
drwxr-xr-x 2 root www-data 0 Jan 1 1970 ..
drwxr-xr-x 2 root www-data 0 Aug 3 2017 dhcp-20-216
drwxr-xr-x 2 root www-data 0 Aug 3 2017 galaxie1
drwxr-xr-x 2 root www-data 0 Nov 3 2017 galaxie10
drwxr-xr-x 2 root www-data 0 Jun 8 2018 galaxie11
drwxr-xr-x 2 root www-data 0 Dec 18 2018 galaxie12
drwxr-xr-x 2 root www-data 0 Apr 12 14:33 galaxie12.12042021
drwxr-xr-x 2 root www-data 0 Nov 5 2019 galaxie13
drwxr-xr-x 2 root www-data 0 Aug 13 2019 galaxie15
drwxr-xr-x 2 root www-data 0 Sep 25 2019 galaxie16
drwxr-xr-x 2 root www-data 0 Sep 25 2019 galaxie17
drwxr-xr-x 2 root www-data 0 Jan 24 2020 galaxie18
drwxr-xr-x 2 root www-data 0 Mar 6 2020 galaxie19
drwxr-xr-x 2 root www-data 0 Dec 4 2017 galaxie1-ifi
drwxr-xr-x 2 root www-data 0 Aug 3 2017 galaxie2
drwxr-xr-x 2 root www-data 0 Mar 6 2020 galaxie20
drwxr-xr-x 2 root www-data 0 May 18 2020 galaxie21
drwxr-xr-x 2 root www-data 0 May 22 2020 galaxie22
drwxr-xr-x 2 root www-data 0 May 22 2020 galaxie23
drwxr-xr-x 2 root www-data 0 May 25 2020 galaxie25
drwxr-xr-x 2 root www-data 0 Sep 16 2020 galaxie27
drwxr-xr-x 2 root www-data 0 Dec 20 2017 galaxie2-ifi
drwxr-xr-x 2 root www-data 0 Aug 17 2017 galaxie3
drwxr-xr-x 2 root www-data 0 Aug 14 2017 galaxie4
drwxr-xr-x 2 root www-data 0 Aug 22 2017 galaxie5
drwxr-xr-x 2 root www-data 0 Sep 6 2017 galaxie6
drwxr-xr-x 2 root www-data 0 Oct 31 2017 galaxie7
drwxr-xr-x 2 root www-data 0 Nov 2 2017 galaxie9
drwxr-xr-x 2 root www-data 0 Jul 6 2020 save

Do I have to delete them manually?
 
Yes. Maybe first move them somewhere else and check the contents one last time before deleting them for good.
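For example, something along these lines (node name and backup path are just placeholders):
Code:
# park the stale node directory outside /etc/pve first
mv /etc/pve/nodes/galaxie12 /root/galaxie12.bak
# review what's in there (guest configs would be under qemu-server/ and lxc/)
ls -alR /root/galaxie12.bak
# remove it once you're sure nothing is still needed
rm -r /root/galaxie12.bak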
 
I deleted the galaxie12 node folder but it is still present in the HA menu of the GUI.

ls -al
total 0
drwxr-xr-x 2 root www-data 0 Aug 3 2017 .
drwxr-xr-x 2 root www-data 0 Jan 1 1970 ..
drwxr-xr-x 2 root www-data 0 Aug 3 2017 dhcp-20-216
drwxr-xr-x 2 root www-data 0 Aug 3 2017 galaxie1
drwxr-xr-x 2 root www-data 0 Nov 3 2017 galaxie10
drwxr-xr-x 2 root www-data 0 Jun 8 2018 galaxie11
drwxr-xr-x 2 root www-data 0 Apr 12 14:33 galaxie12.12042021
drwxr-xr-x 2 root www-data 0 Nov 5 2019 galaxie13
drwxr-xr-x 2 root www-data 0 Aug 13 2019 galaxie15
drwxr-xr-x 2 root www-data 0 Sep 25 2019 galaxie16
drwxr-xr-x 2 root www-data 0 Sep 25 2019 galaxie17
drwxr-xr-x 2 root www-data 0 Jan 24 2020 galaxie18
drwxr-xr-x 2 root www-data 0 Mar 6 2020 galaxie19
drwxr-xr-x 2 root www-data 0 Dec 4 2017 galaxie1-ifi
drwxr-xr-x 2 root www-data 0 Aug 3 2017 galaxie2
drwxr-xr-x 2 root www-data 0 Mar 6 2020 galaxie20
drwxr-xr-x 2 root www-data 0 May 18 2020 galaxie21
drwxr-xr-x 2 root www-data 0 May 22 2020 galaxie22
drwxr-xr-x 2 root www-data 0 May 22 2020 galaxie23
drwxr-xr-x 2 root www-data 0 May 25 2020 galaxie25
drwxr-xr-x 2 root www-data 0 Sep 16 2020 galaxie27
drwxr-xr-x 2 root www-data 0 Dec 20 2017 galaxie2-ifi
drwxr-xr-x 2 root www-data 0 Aug 17 2017 galaxie3
drwxr-xr-x 2 root www-data 0 Aug 14 2017 galaxie4
drwxr-xr-x 2 root www-data 0 Aug 22 2017 galaxie5
drwxr-xr-x 2 root www-data 0 Sep 6 2017 galaxie6
drwxr-xr-x 2 root www-data 0 Oct 31 2017 galaxie7
drwxr-xr-x 2 root www-data 0 Nov 2 2017 galaxie9
drwxr-xr-x 2 root www-data 0 Jul 6 2020 save

[Screenshot: HA status view still showing the deleted galaxie12 node]
 
Is galaxie17 still part of the cluster? If it is, please try restarting the HA services on that node, i.e. systemctl restart pve-ha-lrm.service pve-ha-crm.service.

Are the nodes running the same package versions? Could you share the output of pveversion -v?
 
Hello,

galaxie17 is no longer part of the cluster. It is one of the nodes that I removed. Do you want me to try the commands on galaxie6, for example?

galaxie5

proxmox-ve: 6.2-1 (running kernel: 5.4.60-1-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-6
pve-kernel-helper: 6.2-6
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.10.17-2-pve: 4.10.17-20
pve-kernel-4.10.15-1-pve: 4.10.15-15
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-12
pve-cluster: 6.1-8
pve-container: 3.2-1
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-1
pve-qemu-kvm: 5.1.0-1
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-14
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1



galaxie6

proxmox-ve: 6.2-2 (running kernel: 5.4.65-1-pve)
pve-manager: 6.2-15 (running version: 6.2-15/48bd51b6)
pve-kernel-5.4: 6.2-7
pve-kernel-helper: 6.2-7
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.10.17-2-pve: 4.10.17-20
pve-kernel-4.10.15-1-pve: 4.10.15-15
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-4
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-10
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.1-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.3-10
pve-cluster: 6.2-1
pve-container: 3.2-2
pve-docs: 6.2-6
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-6
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-19
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve2



galaxie7

proxmox-ve: 6.2-1 (running kernel: 5.4.60-1-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-6
pve-kernel-helper: 6.2-6
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.4-1-pve: 4.13.4-26
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-12
pve-cluster: 6.1-8
pve-container: 3.2-1
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-1
pve-qemu-kvm: 5.1.0-1
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-14
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1



galaxie9

proxmox-ve: 6.2-2 (running kernel: 5.4.65-1-pve)
pve-manager: 6.2-15 (running version: 6.2-15/48bd51b6)
pve-kernel-5.4: 6.2-7
pve-kernel-helper: 6.2-7
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.4-1-pve: 4.13.4-26
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-4
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-10
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.1-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.3-10
pve-cluster: 6.2-1
pve-container: 3.2-2
pve-docs: 6.2-6
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-6
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-19
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve2



galaxie15

proxmox-ve: 6.2-1 (running kernel: 5.4.60-1-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-6
pve-kernel-helper: 6.2-6
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-4.15: 5.4-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.17-1-pve: 4.15.17-9
ceph-fuse: 12.2.13-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-12
pve-cluster: 6.1-8
pve-container: 3.2-1
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-1
pve-qemu-kvm: 5.1.0-1
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-14
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1



galaxie21

proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.6-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1



galaxie22

proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.6-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1



galaxie23

proxmox-ve: 6.2-2 (running kernel: 5.4.65-1-pve)
pve-manager: 6.2-15 (running version: 6.2-15/48bd51b6)
pve-kernel-5.4: 6.2-7
pve-kernel-helper: 6.2-7
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-4
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-10
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-backup-client: 1.0.1-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.3-10
pve-cluster: 6.2-1
pve-container: 3.2-2
pve-docs: 6.2-6
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-6
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-19
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve2



galaxie25

proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-backup-client: 1.0.6-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1



galaxie27

proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.6-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
 
Since the current manager/master node does not exist anymore, please try using rmdir /etc/pve/priv/lock/ha_manager_lock to release the lock. Then one of the existing nodes should automatically step in. If not, then yes, try additionally restarting the services on an existing node too.
 
root@galaxie5:~# rmdir /etc/pve/priv/lock/ha_manager_lock

root@galaxie5:~# ha-manager status
unable to read file '/etc/pve/nodes/galaxie12/lrm_status'
unable to read file '/etc/pve/nodes/galaxie14/lrm_status'
unable to read file '/etc/pve/nodes/galaxie8/lrm_status'
quorum OK
master galaxie17 (idle, Thu Oct 3 10:41:39 2019)
lrm galaxie1 (old timestamp - dead?, Mon Nov 23 15:22:36 2020)
lrm galaxie1-ifi (old timestamp - dead?, Mon Jul 27 09:37:51 2020)
lrm galaxie10 (old timestamp - dead?, Mon Nov 23 15:18:51 2020)
lrm galaxie11 (old timestamp - dead?, Mon Nov 23 15:18:52 2020)
lrm galaxie12 (unable to read lrm status)
lrm galaxie14 (unable to read lrm status)
lrm galaxie15 (idle, Thu Apr 15 15:38:09 2021)
lrm galaxie16 (old timestamp - dead?, Mon Nov 23 15:22:36 2020)
lrm galaxie17 (old timestamp - dead?, Mon Nov 23 15:22:36 2020)
lrm galaxie2 (old timestamp - dead?, Mon Nov 23 15:18:48 2020)
lrm galaxie2-ifi (old timestamp - dead?, Mon Jul 27 09:37:10 2020)
lrm galaxie3 (old timestamp - dead?, Mon Nov 23 15:18:49 2020)
lrm galaxie4 (old timestamp - dead?, Mon Nov 23 15:18:49 2020)
lrm galaxie5 (idle, Thu Apr 15 15:38:11 2021)
lrm galaxie6 (idle, Thu Apr 15 15:38:10 2021)
lrm galaxie7 (idle, Thu Apr 15 15:38:10 2021)
lrm galaxie8 (unable to read lrm status)
lrm galaxie9 (idle, Thu Apr 15 15:38:10 2021)

root@galaxie5:~# systemctl restart pve-ha-lrm.service pve-ha-crm.service


root@galaxie5:~# ha-manager status
unable to read file '/etc/pve/nodes/galaxie12/lrm_status'
unable to read file '/etc/pve/nodes/galaxie14/lrm_status'
unable to read file '/etc/pve/nodes/galaxie8/lrm_status'
quorum OK
master galaxie17 (idle, Thu Oct 3 10:41:39 2019)
lrm galaxie1 (old timestamp - dead?, Mon Nov 23 15:22:36 2020)
lrm galaxie1-ifi (old timestamp - dead?, Mon Jul 27 09:37:51 2020)
lrm galaxie10 (old timestamp - dead?, Mon Nov 23 15:18:51 2020)
lrm galaxie11 (old timestamp - dead?, Mon Nov 23 15:18:52 2020)
lrm galaxie12 (unable to read lrm status)
lrm galaxie14 (unable to read lrm status)
lrm galaxie15 (idle, Thu Apr 15 15:38:09 2021)
lrm galaxie16 (old timestamp - dead?, Mon Nov 23 15:22:36 2020)
lrm galaxie17 (old timestamp - dead?, Mon Nov 23 15:22:36 2020)
lrm galaxie2 (old timestamp - dead?, Mon Nov 23 15:18:48 2020)
lrm galaxie2-ifi (old timestamp - dead?, Mon Jul 27 09:37:10 2020)
lrm galaxie3 (old timestamp - dead?, Mon Nov 23 15:18:49 2020)
lrm galaxie4 (old timestamp - dead?, Mon Nov 23 15:18:49 2020)
lrm galaxie5 (idle, Thu Apr 15 15:38:11 2021)
lrm galaxie6 (idle, Thu Apr 15 15:38:10 2021)
lrm galaxie7 (idle, Thu Apr 15 15:38:10 2021)
lrm galaxie8 (unable to read lrm status)
lrm galaxie9 (idle, Thu Apr 15 15:38:10 2021)
 
What is the output of pvecm status?

What is the output of systemctl status pve-ha-crm.service on different nodes (no need to share it multiple times if it's basically the same)?

One of them should indicate something like
Code:
Apr 16 09:07:30 rob2 systemd[1]: Starting PVE Cluster HA Resource Manager Daemon...
Apr 16 09:07:31 rob2 pve-ha-crm[3237]: starting server
Apr 16 09:07:31 rob2 pve-ha-crm[3237]: status change startup => wait_for_quorum
Apr 16 09:07:31 rob2 systemd[1]: Started PVE Cluster HA Resource Manager Daemon.
Apr 16 09:07:41 rob2 pve-ha-crm[3237]: status change wait_for_quorum => slave
Apr 16 09:09:23 rob2 pve-ha-crm[3237]: successfully acquired lock 'ha_manager_lock'
Apr 16 09:09:23 rob2 pve-ha-crm[3237]: watchdog active
Apr 16 09:09:23 rob2 pve-ha-crm[3237]: status change slave => master
i.e. that it became the master.
 
What is the output of pvecm status?
pvecm status
Cluster information
-------------------
Name: galaxie
Config Version: 89
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Fri Apr 16 11:24:25 2021
Quorum provider: corosync_votequorum
Nodes: 10
Node ID: 0x00000005
Ring ID: 5.19b73
Quorate: Yes

Votequorum information
----------------------
Expected votes: 10
Highest expected: 10
Total votes: 10
Quorum: 6
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000005 1 147.215.130.105 (local)
0x00000006 1 147.215.130.106
0x00000007 1 147.215.130.107
0x00000009 1 147.215.130.109
0x0000000f 1 147.215.130.115
0x00000015 1 147.215.130.121
0x00000016 1 147.215.130.122
0x00000017 1 147.215.130.123
0x00000018 1 147.215.130.127
0x00000019 1 147.215.130.125

What is the output of systemctl status pve-ha-crm.service on different nodes (no need to share it multiple times if it's basically the same)?
Same output on each node:
root@galaxie5:~# systemctl status pve-ha-crm.service
● pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon
Loaded: loaded (/lib/systemd/system/pve-ha-crm.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-04-15 15:34:29 CEST; 19h ago
Process: 21863 ExecStart=/usr/sbin/pve-ha-crm start (code=exited, status=0/SUCCESS)
Main PID: 21866 (pve-ha-crm)
Tasks: 1 (limit: 4915)
Memory: 89.1M
CGroup: /system.slice/pve-ha-crm.service
└─21866 pve-ha-crm

Apr 15 15:34:28 galaxie5 systemd[1]: Starting PVE Cluster HA Resource Manager Daemon...
Apr 15 15:34:29 galaxie5 pve-ha-crm[21866]: starting server
Apr 15 15:34:29 galaxie5 pve-ha-crm[21866]: status change startup => wait_for_quorum
Apr 15 15:34:29 galaxie5 systemd[1]: Started PVE Cluster HA Resource Manager Daemon.
 
Seems like the quorum cannot be detected for some reason, but I feel like we're getting closer. What's the output of the following?
Code:
systemctl status pve-cluster.service
perl -e "use PVE::Cluster; PVE::Cluster::check_cfs_quorum();"
stat /etc/pve/local
 
root@galaxie5:~# systemctl status pve-cluster.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-11-27 15:16:04 CET; 4 months 18 days ago
Main PID: 2491 (pmxcfs)
Tasks: 13 (limit: 4915)
Memory: 89.0M
CGroup: /system.slice/pve-cluster.service
└─2491 /usr/bin/pmxcfs

Apr 16 13:38:31 galaxie5 pmxcfs[2491]: [status] notice: received log
Apr 16 13:39:50 galaxie5 pmxcfs[2491]: [status] notice: received log
Apr 16 13:45:10 galaxie5 pmxcfs[2491]: [status] notice: received log
Apr 16 13:51:15 galaxie5 pmxcfs[2491]: [status] notice: received log
Apr 16 13:53:07 galaxie5 pmxcfs[2491]: [status] notice: received log
Apr 16 13:53:32 galaxie5 pmxcfs[2491]: [status] notice: received log
Apr 16 13:54:51 galaxie5 pmxcfs[2491]: [status] notice: received log
Apr 16 14:00:10 galaxie5 pmxcfs[2491]: [status] notice: received log
Apr 16 14:06:16 galaxie5 pmxcfs[2491]: [status] notice: received log
Apr 16 14:06:16 galaxie5 pmxcfs[2491]: [status] notice: received log
root@galaxie5:~# perl -e "use PVE::Cluster; PVE::Cluster::check_cfs_quorum();"
root@galaxie5:~# stat /etc/pve/local
File: /etc/pve/local -> nodes/galaxie5
Size: 0 Blocks: 0 IO Block: 4096 symbolic link
Device: 35h/53d Inode: 3 Links: 1
Access: (0755/lrwxr-xr-x) Uid: ( 0/ root) Gid: ( 33/www-data)
Access: 1970-01-01 01:00:00.000000000 +0100
Modify: 1970-01-01 01:00:00.000000000 +0100
Change: 1970-01-01 01:00:00.000000000 +0100
Birth: -
 
Do you have any HA resources configured? If not, the CRM services won't do anything (not even update the status; we might change that in the future, though).
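For example, a test resource can be added from the CLI (the VM ID below is just an example, pick one of your own guests):
Code:
ha-manager add vm:220
ha-manager status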
 
I actually have no resources currently configured in HA.

So you think that if I configure one, the CRM services will activate and update everything?
 
Yes, they should.
 
Hi @Fabian_E,

I added a VM to the HA resources and the HA master has changed correctly, but I still see the servers that I deleted:
root@galaxie22:~# ha-manager status
unable to read file '/etc/pve/nodes/galaxie12/lrm_status'
unable to read file '/etc/pve/nodes/galaxie14/lrm_status'
unable to read file '/etc/pve/nodes/galaxie8/lrm_status'
quorum OK
master galaxie21 (active, Mon Apr 19 10:01:27 2021)
lrm galaxie1 (old timestamp - dead?, Mon Nov 23 15:22:36 2020)
lrm galaxie1-ifi (old timestamp - dead?, Mon Jul 27 09:37:51 2020)
lrm galaxie10 (old timestamp - dead?, Mon Nov 23 15:18:51 2020)
lrm galaxie11 (old timestamp - dead?, Mon Nov 23 15:18:52 2020)
lrm galaxie12 (unable to read lrm status)
lrm galaxie14 (unable to read lrm status)
lrm galaxie15 (idle, Mon Apr 19 10:01:27 2021)
lrm galaxie16 (old timestamp - dead?, Mon Nov 23 15:22:36 2020)
lrm galaxie17 (old timestamp - dead?, Mon Nov 23 15:22:36 2020)
lrm galaxie2 (old timestamp - dead?, Mon Nov 23 15:18:48 2020)
lrm galaxie2-ifi (old timestamp - dead?, Mon Jul 27 09:37:10 2020)
lrm galaxie21 (idle, Mon Apr 19 10:01:24 2021)
lrm galaxie22 (idle, Mon Apr 19 10:01:24 2021)
lrm galaxie23 (idle, Mon Apr 19 10:01:24 2021)
lrm galaxie25 (idle, Mon Apr 19 10:01:24 2021)
lrm galaxie27 (active, Mon Apr 19 10:01:19 2021)
lrm galaxie3 (old timestamp - dead?, Mon Nov 23 15:18:49 2020)
lrm galaxie4 (old timestamp - dead?, Mon Nov 23 15:18:49 2020)
lrm galaxie5 (idle, Mon Apr 19 10:01:26 2021)
lrm galaxie6 (idle, Mon Apr 19 10:01:24 2021)
lrm galaxie7 (idle, Mon Apr 19 10:01:26 2021)
lrm galaxie8 (unable to read lrm status)
lrm galaxie9 (idle, Mon Apr 19 10:01:24 2021)
service vm:220 (galaxie27, started)
 
There is a one-hour delay (counted from the time the CRM service was started) before nodes that are no longer part of the cluster are removed. That is to avoid false positives for nodes that are, or appear to be, offline for other reasons.
 
Perfect. The nodes have been deleted:

root@galaxie5:~# ha-manager status
quorum OK
master galaxie21 (active, Mon Apr 19 11:40:48 2021)
lrm galaxie15 (idle, Mon Apr 19 11:40:46 2021)
lrm galaxie21 (idle, Mon Apr 19 11:40:46 2021)
lrm galaxie22 (idle, Mon Apr 19 11:40:46 2021)
lrm galaxie23 (idle, Mon Apr 19 11:40:46 2021)
lrm galaxie25 (idle, Mon Apr 19 11:40:46 2021)
lrm galaxie27 (active, Mon Apr 19 11:40:39 2021)
lrm galaxie5 (idle, Mon Apr 19 11:40:44 2021)
lrm galaxie6 (idle, Mon Apr 19 11:40:46 2021)
lrm galaxie7 (idle, Mon Apr 19 11:40:44 2021)
lrm galaxie9 (idle, Mon Apr 19 11:40:46 2021)
service vm:220 (galaxie27, started)

Thanks a lot for your help, @Fabian_E!
 
