How can we get rid of the old node in such a situation?
pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.34-1-pve: 5.4.34-2
pve-kernel-5.0.21-5-pve: 5.0.21-10
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 2.0.1-1+pve8
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
pve-zsync: 2.0-3
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
pvecm status

Cluster information
-------------------
Name: cs
Config Version: 5
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Thu Jun 18 19:48:13 2020
Quorum provider: corosync_votequorum
Nodes: 3
Node ID: 0x00000003
Ring ID: 2.213f
Quorate: Yes
Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000002 1 192.168.0.3
0x00000003 1 192.168.0.4 (local)
0x00000004 1 192.168.0.1
pvecm nodes

Membership information
----------------------
Nodeid Votes Name
2 1 cs-b
3 1 cs-d (local)
4 1 cs-m
Not sure how to tackle this problem - any solutions?
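For reference, the standard way to drop a dead node is pvecm delnode, run from any remaining node while the cluster is quorate. A minimal sketch, assuming the old node is named cs-p (as mentioned later in this thread) and is already powered off for good:

# Run on any surviving node while the cluster is quorate.
# The removed node must never be booted back into the
# cluster with the same identity without a reinstall.
pvecm delnode cs-p

# Verify it no longer appears in the member list.
pvecm nodes

If pvecm delnode reports the node as unknown, the corosync configuration is likely already clean and only leftover files (e.g. under /etc/pve/nodes) remain.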
Open a new thread if you have a new issue, but first try to resolve the old one.

Despite this node not being listed, when I shut off one other node, the quorum broke.
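On the quorum arithmetic: with 3 expected votes, quorum is floor(3/2)+1 = 2, so losing one of three nodes should still leave the cluster quorate. If it does not, corosync may still be counting the old node. As a temporary bridge only (this is a sketch of a workaround, not a fix), the expected vote count can be lowered on a surviving node:

# Check the current vote accounting first.
pvecm status

# Temporarily tell corosync to expect only 2 votes so the
# remaining nodes regain quorum. This weakens split-brain
# protection; use it only to bridge the cleanup.
pvecm expected 2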
Try that: as I said, check whether the old node name cs-p still exists under /etc/pve/nodes. If it does, delete it or move it somewhere else, then refresh your Proxmox GUI.

Will removing the node from /etc/pve/nodes/ resolve this?
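Removing the stale directory is generally safe once the node is out of the corosync configuration, but it is worth keeping a copy first. A sketch, assuming the leftover directory is /etc/pve/nodes/cs-p:

# See what is still lying around for the old node.
ls -la /etc/pve/nodes/

# Keep a backup outside /etc/pve before deleting anything.
cp -r /etc/pve/nodes/cs-p /root/cs-p.bak

# Remove the leftover directory, then refresh the GUI.
rm -r /etc/pve/nodes/cs-p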
but it is still visible in the HA tab with the description "unable to read lrm status". However, the HA tab does not show my newest node, cs-m.
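The "unable to read lrm status" entry usually points at a stale per-node status file: each node's local resource manager (pve-ha-lrm) writes its state to /etc/pve/nodes/<node>/lrm_status, which is typically where that message originates. A quick check, assuming the stale entry is cs-p and the missing one is cs-m:

# A leftover cs-p directory would explain the dead HA entry;
# a missing lrm_status for cs-m would explain its absence.
ls /etc/pve/nodes/
cat /etc/pve/nodes/cs-m/lrm_status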
cat /etc/pve/.members
ha-manager status
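These two commands cross-check the cleanup from different angles: /etc/pve/.members shows which nodes pmxcfs currently knows about, while ha-manager status shows what the HA stack believes. If a node's HA state still looks stale afterwards, restarting the HA services on the affected node is a low-risk next step (a sketch, not guaranteed to resolve every case):

# On the node whose HA state looks wrong (e.g. cs-m):
systemctl restart pve-ha-lrm pve-ha-crm

# Then re-check from any node.
ha-manager status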