Cannot Delete Node From Cluster

mhayhurst

Hello everyone,

I'm following the Remove a Cluster Node instructions, but when trying to delete Node 1 (which is powered off) from Node 2 I receive this:

Bash:
root@proxmox2:~# pvecm delnode 192.168.1.4
400 Parameter verification failed.
node: invalid format - value does not look like a valid node name

pvecm delnode <node>



Bash:
root@proxmox2:~# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         1          1 192.168.1.4
         2          1 192.168.1.5 (local)


As you can see, the node names appear to be the IP addresses. So why are the names IP addresses if that is an invalid format, and how can I delete Node 1 (192.168.1.4)?
 
Hi,

Could you try putting the IP between double quotes? Maybe it does not accept the bare IP format:

Bash:
pvecm delnode "192.168.1.4"
 
Hi,

Unfortunately that did not work:

Bash:
root@proxmox2:~# pvecm delnode "192.168.1.4"
400 Parameter verification failed.
node: invalid format - value does not look like a valid node name

pvecm delnode <node>
 
Please post the output of the following commands:

Bash:
~ cat /etc/hostname
~ cat /etc/hosts
~ pveversion -v
~ pvecm status
 
Bash:
root@proxmox2:~# cat /etc/hostname
proxmox2

root@proxmox2:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.5 proxmox2.jam.lan proxmox2 pvelocalhost

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

root@proxmox2:~# pveversion -v
proxmox-ve: 5.4-2 (running kernel: 4.15.18-30-pve)
pve-manager: 5.4-15 (running version: 5.4-15/d0ec33c6)
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
pve-kernel-4.15.18-28-pve: 4.15.18-56
pve-kernel-4.15.18-27-pve: 4.15.18-55
pve-kernel-4.15.18-26-pve: 4.15.18-54
pve-kernel-4.15.18-25-pve: 4.15.18-53
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-19-pve: 4.15.18-45
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-17-pve: 4.15.18-43
pve-kernel-4.15.18-16-pve: 4.15.18-41
pve-kernel-4.15.18-15-pve: 4.15.18-40
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-13-pve: 4.15.18-37
pve-kernel-4.15.18-12-pve: 4.15.18-36
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.18-2-pve: 4.15.18-21
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.15.17-3-pve: 4.15.17-14
pve-kernel-4.15.17-2-pve: 4.15.17-10
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-56
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-7
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-42
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-56
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2

root@proxmox2:~# pvecm status
Quorum information
------------------
Date:             Fri Dec 11 09:05:03 2020
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000002
Ring ID:          2/49236
Quorate:          No

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           2 Activity blocked
Flags:           

Membership information
----------------------
    Nodeid      Votes Name
0x00000002          1 192.168.1.5 (local)


Node 1 (192.168.1.4) is powered off, per the instructions for removing it from the cluster. Do you need it powered back on?
 
pvecm delnode <Nr>
or
pvecm delnode <nodename>
should work. I think option #1 works if the node is powered off, and option #2 works if the node is still powered on.
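
For this thread's cluster that would be something like the sketch below. Both values are assumptions: the Nodeid 1 comes from the earlier pvecm nodes output, and proxmox1 is only a guess at the first node's hostname, which was never posted.

Bash:
# option #1: delete by the Nodeid shown in 'pvecm nodes' (1 is the powered-off node)
pvecm delnode 1

# option #2: delete by node name (proxmox1 is a placeholder for the real hostname)
pvecm delnode proxmox1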
 
Have you tried pvecm delnode <nodename>? The nodename should be the output of the hostname command.
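
Since the node is already powered off, hostname can't be run on it anymore. A small sketch of two places on the remaining node that may still record the name, assuming a standard Proxmox VE setup:

Bash:
# node names and IP addresses as known by the cluster filesystem on proxmox2
cat /etc/pve/.members

# or resolve the IP via the local hosts file / DNS, if an entry exists
getent hosts 192.168.1.4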
 
I might be a little late to the party, however:

I ran into the same issue, but figured it out. You need to use the name listed in `/etc/corosync/corosync.conf`.

Code:
root@debian10-test:~# cat /etc/corosync/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: debian10-test
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.1.211
  }
  node {
    name: server-01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.1.6
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: blahblah
  config_version: 2
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
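
Applied to the original poster's setup, a rough sketch (assuming the entry for 192.168.1.4 in that file has a name: line a few lines above ring0_addr, which I can't verify since the file was never posted):

Code:
# show the nodelist entry for the dead node, including its name: line
grep -B3 'ring0_addr: 192.168.1.4' /etc/corosync/corosync.conf

# then delete the node using that name: value
pvecm delnode <name-from-that-entry>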

Code:
root@server-01:~# pvecm delnode "debian10-test"
Killing node 2