[SOLVED] old node persists

Mar 27, 2021
Hello,
I have a cluster with 3 nodes that has been running without any problems.
However, I have now completely recreated one node (the reason was the conversion from LVM to ZFS).

Before, it was:
Node 1 - pve
Node 2 - pve2
Node 3 - pve3

Now I have reinstalled the 1st node, which has worked so far, but I have joined it to the cluster as pve1 instead of pve.

And now I have the following picture
Code:
# pvesh get /nodes

┌──────┬─────────┬───────┬───────┬────────┬───────────┬───────────┬─────────────────┬────────────┐
│ node │ status  │   cpu │ level │ maxcpu │    maxmem │       mem │ ssl_fingerprint │     uptime │
╞══════╪═════════╪═══════╪═══════╪════════╪═══════════╪═══════════╪═════════════════╪════════════╡
│ pve  │ offline │       │       │        │           │           │ :__:__:__:__:80 │            │
├──────┼─────────┼───────┼───────┼────────┼───────────┼───────────┼─────────────────┼────────────┤
│ pve1 │ online  │ 0.60% │ c     │      8 │ 31.28 GiB │  1.91 GiB │ :__:__:__:__:77 │      1h 9s │
├──────┼─────────┼───────┼───────┼────────┼───────────┼───────────┼─────────────────┼────────────┤
│ pve2 │ online  │ 3.14% │       │      4 │ 15.52 GiB │  2.31 GiB │ :__:__:__:__:40 │ 4h 45m 55s │
├──────┼─────────┼───────┼───────┼────────┼───────────┼───────────┼─────────────────┼────────────┤
│ pve3 │ online  │ 4.76% │       │      8 │ 31.27 GiB │ 14.61 GiB │ :__:__:__:__:A2 │   4h 6m 8s │
└──────┴─────────┴───────┴───────┴────────┴───────────┴───────────┴─────────────────┴────────────┘

# pvecm status
Cluster information
-------------------
Name:             pve-cluster
Config Version:   6
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Tue Dec 28 13:50:12 2021
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000004
Ring ID:          2.c18
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            2Node Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000002          1 192.168.178.29
0x00000003          1 192.168.178.21
0x00000004          1 192.168.178.28 (local)

# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         2          1 pve2
         3          1 pve3
         4          1 pve1 (local)

root@pve1:/etc/pve/nodes# ls -l
total 0
drwxr-xr-x 2 root www-data 0 Mar 27  2021 pve
drwxr-xr-x 2 root www-data 0 Dec 28 12:09 pve1
drwxr-xr-x 2 root www-data 0 Apr  5  2021 pve2
drwxr-xr-x 2 root www-data 0 Dec 27 10:22 pve3

The problem is that the old, deleted node pve is still shown (also in the web GUI).

I am not sure what I can do now to clean up this mess.

Just delete the /etc/pve/nodes/pve directory?

What about /etc/pve/corosync.conf? It also still contains the old node:

Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.178.28
  }
  node {
    name: pve1
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 192.168.178.28
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.178.29
  }
  node {
    name: pve3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.178.21
  }
}

quorum {
  expected_votes: 1
  provider: corosync_votequorum
  two_node: 1
  wait_for_all: 0
}

totem {
  cluster_name: pve-cluster
  config_version: 6
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}

Thanks for your time
 
Hi,
Just delete the /etc/pve/nodes/pve directory?
Basically yes. As long as there are configuration files left from a node, the API/GUI still needs to map those to their owning node. So after you have ensured that you do not need any configuration file from that node (including left-over VM/CT configs!), you can delete that directory and reload the GUI.
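A minimal sketch of that check and clean-up, assuming the default layout under /etc/pve/nodes/pve and that nothing in there is still needed:
Code:
# look for left-over guest configs of the old node first
ls -l /etc/pve/nodes/pve/qemu-server/ /etc/pve/nodes/pve/lxc/

# only if nothing is needed anymore, drop the whole directory
rm -r /etc/pve/nodes/pve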
 
I overlooked that you actually still have it in the corosync config...
node {
name: pve
nodeid: 1
quorum_votes: 1
ring0_addr: 192.168.178.28
}

because I stopped checking once I read the cluster status output, which would suggest that this is not the case (only 3 highest expected votes); that is a bit weird.
Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: 2 Node Quorate

Maybe you forgot to remove the now bogus node from the cluster config via pvecm delnode pve when recreating the node?
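If that is the case, running it now from any node that is still in the cluster and then re-checking should be enough, roughly like this:
Code:
# remove the stale node entry from the cluster configuration
pvecm delnode pve

# verify that it is gone
pvecm nodes
cat /etc/pve/corosync.conf
That should also bump config_version in /etc/pve/corosync.conf, so the change propagates to the other nodes on its own.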
 
Hi Thomas,

I was sure I had... I did it now and everything is OK.
/etc/pve/corosync.conf now only contains the 3 wanted nodes, pvesh get /nodes is OK and so is the GUI.

Thanks a lot for your help, and sorry for my sloppiness.

Christian