Hello,
I have a cluster with 3 nodes that has been running without any problems.
However, I have now completely rebuilt one node (the reason was the conversion from LVM to ZFS).
Before, it was:
Node 1 - pve
Node 2 - pve2
Node 3 - pve3
Now I have reinstalled the first node, which worked so far, but I joined it to the cluster as pve1 instead of pve.
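For context, I joined the freshly reinstalled node roughly like this (typing from memory, so treat the exact invocation as approximate; 192.168.178.29 is pve2):
Code:
# run on the freshly reinstalled node, now named pve1,
# pointing at an existing cluster member
pvecm add 192.168.178.29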
And now I have the following picture:
Code:
# pvesh get /nodes
┌──────┬─────────┬───────┬───────┬────────┬───────────┬───────────┬─────────────────┬────────────┐
│ node │ status  │   cpu │ level │ maxcpu │    maxmem │       mem │ ssl_fingerprint │ uptime     │
╞══════╪═════════╪═══════╪═══════╪════════╪═══════════╪═══════════╪═════════════════╪════════════╡
│ pve  │ offline │       │       │        │           │           │ :__:__:__:__:80 │            │
├──────┼─────────┼───────┼───────┼────────┼───────────┼───────────┼─────────────────┼────────────┤
│ pve1 │ online  │ 0.60% │ c     │      8 │ 31.28 GiB │  1.91 GiB │ :__:__:__:__:77 │ 1h 9s      │
├──────┼─────────┼───────┼───────┼────────┼───────────┼───────────┼─────────────────┼────────────┤
│ pve2 │ online  │ 3.14% │       │      4 │ 15.52 GiB │  2.31 GiB │ :__:__:__:__:40 │ 4h 45m 55s │
├──────┼─────────┼───────┼───────┼────────┼───────────┼───────────┼─────────────────┼────────────┤
│ pve3 │ online  │ 4.76% │       │      8 │ 31.27 GiB │ 14.61 GiB │ :__:__:__:__:A2 │ 4h 6m 8s   │
└──────┴─────────┴───────┴───────┴────────┴───────────┴───────────┴─────────────────┴────────────┘
# pvecm status
Cluster information
-------------------
Name:             pve-cluster
Config Version:   6
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Tue Dec 28 13:50:12 2021
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000004
Ring ID:          2.c18
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            2Node Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000002          1 192.168.178.29
0x00000003          1 192.168.178.21
0x00000004          1 192.168.178.28 (local)
# pvecm nodes
Membership information
----------------------
    Nodeid      Votes Name
         2          1 pve2
         3          1 pve3
         4          1 pve1 (local)
root@pve1:/etc/pve/nodes# ls -l
total 0
drwxr-xr-x 2 root www-data 0 Mar 27 2021 pve
drwxr-xr-x 2 root www-data 0 Dec 28 12:09 pve1
drwxr-xr-x 2 root www-data 0 Apr 5 2021 pve2
drwxr-xr-x 2 root www-data 0 Dec 27 10:22 pve3
The problem is that the old, deleted node pve is still shown (also in the web GUI).
I am not sure what I can do now to clean up this mess.
Should I just delete the /etc/pve/nodes/pve directory?
And what about /etc/pve/corosync.conf? It still contains the old node as well:
Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.178.28
  }
  node {
    name: pve1
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 192.168.178.28
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.178.29
  }
  node {
    name: pve3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.178.21
  }
}

quorum {
  expected_votes: 1
  provider: corosync_votequorum
  two_node: 1
  wait_for_all: 0
}

totem {
  cluster_name: pve-cluster
  config_version: 6
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
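For reference, this is roughly the cleanup I have in mind after reading the docs; the exact steps are my own assumption, so corrections are welcome:
Code:
# on one of the remaining nodes: remove the stale node
# from the cluster configuration
pvecm delnode pve

# then remove the leftover node directory so the old node
# disappears from the web GUI
rm -r /etc/pve/nodes/pve

# note: if corosync.conf ever has to be edited by hand instead,
# config_version in the totem section must be incremented
Would that be the correct way, or am I missing something?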
Thanks for your time