Can't manage node

Syahwanius

New Member
Feb 15, 2021
Hello,
I want to add a node from another Proxmox host to my cluster (join node).
Condition: the nodes are on different subnets.

Code:
detected the following error(s):
* authentication key '/etc/corosync/authkey' already exists
* cluster config '/etc/pve/corosync.conf' already exists
* corosync is already running, is this node already in a cluster?!

TASK ERROR: Check if node may join a cluster failed!

I already checked corosync.conf on the second node (already edited), and the config was okay after restarting corosync. The problem is on my primary node: I also edited corosync.conf there, but after restarting corosync the file changed back to what it was before.
Any idea?
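
One way to confirm what that error is reporting (whether the joining node already holds cluster state) is a quick check like this (a sketch; the paths come from the error message above):

Code:
# on the node that is trying to join
systemctl status corosync pve-cluster               # is corosync already running?
ls -l /etc/corosync/authkey /etc/pve/corosync.conf  # do the files from the error exist?
pvecm status                                        # current cluster membership, if any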
 
Which `corosync.conf` do you mean, the cluster-wide `/etc/pve/corosync.conf` or the local `/etc/corosync/corosync.conf`?

Have you tried joining the node to the cluster via the CLI [0]? What is your network configuration?

Can the nodes ping each other? Also, please post the output of `pveversion -v`.

[0] https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_join_node_to_cluster_via_command_line
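
For reference, the CLI join from [0] looks roughly like this (a sketch with placeholder addresses; `--link0` selects the local address corosync should use, which matters when the nodes sit in different subnets):

Code:
# run on the node that should join; 10.10.30.24 is an existing cluster member (placeholder)
pvecm add 10.10.30.24 --link0 10.10.50.6   # --link0: local address of the joining node (placeholder)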
I did; I already tried it manually from both the CLI and the web GUI.

Here is the output of `pveversion -v`:
Code:
root@SVR-28:~# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-3
libpve-guest-common-perl: 3.1-4
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-5
pve-cluster: 6.2-1
pve-container: 3.3-3
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-1
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

And here is my config:
Code:
root@SVR-28:/etc/corosync# cat corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: SVR-21
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.30.21
  }
  node {
    name: SVR-23
    nodeid: 5
    quorum_votes: 1
    ring0_addr: 10.10.30.23
  }
  node {
    name: SVR-24
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.30.24
  }
  node {
    name: SVR-26
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 10.10.50.6
  }
  node {
    name: SVR-28
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.30.28
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: 
  config_version: 26
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
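
Given the mixed subnets in the nodelist above (10.10.30.x and 10.10.50.x), it may also help to check what corosync itself sees on each node (a sketch using the stock tools):

Code:
corosync-cfgtool -s   # local link/ring status per node
pvecm status          # quorum and membership as PVE sees it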

(Attachments: pvecmadd error.png, statusnode.png)
Note:
SVR-28 has IP 10.10.30.28
SVR-26 has IP 10.10.50.6
They are on different subnets, and the firewall is already turned off.
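
If the file that kept reverting on the primary node was `/etc/corosync/corosync.conf`, that is expected: on a clustered node the authoritative copy lives on the cluster filesystem at `/etc/pve/corosync.conf`, and pmxcfs rewrites the local file from it. The documented way to change it is roughly this (a sketch; remember to increment config_version inside the file):

Code:
# edit a copy of the cluster-wide config, then move it into place atomically
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
nano /etc/pve/corosync.conf.new   # make the change and bump config_version
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf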
 