All nodes auto reboot after adding one node to cluster (Proxmox 6.1)

sanshi_xt

Member
Apr 24, 2020
I have 4 nodes running Proxmox 6.1. After adding a new node to the cluster via the web GUI, all 4 existing nodes rebooted automatically. The cluster is OK again after the reboot, but I want to find out what caused the reboot. Please help me, thanks!
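My guess is that the reboot was the HA watchdog fencing the nodes when corosync briefly lost quorum while the new node was joining, but I have not confirmed this. If that guess is right, something like the following should show it in the logs (the time window and the exact services to query are just my assumption of where to look):

Bash:
# messages from corosync, the HA services and the watchdog around the time of the join
journalctl -u corosync -u pve-ha-lrm -u pve-ha-crm -u watchdog-mux --since "1 hour ago"

# HA only fences nodes when HA resources are configured, so check whether HA is active
ha-manager status

Below are my pveversion -v output, corosync.conf and network configuration.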


Bash:
root@lk2001d-p009001:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.10-1-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph: 14.2.8-pve1
ceph-fuse: 14.2.8-pve1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-2
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
Bash:
root@lk2001d-p009001:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: lk1707d-p009005
    nodeid: 5
    quorum_votes: 1
    ring0_addr: 10.216.9.5
    ring1_addr: 192.168.9.5
  }
  node {
    name: lk1804d-p009003
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.216.9.3
    ring1_addr: 192.168.9.3
  }
  node {
    name: lk1804d-p009004
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 10.216.9.4
    ring1_addr: 192.168.9.4
  }
  node {
    name: lk2001d-p009001
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.216.9.1
    ring1_addr: 192.168.9.1
  }
  node {
    name: lk2001d-p009002
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.216.9.2
    ring1_addr: 192.168.9.2
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: pve-lk-g11
  config_version: 5
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
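After the reboot I can at least check whether both knet links came back up on every node; something like this should show it (the exact output format depends on the corosync version, so treat this as a rough sketch):

Bash:
# quorum and membership as seen by this node
pvecm status

# per-link status of both corosync rings (link 0 = 10.216.9.x, link 1 = 192.168.9.x)
corosync-cfgtool -s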
Bash:
root@lk2001d-p009001:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-miimon 100
    bond-mode active-backup

auto bond1
iface bond1 inet manual
    bond-slaves eth2 eth3
    bond-miimon 100
    bond-mode active-backup

auto vmbr0
iface vmbr0 inet static
    address 10.216.9.1
    netmask 255.255.254.0
    gateway 10.216.8.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address 192.168.9.1
    netmask 255.255.254.0
    bridge_ports bond1
    bridge_stp off
    bridge_fd 0
 
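Both corosync links run over active-backup bonds, so I assume it also makes sense to check the bond state for a slave failover around the time the node joined, for example:

Bash:
# active slave and MII status of the bond behind vmbr0 (corosync link 0)
cat /proc/net/bonding/bond0

# active slave and MII status of the bond behind vmbr1 (corosync link 1)
cat /proc/net/bonding/bond1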

Attachments

  • syslog-9.1-cut.txt (10.2 KB)
  • syslog-9.2-cut.txt (8.2 KB)
  • syslog-9.3-cut.txt (15.6 KB)
  • syslog-9.4-cut.txt (11.5 KB)
  • syslog-9.5-cut.txt (48.4 KB)
