[SOLVED] Reunite fragmented cluster after network outage

dystopist

New Member
Mar 2, 2023
Owing to communication problems and a surprise revamp of our network, we suffered from network loops that gradually overwhelmed all real traffic, to the point that the nodes of our Proxmox VE 7.1 cluster could no longer communicate with each other or with the shared NFS storage.

Now that the network has fully recovered, all six nodes can reach each other via SSH and have retained their configuration, but they no longer form a cluster.

On four of them, it looks like this:

Code:
% pvecm status
Cluster information
-------------------
Name:             ITP
Config Version:   6
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Thu Mar  2 12:32:05 2023
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1.3c62
Quorate:          No

Votequorum information
----------------------
Expected votes:   6
Highest expected: 6
Total votes:      1
Quorum:           4 Activity blocked
Flags:

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 XXX.XXX.XXX.XXX (local)

The other two are still bound together:

Code:
pvecm status
Cluster information
-------------------
Name:             ITP
Config Version:   6
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Thu Mar  2 12:39:18 2023
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000006
Ring ID:          3.3c76
Quorate:          No

Votequorum information
----------------------
Expected votes:   6
Highest expected: 6
Total votes:      2
Quorum:           4 Activity blocked
Flags:

Membership information
----------------------
    Nodeid      Votes Name
0x00000003          1 XXX.XXX.XXX.XXX
0x00000006          1 XXX.XXX.XXX.XXX (local)

So our cluster now looks like this:


Node A
=====
Node B
=====
Node C
=====
Node D
=====
Node E + Node F

Since none of these fragments is quorate, nothing can be done without forcing it. Luckily, all of the VMs are still running, including the DHCP and LDAP servers.


Is there any way to safely bring the cluster back together without a total overhaul?

Please let me know if you require any more information.
My experience with Proxmox is quite limited.
 
Please post
- the output of "pveversion -v" from each node
- the contents of /etc/pve/corosync.conf
- the output of "journalctl -u corosync -u pve-cluster --since '-1hour'" from each node

Please use [code] tags when posting long output/contents.
 
Thank you for your help.
Here are all the outputs (IP addresses and domain names redacted):

Node "rubin":

Code:
root@rubin % pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-4-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-12
pve-kernel-5.13: 7.1-7
pve-kernel-5.4: 6.4-13
pve-kernel-5.3: 6.1-6
pve-kernel-5.13.19-4-pve: 5.13.19-9
pve-kernel-5.4.166-1-pve: 5.4.166-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.13-2-pve: 5.3.13-2
pve-kernel-4.4.134-1-pve: 4.4.134-112
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: residual config
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-3
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-6
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-5
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1

Code:
root@rubin % cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: backup
    nodeid: 4
    quorum_votes: 1
    ring0_addr: X.X.X.10
  }
  node {
    name: itp11
    nodeid: 5
    quorum_votes: 1
    ring0_addr: X.X.X.11
  }
  node {
    name: itp12
    nodeid: 6
    quorum_votes: 1
    ring0_addr: X.X.X.12
  }
  node {
    name: monitor
    nodeid: 3
    quorum_votes: 1
    ring0_addr: X.X.X.9
  }
  node {
    name: rubin
    nodeid: 1
    quorum_votes: 1
    ring0_addr: X.X.X.1
  }
  node {
    name: saphir
    nodeid: 2
    quorum_votes: 1
    ring0_addr: X.X.X.2
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: ITP
  config_version: 6
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}

Journal file attached as rubin-journal.log.

Node "saphir":

Code:
root@saphir % pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-4-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-12
pve-kernel-5.13: 7.1-7
pve-kernel-5.4: 6.4-13
pve-kernel-5.13.19-4-pve: 5.13.19-9
pve-kernel-5.4.166-1-pve: 5.4.166-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-3
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-6
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-5
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1

Code:
root@saphir % cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: backup
    nodeid: 4
    quorum_votes: 1
    ring0_addr: X.X.X.10
  }
  node {
    name: itp11
    nodeid: 5
    quorum_votes: 1
    ring0_addr: X.X.X.11
  }
  node {
    name: itp12
    nodeid: 6
    quorum_votes: 1
    ring0_addr: X.X.X.12
  }
  node {
    name: monitor
    nodeid: 3
    quorum_votes: 1
    ring0_addr: X.X.X.9
  }
  node {
    name: rubin
    nodeid: 1
    quorum_votes: 1
    ring0_addr: X.X.X.1
  }
  node {
    name: saphir
    nodeid: 2
    quorum_votes: 1
    ring0_addr: X.X.X.2
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: ITP
  config_version: 6
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}

Journal file attached as saphir-journal.log.


Node "monitor":

Code:
root@monitor:~# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-4-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-12
pve-kernel-5.13: 7.1-7
pve-kernel-5.13.19-4-pve: 5.13.19-9
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-3
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-6
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-5
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1

Code:
root@monitor:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: backup
    nodeid: 4
    quorum_votes: 1
    ring0_addr: X.X.X.10
  }
  node {
    name: itp11
    nodeid: 5
    quorum_votes: 1
    ring0_addr: X.X.X.11
  }
  node {
    name: itp12
    nodeid: 6
    quorum_votes: 1
    ring0_addr: X.X.X.12
  }
  node {
    name: monitor
    nodeid: 3
    quorum_votes: 1
    ring0_addr: X.X.X.9
  }
  node {
    name: rubin
    nodeid: 1
    quorum_votes: 1
    ring0_addr: X.X.X.1
  }
  node {
    name: saphir
    nodeid: 2
    quorum_votes: 1
    ring0_addr: X.X.X.2
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: ITP
  config_version: 6
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}

Journal file attached as monitor-journal.log.


Node "backup":

Code:
root@backup:~# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-4-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-12
pve-kernel-5.13: 7.1-7
pve-kernel-5.13.19-4-pve: 5.13.19-9
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-3
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-6
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-5
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1

Code:
root@backup:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: backup
    nodeid: 4
    quorum_votes: 1
    ring0_addr: X.X.X.10
  }
  node {
    name: itp11
    nodeid: 5
    quorum_votes: 1
    ring0_addr: X.X.X.11
  }
  node {
    name: itp12
    nodeid: 6
    quorum_votes: 1
    ring0_addr: X.X.X.12
  }
  node {
    name: monitor
    nodeid: 3
    quorum_votes: 1
    ring0_addr: X.X.X.9
  }
  node {
    name: rubin
    nodeid: 1
    quorum_votes: 1
    ring0_addr: X.X.X.1
  }
  node {
    name: saphir
    nodeid: 2
    quorum_votes: 1
    ring0_addr: X.X.X.2
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: ITP
  config_version: 6
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}


Journal file attached as backup-journal.log.

Node "itp11":

Code:
root@itp11:~# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-7 (running version: 7.1-7/df5740ad)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

Code:
root@itp11:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: backup
    nodeid: 4
    quorum_votes: 1
    ring0_addr: X.X.X.10
  }
  node {
    name: itp11
    nodeid: 5
    quorum_votes: 1
    ring0_addr: X.X.X.11
  }
  node {
    name: itp12
    nodeid: 6
    quorum_votes: 1
    ring0_addr: X.X.X.12
  }
  node {
    name: monitor
    nodeid: 3
    quorum_votes: 1
    ring0_addr: X.X.X.9
  }
  node {
    name: rubin
    nodeid: 1
    quorum_votes: 1
    ring0_addr: X.X.X.1
  }
  node {
    name: saphir
    nodeid: 2
    quorum_votes: 1
    ring0_addr: X.X.X.2
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: ITP
  config_version: 6
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}

Journal file attached as itp11-journal.log.


Node "itp12":

Code:
root@itp12:~# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-7 (running version: 7.1-7/df5740ad)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

Code:
root@itp12:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: backup
    nodeid: 4
    quorum_votes: 1
    ring0_addr: X.X.X.10
  }
  node {
    name: itp11
    nodeid: 5
    quorum_votes: 1
    ring0_addr: X.X.X.11
  }
  node {
    name: itp12
    nodeid: 6
    quorum_votes: 1
    ring0_addr: X.X.X.12
  }
  node {
    name: monitor
    nodeid: 3
    quorum_votes: 1
    ring0_addr: X.X.X.9
  }
  node {
    name: rubin
    nodeid: 1
    quorum_votes: 1
    ring0_addr: X.X.X.1
  }
  node {
    name: saphir
    nodeid: 2
    quorum_votes: 1
    ring0_addr: X.X.X.2
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: ITP
  config_version: 6
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}

Journal file attached as itp12-journal.log.
 


Okay, I would do the following on each node:

Code:
systemctl stop pve-cluster
systemctl restart corosync

Then check with corosync-quorumtool -s on each node that quorum was established. If it was, start pve-cluster again: systemctl start pve-cluster
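
If it is more convenient to drive this from a single machine, the same sequence can be scripted over SSH. A minimal sketch, assuming passwordless root SSH to all nodes and using the node names from your corosync.conf:

Code:
# minimal sketch -- node names taken from corosync.conf, adjust as needed
NODES="rubin saphir monitor backup itp11 itp12"

# stop pmxcfs and restart corosync on every node
for n in $NODES; do
    ssh root@$n "systemctl stop pve-cluster && systemctl restart corosync"
done

# check membership/quorum on every node before starting pve-cluster again
for n in $NODES; do
    echo "== $n =="
    ssh root@$n "corosync-quorumtool -s | grep -E 'Quorate|Total votes'"
done

# only once all nodes report 'Quorate: Yes':
# for n in $NODES; do ssh root@$n "systemctl start pve-cluster"; done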

I would strongly recommend upgrading to the current 7.x version afterwards!
 
Just to be safe: will this affect the uptime of any VM?
 
Unless you have HA enabled, already-running guests should continue to run. I assumed you don't have HA enabled, since, based on the logs, that would already have caused all your nodes to fence themselves.
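
If you want to verify that before restarting anything, the HA manager status can be queried on any node; a quick check using standard PVE tooling:

Code:
# should not list any 'service ...' entries if HA is not in use
ha-manager status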
 
I tried that, but unfortunately nothing has changed.
 
Then please post the full output of journalctl -u corosync --since "time when corosync was restarted" and the output of corosync-quorumtool -s, both on all nodes!
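
For example, something along these lines on each node (the timestamp is only a placeholder, substitute the time corosync was actually restarted), so each log can be attached as a separate file:

Code:
# placeholder timestamp -- use the actual corosync restart time
journalctl -u corosync --since "2023-03-03 11:00" > corosync-$(hostname -s).log
corosync-quorumtool -s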
 
itp12:

Code:
root@itp12:~# corosync-quorumtool -s
Quorum information
------------------
Date:             Fri Mar  3 11:38:13 2023
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          6
Ring ID:          3.3e82
Quorate:          No

Votequorum information
----------------------
Expected votes:   6
Highest expected: 6
Total votes:      2
Quorum:           4 Activity blocked
Flags:           

Membership information
----------------------
    Nodeid      Votes Name
         3          1 monitor
         6          1 itp12 (local)

itp11:

Code:
root@itp11:~# corosync-quorumtool -s
Quorum information
------------------
Date:             Fri Mar  3 11:47:54 2023
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          5
Ring ID:          5.16a9
Quorate:          No

Votequorum information
----------------------
Expected votes:   6
Highest expected: 6
Total votes:      1
Quorum:           4 Activity blocked
Flags:           

Membership information
----------------------
    Nodeid      Votes Name
         5          1 itp11 (local)

monitor:


Code:
root@monitor:~# corosync-quorumtool -s
Quorum information
------------------
Date:             Fri Mar  3 11:49:04 2023
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          3
Ring ID:          3.3e82
Quorate:          No

Votequorum information
----------------------
Expected votes:   6
Highest expected: 6
Total votes:      2
Quorum:           4 Activity blocked
Flags:           

Membership information
----------------------
    Nodeid      Votes Name
         3          1 monitor (local)
         6          1 itp12

backup:

Code:
root@backup:~# corosync-quorumtool -s
Quorum information
------------------
Date:             Fri Mar  3 11:51:16 2023
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          4
Ring ID:          4.169b
Quorate:          No

Votequorum information
----------------------
Expected votes:   6
Highest expected: 6
Total votes:      1
Quorum:           4 Activity blocked
Flags:           

Membership information
----------------------
    Nodeid      Votes Name
         4          1 backup (local)

rubin:

Code:
root@rubin % corosync-quorumtool -s
Quorum information
------------------
Date:             Fri Mar  3 11:53:50 2023
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          1
Ring ID:          1.3e4a
Quorate:          No

Votequorum information
----------------------
Expected votes:   6
Highest expected: 6
Total votes:      1
Quorum:           4 Activity blocked
Flags:           

Membership information
----------------------
    Nodeid      Votes Name
         1          1 rubin (local)

saphir:

Code:
root@saphir % corosync-quorumtool -s
Quorum information
------------------
Date:             Fri Mar  3 11:58:26 2023
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          2
Ring ID:          2.3d15
Quorate:          No

Votequorum information
----------------------
Expected votes:   6
Highest expected: 6
Total votes:      1
Quorum:           4 Activity blocked
Flags:           

Membership information
----------------------
    Nodeid      Votes Name
         2          1 saphir (local)

The journals are attached as an archive.
 


Yes, something is definitely not right with your network (even the loopback is failing, which should basically never happen).

I'd try the following next:
- stop corosync again on all nodes
- start corosync on one node
- start corosync on second node, check if they are able to establish quorum and stay stable
-- if yes, continue
-- if no, stop corosync on second node and continue
- start corosync on third node, check, proceed like with second node
- rinse and repeat

Maybe you can isolate a problematic node that way; see the sketch below.
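
A rough sketch of that bisection, assuming the commands are run via SSH from a single workstation (the node order is arbitrary):

Code:
# 1) stop corosync everywhere
for n in rubin saphir monitor backup itp11 itp12; do
    ssh root@$n "systemctl stop corosync"
done

# 2) start it on the first node, then add one node at a time
ssh root@rubin  "systemctl start corosync"
ssh root@saphir "systemctl start corosync"
ssh root@saphir "corosync-quorumtool -s"   # do rubin and saphir see each other?
# membership stable with both nodes listed -> keep saphir running, add the next node
# membership unstable or only one node     -> stop corosync on saphir again, try the next node
ssh root@monitor "systemctl start corosync"
# ...and so on, until the node that breaks membership is identified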

Edit: one more thing: the logs on node saphir complain about node 6 not supporting an MTU of 1500 - is there a stray VLAN somewhere?

Code:
4078:Mar 03 06:26:53 saphir.XXX corosync[2475647]:   [KNET  ] pmtud: possible MTU misconfiguration detected. kernel is reporting MTU: 1500 bytes for host 6 link 0 but the other node is not acknowledging packets of this size.
4079:Mar 03 06:26:53 saphir.XXX corosync[2475647]:   [KNET  ] pmtud: This can be caused by this node interface MTU too big or a network device that does not support or has been misconfigured to manage MTU of this size, or packet loss. knet will continue to run but performances might be affected.
4080:Mar 03 06:26:55 saphir.XXX corosync[2475647]:   [KNET  ] pmtud: PMTUD link change for host: 6 link: 0 from 1397 to 1381
4081:Mar 03 06:27:45 saphir.XXX corosync[2475647]:   [KNET  ] pmtud: Global data MTU changed to: 1381
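
One way to check for an MTU problem on that path is to send non-fragmentable pings of a known size between the affected nodes. A sketch using standard iputils ping (1472 bytes of ICMP payload plus 28 bytes of headers corresponds to a 1500-byte MTU; 1353 bytes corresponds to the 1381-byte MTU knet negotiated above):

Code:
# run on saphir towards node 6 (itp12); X.X.X.12 is the redacted ring0 address
# -M do = do not fragment, -s = ICMP payload size, -c = packet count
ping -M do -s 1472 -c 3 X.X.X.12    # should succeed on a clean 1500-byte path
ping -M do -s 1353 -c 3 X.X.X.12    # matches the 1381-byte MTU reported above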
 