We have a cluster of 10 PVE nodes, all running pve-manager/8.3.3 (running kernel: 6.8.12-8-pve)
We noticed that all nodes in the cluster (except the one I am using to access the web UI) are showing a grey question mark.
We are only able to test and reboot two of the 10 nodes (since they are empty), and even after a few reboots they still do not show green ticks.
Does anyone have a suggestion on where we can look to work out why this is happening?
Some details:
- Each node is connected to the cluster via link0 (a dedicated cluster-only interface/VLAN)
- We have a PBS shared storage server (working fine)
- We have an NFS shared storage server (working fine and mounted correctly)
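For reference, the kind of basic check we can run on one of the affected nodes looks like this (standard PVE service names; pvestatd is the daemon that feeds the node/guest status shown in the web UI, so a hung pvestatd is a common cause of grey question marks):

# run on a node that shows the grey question mark
systemctl status pvestatd pve-cluster corosync
journalctl -u pvestatd -u pve-cluster --since "1 hour ago"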
Output of pveversion -v:

proxmox-ve: 8.3.0 (running kernel: 6.8.12-4-pve)
pve-manager: 8.3.3 (running version: 8.3.3/f157a38b211595d6)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-8
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-7-pve-signed: 6.8.12-7
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.2.0
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.2-1
proxmox-backup-file-restore: 3.3.2-2
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.4
pve-cluster: 8.0.10
pve-container: 5.2.3
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-3
pve-ha-manager: 4.0.6
pve-i18n: 3.3.3
pve-qemu-kvm: 9.0.2-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.7
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve1
Output of pvecm status:

Cluster information
-------------------
Name: NZ-PX-CLUSTER
Config Version: 45
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Sun Feb 16 21:05:33 2025
Quorum provider: corosync_votequorum
Nodes: 10
Node ID: 0x00000003
Ring ID: 1.29d8e
Quorate: Yes
Votequorum information
----------------------
Expected votes: 10
Highest expected: 10
Total votes: 10
Quorum: 6
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.69.69.19
0x00000002 1 10.69.69.22
0x00000003 1 10.69.69.23 (local)
0x00000004 1 10.69.69.18
0x00000005 1 10.69.69.25
0x00000006 1 10.69.69.17
0x00000007 1 10.69.69.20
0x00000008 1 10.69.69.24
0x00000009 1 10.69.69.26
0x0000000a 1 10.69.69.27
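Since corosync reports the cluster as quorate with all 10 members, we also want to rule out the dedicated link0 network itself. A quick way to look at the knet link state (standard corosync tooling, run on any member node) would be something like:

corosync-cfgtool -s    # local ring/link status
corosync-cfgtool -n    # per-node knet link state (corosync 3.x)
journalctl -u corosync --since "1 hour ago"    # retransmit / link down-up messages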
Our corosync.conf:

logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: node4
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 10.69.69.18
  }
  node {
    name: node3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.69.69.23
  }
  node {
    name: node5
    nodeid: 5
    quorum_votes: 1
    ring0_addr: 10.69.69.25
  }
  node {
    name: node9
    nodeid: 9
    quorum_votes: 1
    ring0_addr: 10.69.69.26
  }
  node {
    name: node8
    nodeid: 8
    quorum_votes: 1
    ring0_addr: 10.69.69.24
  }
  node {
    name: node10
    nodeid: 10
    quorum_votes: 1
    ring0_addr: 10.69.69.27
  }
  node {
    name: node6
    nodeid: 6
    quorum_votes: 1
    ring0_addr: 10.69.69.17
  }
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.69.69.19
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.69.69.22
  }
  node {
    name: node7
    nodeid: 7
    quorum_votes: 1
    ring0_addr: 10.69.69.20
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: PX-CLUSTER
  config_version: 45
  interface {
    bindnetaddr: 10.69.69.0
    ringnumber: 0
  }
  ip_version: ipv4
  link_mode: passive
  secauth: on
  version: 2
}
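Given that full reboots of the two empty nodes did not bring the green ticks back, a lighter-weight test we can still try on an affected node (assuming the problem is on the status-reporting side rather than in corosync) would be restarting just the status daemon and, if needed, the web proxy:

systemctl restart pvestatd
systemctl restart pveproxy    # only if the UI still shows the question mark afterwards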