Wrong/inaccurate info in the console

svacaroaia

Member
Oct 4, 2012
Hi,

I just noticed that one of my nodes shows "red" in the console - all guests on it are showing as "off".

However, the node is up and the CLI commands confirm that all services on it are running, it is part of the cluster, and the VMs are up too.

This happens with all browsers (IE, Chrome, Firefox).

Any suggestions?

Node name is blh02-13 - here is the output of some commands run on it:
clustat
Cluster Status for bl02-cluster01 @ Mon Oct 29 09:47:06 2012
Member Status: Quorate


Member Name ID Status
------ ---- ---- ------
blh02-14 1 Online, rgmanager
blh02-13 2 Online, Local, rgmanager
blh02-10 3 Online, rgmanager
blh02-11 4 Online, rgmanager
blh02-12 5 Online, rgmanager


Service Name Owner (Last) State
------- ---- ----- ------ -----
pvevm:104 blh02-13 started
pvevm:302 blh02-13 started
pvevm:304 blh02-13 started
pvevm:306 blh02-13 started
pvevm:307 blh02-13 started
pvevm:308 blh02-13 started
pvevm:309 blh02-13 started
pvevm:310 blh02-13 started
pvevm:311 blh02-13 started
pvevm:312 blh02-13 started
pvevm:314 blh02-13 started
pvevm:315 blh02-13 started
pvevm:316 blh02-13 started
pvevm:317 blh02-13 started
pvevm:318 blh02-13 started


fence_tool -n ls
fence domain
member count 5
victim count 0
victim now 0
master nodeid 1
wait state none
members 1 2 3 4 5
all nodes
nodeid 1 member 1 victim 0 last fence master 2 how agent
nodeid 2 member 1 victim 0 last fence master 0 how none
nodeid 3 member 1 victim 0 last fence master 2 how agent
nodeid 4 member 1 victim 0 last fence master 2 how agent
nodeid 5 member 1 victim 0 last fence master 0 how none


pvecm status
Version: 6.2.0
Config Version: 36
Cluster Name: bl02-cluster01
Cluster Id: 29537
Cluster Member: Yes
Cluster Generation: 6708
Membership state: Cluster-Member
Nodes: 5
Expected votes: 4
Total votes: 5
Node votes: 1
Quorum: 3
Active subsystems: 6
Flags:
Ports Bound: 0 177
Node name: blh02-13
Node ID: 2
Multicast addresses: x.x.x.212
Node addresses: 10.x.x.48


pveversion -v
pve-manager: 2.2-24 (pve-manager/2.2/7f9cfa4c)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-80
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-16-pve: 2.6.32-80
pve-kernel-2.6.32-14-pve: 2.6.32-74
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-1
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-28
qemu-server: 2.0-62
pve-firmware: 1.0-21
libpve-common-perl: 1.0-36
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-34
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.2-7
ksm-control-daemon: 1.1-1
 
Update to the latest packages from today (pve stable repo) and report if the issue is resolved.
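
For reference, a minimal sketch of how that update could look on the affected node, assuming the pve stable repository is already configured in the APT sources. The daemon restarts at the end are only an assumption about what refreshes the status shown in the web interface, not a confirmed fix:

# pull the latest package lists and upgrade from the configured pve repo
apt-get update
apt-get dist-upgrade

# assumption: restarting the daemons that report node/VM status to the GUI
# may clear a stale "red" display without rebooting the node
service pvestatd restart
service pvedaemon restart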
 