Hi,
I stumbled upon a problem I have never had before and could not find any useful information about. I currently have a 6-node PVE 3.4 cluster (previously 7 nodes). One machine got fenced due to a stuck ZFS kernel thread, and since then half of my cluster is shown as offline (red machine icon). The VMs are still running, and I can start, stop, and migrate them via the command line, but the GUI is not working as expected. The GUI still updates hardware information such as used RAM and CPU, but it does not update the RRD graphs; these have been blank since the other node was fenced.
There are no obvious entries in the log files I checked; it seems only the GUI is affected.
clustat shows a normal cluster:
Code:
Cluster Status for cluster @ Tue Nov  3 09:11:20 2015
Member Status: Quorate
 Member Name                                                     ID   Status
 ------ ----                                                     ---- ------
 proxmox1                                                            1 Online, rgmanager
 proxmox2                                                            2 Online, rgmanager
 proxmox3                                                            3 Online, Local, rgmanager
 apu-01                                                              4 Online, rgmanager
 apu-02                                                              5 Online, rgmanager
 proxmox4                                                            7 Online, rgmanager
and also pvecm status:
Code:
Version: 6.2.0
Config Version: 62
Cluster Name: cluster
Cluster Id: 13364
Cluster Member: Yes
Cluster Generation: 2988
Membership state: Cluster-Member
Nodes: 6
Expected votes: 6
Total votes: 6
Node votes: 1
Quorum: 4  
Active subsystems: 7
Flags: 
Ports Bound: 0 11 177  
Node name: proxmox3
Node ID: 3
Multicast addresses: 239.192.52.104 
Node addresses: 10.192.0.243
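Regarding the blank graphs: one thing that can be checked is whether the RRD files behind them are still being written. This is only a sketch; it assumes the usual PVE 3.x path under /var/lib/rrdcached, and it prints a note instead of failing when run on a machine that is not a PVE node.

```shell
# Sketch: if pvestatd stopped feeding rrdcached after the fencing, the
# per-node RRD files stop being touched, and a stale mtime shows that.
# Path assumed from PVE 3.x defaults; adjust if your setup differs.
RRD_DIR=/var/lib/rrdcached/db/pve2-node
if [ -d "$RRD_DIR" ]; then
  # List the RRD files with their last-modified times.
  ls -l "$RRD_DIR"
else
  echo "no RRD directory at $RRD_DIR (not a PVE node?)"
fi
```

A fresh mtime here with blank graphs would point at the GUI side rather than at data collection.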
Here is my pveversion -v:
Code:
root@proxmox3 ~ > pveversion -v
proxmox-ve-2.6.32: 3.4-160 (running kernel: 3.10.0-11-pve)
pve-manager: 3.4-9 (running version: 3.4-9/4b51d87a)
pve-kernel-2.6.32-40-pve: 2.6.32-160
pve-kernel-3.10.0-11-pve: 3.10.0-36
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-18
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-11
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
I already tried restarting services such as pve-manager, pveproxy, pvedaemon, and pvestatd, but that did not change anything.
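For completeness, the restarts were done roughly like this (a sketch: on PVE 3.x these are SysV init scripts, and the loop skips any service whose init script is missing, so it does nothing on a non-PVE machine; pvestatd is the daemon that collects the data shown in the graphs):

```shell
# Restart the daemons that feed the web GUI and the status/RRD updates.
for svc in pvestatd pvedaemon pveproxy; do
  if [ -x "/etc/init.d/$svc" ]; then
    service "$svc" restart
  else
    echo "skipping $svc (no init script found; not a PVE node?)"
  fi
done
```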