Dead Node - Removed, GUI Broken

vispa

Well-Known Member
Feb 20, 2016
Hi,

A node in my cluster crashed today. I have removed the dead node, however there seems to be some issue with the GUI since then. My CTs are running and I have quorum, but no nodes are shown in the web GUI.

I can write to /etc/pve fine.

I have noticed that the output from ha-manager status is:

Code:
ha-manager status
malformed JSON string, neither array, object, number, string or atom, at character offset 86 (before "jwA34E0RmLYzILfjmeTb...") at /usr/share/perl5/PVE/HA/Config.pm line 41.

Perhaps this gives an indication of why the GUI isn't working?

Tasks show fine in the GUI. Just no nodes.

Any help would be appreciated.
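For anyone hitting the same error: the message points at /usr/share/perl5/PVE/HA/Config.pm, which parses the HA manager state kept under /etc/pve/ha/. A quick way to confirm whether that state file is actually valid JSON is a small check like the one below (the path /etc/pve/ha/manager_status is an assumption inferred from the error text, not something stated in the thread):

```python
import json

def check_json_file(path):
    """Return (True, None) if the file parses as JSON, else (False, error)."""
    try:
        with open(path) as f:
            json.load(f)
        return (True, None)
    except (json.JSONDecodeError, OSError) as e:
        # JSONDecodeError: file exists but is corrupt; OSError: missing/unreadable
        return (False, str(e))

# Path assumed from the Config.pm error message in the original post.
ok, err = check_json_file("/etc/pve/ha/manager_status")
print("valid JSON" if ok else f"malformed: {err}")
```

If this reports the same "character offset" as the Perl error, the file itself is corrupt rather than the GUI code.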


Code:
pvecm status
Quorum information
------------------
Date:             Sat Dec  9 12:33:42 2017
Quorum provider:  corosync_votequorum
Nodes:            6
Node ID:          0x00000005
Ring ID:          4/441896
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   6
Highest expected: 6
Total votes:      6
Quorum:           4
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000004          1 x.x.x.48
0x00000007          1 x.x.x.49
0x00000001          1 x.x.x.51
0x00000003          1 x.x.x.53
0x00000005          1 x.x.x.54 (local)
0x00000006          1 x.x.x.55

Code:
ceph status
    cluster c49cb41f-41fa-41a8-9dce-aa5e202ed84e
     health HEALTH_WARN
            4 pgs backfill
            1 pgs backfilling
            142 pgs degraded
            1 pgs recovering
            140 pgs recovery_wait
            142 pgs stuck degraded
            146 pgs stuck unclean
            recovery 1198/1092102 objects degraded (0.110%)
            recovery 12244/1092102 objects misplaced (1.121%)
            1 near full osd(s)
     monmap e14: 2 mons at {2=10.10.11.53:6789/0,3=10.10.11.48:6789/0}
            election epoch 522, quorum 0,1 3,2
     osdmap e15339: 13 osds: 13 up, 13 in; 8 remapped pgs
            flags nearfull
      pgmap v19997634: 512 pgs, 2 pools, 1372 GB data, 353 kobjects
            4264 GB used, 4105 GB / 8369 GB avail
            1198/1092102 objects degraded (0.110%)
            12244/1092102 objects misplaced (1.121%)
                 365 active+clean
                 137 active+recovery_wait+degraded
                   4 active+remapped+wait_backfill
                   3 active+recovery_wait+degraded+remapped
                   1 active+recovering+degraded
                   1 active+degraded+remapped+backfilling
                   1 active+clean+scrubbing+deep
  client io 43926 B/s rd, 237 kB/s wr, 81 op/s
 
Hi,

Code:
root@cloud1:~# cat /etc/pve/.members
{
"nodename": "cloud1",
"version": 22,
"cluster": { "name": "vispa", "version": 15, "nodes": 6, "quorate": 1 },
"nodelist": {
  "storage2": { "id": 7, "online": 1, "ip": "x.x.x.49"},
  "cloud1": { "id": 1, "online": 1, "ip": "x.x.x.51"},
  "cloud3": { "id": 3, "online": 1, "ip": "x.x.x.53"},
  "storage1": { "id": 4, "online": 1, "ip": "x.x.x.48"},
  "cloud4": { "id": 5, "online": 1, "ip": "x.x.x.54"},
  "cloud5": { "id": 6, "online": 1, "ip": "x.x.x.55"}
  }
}

Code:
{"master_node":"storage1","node_status":{"cloud3":"online","cloud4":"online","cloud5":jwA34E0RmLYzILfjmeTbjhK8RX/kqKp00c4Z/6JANKGFjnwEiVhVOHWmhnBIxoXSvf7H7ofwyNw6VGkoFSXKSVQlXrX9svE+YO8RRp/aQ7TE62h+quEi90ynNs1eMsJMDTJKte7QOo7aqKrgwZKU9oNy1OC2TGfb4IKc+5MBS7FkCnZdE4DBkCsUSyEMRa6HIKL0k0Li/NQ5jqPHoNgVskFy63/CNscUCTtGgLt3eCciU0f7zsNadXoCsKbBJqf7Q3CuzjRAq8k8ZNCGabeaFY0r7jwObgXexaoVLEKV5X5XsqD2T09gvTmTATZLTWkWPf root@cloud1
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDSDC5Dw/69TIFhe/pkQB6+p+m0uDdynV3ZsBJjayrzyjzkK5IPeeeFmJhwFy3kDBtYoc5eg2eoKOnEWKxTPvDzEhS30tT01KT3HvhLwMIlVPPyrvkp8CXQAvYeq/zBGgd2O0EbiXRsOHhXsbuBfZ75iMDDWn7ZmqHxMjSZxUKJPp8C2b63wFvyeprEO2op+qTCP3OC/RSGcSqyAjWn5nlvl+2czZ3psKVFZxzb6n4zJ6pqOmLUs/XF2u7xZGuSys/ct4Rrh0MYQy7ndpQoyFz8XRlfBUL8O9GA63PSqlakzDthJDJco+UejlLIH8mfdr8K9Z4D+O9PiFf85vJn8U+1 root@cloud2
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDBOZjJW4EdMsFmx/tQ7u2na+WoYMuXJd8yND/k1lF4UWy5n/tOdtC1bc+WyWzrv9aQ3HacaBYfPfjX7XQfPC2lgo9NvvWY3QOOlik7gIwyWoZML4Qd/SDjS3uSWDg0I9A7D0FDBsgXqQlqLAtykjkjT5oiV9F/Okk6AsgsY/+pkIM5Im/OwjWXlcHu6kRr8jtzdZlewwi3zUr+81ppBqaxcpS99oBoU8Fu4JGc0fSuH7EtUKwu4g92V0JqpXwMroot@cloud1:~#
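That dump shows the node_status map cut off mid-value and overwritten with what looks like SSH key material, which matches the parse error at offset 86. A recovery approach sometimes suggested for a corrupted HA state file is to back it up and reset it to an empty JSON object so the CRM can rebuild it. This is a hedged sketch, not an official procedure: the path is assumed from the error message, and you would want the HA services (pve-ha-crm / pve-ha-lrm) stopped and no HA-managed resources in flight before touching it.

```python
import json
import shutil

def reset_if_corrupt(path, backup_suffix=".bak"):
    """Back up and reset a JSON state file to '{}' if it no longer parses."""
    try:
        with open(path) as f:
            json.load(f)
        return False  # file parses fine, leave it alone
    except json.JSONDecodeError:
        shutil.copy2(path, path + backup_suffix)  # keep the corrupt copy for inspection
        with open(path, "w") as f:
            f.write("{}\n")
        return True

# e.g. reset_if_corrupt("/etc/pve/ha/manager_status")  # path assumed, stop HA services first
```

Note the function deliberately raises if the file is missing entirely, since in that case there is nothing to reset and something else is wrong.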
 
Hi,

Any update on this? I still have the issue and don't know how to resolve it.