Hey,
With the new Proxmox 5 out, I removed a test node from my cluster (v4) (method). This must have gone very badly, and now I'm left with a cluster that is not responding at all. 3 of the 4 nodes don't show a web interface; luckily, 1 does.
I don't really want to save the cluster; my tests on v5 were successful, so I want to back up my containers and move them elsewhere. Sadly, the backup process seems to hang. I started a vzdump on one of the 3 no-interface machines and it took 12+ hours before vzdump even showed anything... but no data is being written at the target location. Is there a way to disable the cluster and let a node do its own thing?
I don't see any obvious errors explaining why this is happening...
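For reference, a commonly used way to let a single node operate without the cluster is to put pmxcfs into local mode so /etc/pve stays writable without quorum. This is a sketch based on the standard Proxmox VE tooling; the container ID 101 and the backup path are placeholders, not values from this setup:

```shell
# Stop the cluster stack on the node you want to run standalone
systemctl stop pve-cluster corosync
# Restart the cluster filesystem in local mode (-l) so /etc/pve is writable without quorum
pmxcfs -l
# A backup should then no longer block on the cluster; CTID and path are examples
vzdump 101 --dumpdir /mnt/backup --mode snapshot
```

This is a temporary escape hatch: restarting pve-cluster normally (`systemctl start pve-cluster`) puts the node back into cluster mode.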
Code:
pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         1          1 ristretto (local)
         3          1 rocky
         2          1 elf
         5          1 figaro
Code:
root@ristretto:~# pvecm status
Quorum information
------------------
Date:             Thu Aug  3 09:39:15 2017
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1/8592
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           3
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 143.xxxxx (local)
0x00000003          1 143.xxxxx
0x00000002          1 143.xxxxx
0x00000005          1 143.xxxxx
// update
I noticed that the removed node is still listed in /etc/pve/corosync.conf :/
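If that stale entry is the problem, the usual way to drop it is `pvecm delnode`. A sketch, assuming the remaining nodes are quorate; `<nodename>` is a placeholder for the removed test node's name, which isn't given here:

```shell
# Remove the stale node entry from the cluster configuration
# <nodename> is a placeholder for the removed test node's name
pvecm delnode <nodename>
# Verify the remaining membership afterwards
pvecm nodes
```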
// update 2
I rebooted one of the nodes, its interface came back, and the backup ran.