Node lost forever, the cluster does not work

nezabor

Member
Oct 6, 2012
Moscow
z1q.ru
I have lost this node forever. On the master node I cannot do anything.

Code:
root@s2 ~ # pvecm nodes
Node  Sts   Inc   Joined               Name
   1   M     36   2013-04-02 23:00:53  s2
   2   X      0                        s3
Code:
root@s2 ~ # service pve-cluster status
Checking status of pve cluster filesystem: pve-cluster running.
root@s2 ~ # service cman status
fenced is stopped
root@s2 ~ # service cman start
Starting cluster: 
   Checking if cluster has been disabled at boot... [  OK  ]
   Checking Network Manager... [  OK  ]
   Global setup... [  OK  ]
   Loading kernel modules... [  OK  ]
   Mounting configfs... [  OK  ]
   Starting cman... [  OK  ]
   Waiting for quorum... Timed-out waiting for cluster
[FAILED]
Code:
root@s2 ~ # pvecm delnode s3
cluster not ready - no quorum?
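To see why `pvecm delnode` is refused, it helps to check the quorum state first. A minimal sketch that looks for the "Activity blocked" marker in `cman_tool status`-style output (the sample text below is an assumption for illustration, not output captured from this cluster):

```shell
# Hypothetical helper: report whether cman-style status output shows a
# blocked (inquorate) cluster. The sample text is an assumption.
quorum_blocked() {
    echo "$1" | grep -q 'Activity blocked'
}

sample='Nodes: 1
Expected votes: 2
Total votes: 1
Quorum: 2 Activity blocked'

if quorum_blocked "$sample"; then
    echo "no quorum - pvecm delnode will be refused"
else
    echo "quorate"
fi
```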

As a result:
Code:
command 'vzctl --skiplock set 1011 --diskspace 8G:9227468 --diskinodes 1600000:1760000 --save' failed: exit code 139 (500)
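Exit code 139 is 128 + 11, meaning vzctl was killed by signal 11 (SIGSEGV) rather than returning an error of its own. A quick sketch of that arithmetic:

```shell
# Exit codes above 128 mean the process died on a signal: code - 128.
code=139
sig=$(( code - 128 ))
echo "vzctl died on signal $sig"   # signal 11 is SIGSEGV on Linux
```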

This is not logical: it turns out that if I lose one node, I then lose the master node as well.
 
I commented out the quorum-wait lines in /etc/init.d/cman:
Code:
#       runwrap wait_for_quorum \
#               none \
#               "Waiting for quorum"
#
#       [ "$breakpoint" = "quorum" ] && exit 0
There was no miracle:
Code:
root@s2 ~ # /etc/init.d/cman start
Starting cluster: 
   Checking if cluster has been disabled at boot... [  OK  ]
   Checking Network Manager... [  OK  ]
   Global setup... [  OK  ]
   Loading kernel modules... [  OK  ]
   Mounting configfs... [  OK  ]
   Starting cman... [  OK  ]
   Starting fenced... [  OK  ]
   Starting dlm_controld... [  OK  ]
   Tuning DLM kernel config... [  OK  ]
   Unfencing self... [  OK  ]

Code:
root@s2 ~ # pvecm delnode s3
cluster not ready - no quorum?
root@s2 ~ # vzctl --skiplock set 1011 --diskspace 9G:10380902 --diskinodes 1800000:1980000 --save
Unable to create configuration file /etc/pve/nodes/s2/openvz/1011.conf.tmp: Permission denied
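The "Permission denied" comes from pmxcfs: the cluster filesystem mounted at /etc/pve switches to read-only while the node is inquorate, so even `--skiplock` cannot write the config. The quorum arithmetic shows why one node out of two can never be quorate on its own (assuming the standard majority formula, floor(expected/2) + 1):

```shell
# Majority quorum: a node needs floor(expected/2) + 1 votes to be quorate.
needed() { echo $(( $1 / 2 + 1 )); }

echo "expected=2 -> votes needed: $(needed 2)"   # 2: a lone node is stuck
echo "expected=1 -> votes needed: $(needed 1)"   # 1: why 'pvecm expected 1' works
```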
 
Simply set expected votes to 1 before trying to remove the node:

# pvecm expected 1

After this you can do:

# pvecm delnode s3
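The whole recovery can be put together as a dry-run sketch (the `run` wrapper only echoes each step; drop it to execute the real commands on the surviving node, and `s3` is the dead node from this thread):

```shell
# Dry-run wrapper: prints each step instead of executing it.
run() { echo "would run: $*"; }

run pvecm expected 1   # one vote is now enough for quorum
run pvecm delnode s3   # removal now passes the quorum check
run pvecm status       # verify the cluster is quorate again
```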
 