cman does not start

walterluis68

Guest
I realized that, due to the problem on that node, cman is stopped, and when I try to start it I get this error:

Starting cluster:
Checking if cluster has been disabled at boot ... [OK]
Checking Network Manager ... [OK]
Global setup ... [OK]
Loading kernel modules ... [OK]
Mounting configfs ... [OK]
Starting cman ... /usr/sbin/ccs_config_validate: line 186: 19713 Segmentation fault (core dumped) ccs_config_dump > $tempfile

Unable to get the configuration
corosync [MAIN] Corosync Cluster Engine ('1.4.5'): started and ready to provide service.
corosync [MAIN] Corosync built-in features: nss
corosync [MAIN] Successfully read config from /etc/cluster/cluster.conf
corosync died with signal 11 Check cluster logs for details
[FAILED]
TASK ERROR: command '/etc/init.d/cman start' failed: exit code 1


Can anyone tell me how to solve this problem?
 
My version is:


/root$ pveversion -v
pve-manager: 3.0-20 (pve-manager/3.0/0428106c)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-15
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-6
vncterm: 1.1-3
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-12
ksm-control-daemon: 1.1-1
 
Is it possible to see the cluster config:

# cat /etc/cluster/cluster.conf

And what is the hostname and IP address of this node?
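
For reference, both can be read off the node itself with standard tools (nothing Proxmox-specific is assumed here):

# hostname
# grep "$(hostname)" /etc/hosts

The cluster expects the node name to resolve to the right address, so the /etc/hosts entry is worth checking too.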
 
Here I paste the contents of cluster.conf, since the forum does not let me attach it (the meduza-III node was joining the cluster when the electric power failed):
_______________________________________________________________

<?xml version="1.0"?>
<cluster name="hcqho" config_version="9">

<cman keyfile="/var/lib/pve-cluster/corosync.authkey">
</cman>

<clusternodes>

<clusternode name="meduza-III" votes="1" nodeid="1"/>

</clusternodes>

</cluster>
______________________________________________________________
This was the first post I made today; maybe it is easier to see it all together.

Have a very good morning.
Sorry for the inconvenience, but something happened to me. I currently have 2 servers running Proxmox version 3; I was joining a third server to the cluster when the electric power failed, and apparently the node did not complete joining the cluster. Now it is giving me problems...
When I try to remove the node from the cluster I get this error:

/ $ pvecm delnode meduza-III
cluster not ready - no quorum?

When I restart the cman service I get this...

/ $ /etc/init.d/cman restart
Stopping cluster:
Stopping dlm_controld ... [OK]
Stopping fenced ... [OK]
Stopping cman ... [OK]
Unloading kernel modules ... [OK]
Unmounting configfs ... [OK]

Starting cluster:
Checking if cluster has been disabled at boot ... [OK]
Checking Network Manager ... [OK]
Global setup ... [OK]
Loading kernel modules ... [OK]
Mounting configfs ... [OK]
Starting cman ... /usr/sbin/ccs_config_validate: line 186: 16132 Segmentation fault (core dumped) ccs_config_dump > $tempfile

Unable to get the configuration
corosync [MAIN] Corosync Cluster Engine ('1.4.5'): started and ready to provide service.
corosync [MAIN] Corosync built-in features: nss
corosync [MAIN] Successfully read config from /etc/cluster/cluster.conf
corosync died with signal 11 Check cluster logs for details
[FAILED]


Thank you very much, and I hope you can help solve this problem.
 
And I tried it... on both the computer that created the cluster and on the other one, and the results are the same. When I try to remove it (pvecm delnode meduza-II) I get: cluster not ready - no quorum?


Must I recreate the cluster?
 
And I tried it... on both the computer that created the cluster and on the other one, and the results are the same. When I try to remove it (pvecm delnode meduza-II)...

Sorry, I simply do not understand what you are trying to do. It looks like you removed the node itself from the cluster config, and that is why cman does not start.
You can try to manually correct the cluster config and copy it to /etc/cluster/cluster.conf, then try to start cman. If that works, copy the working cluster configuration to /etc/pve/cluster.conf.
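
For illustration, a repaired cluster.conf might look like the sketch below. The first node's name ("meduza-I") is only a placeholder, since it never appears in this thread; the nodeids and votes are likewise assumptions and must match what pvecm originally assigned, and config_version has to be raised above the current value (9):
_______________________________________________________________

<?xml version="1.0"?>
<cluster name="hcqho" config_version="10">

<cman keyfile="/var/lib/pve-cluster/corosync.authkey">
</cman>

<clusternodes>
<clusternode name="meduza-I" votes="1" nodeid="1"/>
<clusternode name="meduza-II" votes="1" nodeid="2"/>
<clusternode name="meduza-III" votes="1" nodeid="3"/>
</clusternodes>

</cluster>
_______________________________________________________________

After editing, the file can be validated and cman started again, e.g.:

# ccs_config_validate
# /etc/init.d/cman start
# cp /etc/cluster/cluster.conf /etc/pve/cluster.conf

If pvecm delnode still fails with "cluster not ready - no quorum?" afterwards, pvecm expected 1 can temporarily lower the expected votes on a surviving node; use it with care and only while repairing.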
 
Thanks for responding, but I think it would be much more feasible to remove the cluster and everything that refers to it, and re-create it. (I have seen some documentation, but it refers to Proxmox 2. Would you kindly explain how to remove a cluster and which files I should remove?) Thanks, and sorry for the inconvenience.
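
For reference, a rough sketch of what tearing the cluster config off a node looks like on this generation; the steps below are adapted from the Proxmox 2 documentation mentioned above, and the exact paths and init scripts should be double-checked against the wiki for 3.x before running anything:

# /etc/init.d/pve-cluster stop
# /etc/init.d/cman stop
# pmxcfs -l                      (remount /etc/pve in forced local mode)
# rm /etc/pve/cluster.conf       (the cluster definition)
# rm /etc/cluster/cluster.conf   (cman's copy of it)
# killall pmxcfs
# /etc/init.d/pve-cluster start

Note that /var/lib/pve-cluster holds the configuration database with all the VM definitions, so it should not be deleted.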
 
Currently the web interface will not let me see any of the machines I have on the nodes...
Is there any way to save the VMs and configuration files via ssh, install from scratch, and then put them back on that node?

If so, which files should I save?

Thank you very much, and I apologize for any inconvenience.
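
For what it's worth, on a 3.x node the per-VM definitions live under /etc/pve/qemu-server/ (KVM) and /etc/pve/openvz/ (containers), with disk images under /var/lib/vz by default. A minimal sketch of copying them off over ssh, assuming /etc/pve is still mounted and using this thread's node name as the example host:

# scp -r root@meduza-III:/etc/pve/qemu-server/ ./kvm-configs/
# scp -r root@meduza-III:/etc/pve/openvz/ ./ct-configs/
# scp -r root@meduza-III:/var/lib/vz/images/ ./disk-images/

A full vzdump backup per VM (e.g. vzdump <vmid> --dumpdir /path/to/backup) is safer where it still works, since it restores cleanly with qmrestore or vzrestore.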
 
