Any help will be appreciated.
[root@proxmox-slave:/var/lib/vz#] qm unlock 204
unable to open file '/etc/pve/nodes/proxmox-slave/qemu-server/204.conf.tmp.184320' - Permission denied
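If the cluster has lost quorum, /etc/pve goes read-only, which would explain the "Permission denied" above. A possible workaround on a node that is allowed to act alone (just a sketch, assuming the "expected" subcommand of pvecm is available on this version) is to lower the expected votes and then retry the unlock:

[root@proxmox-slave:~#] pvecm expected 1
[root@proxmox-slave:~#] qm unlock 204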
you have a cluster and you lost quorum?
Yes, you are right! I tried to set up a cluster last week, using this article: http://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster
On the first node everything went well: I ran the "pvecm create CLUSTERNAME" and "pvecm nodes" commands, the cluster config was created and I could see the cluster's node.
Then I logged in to the second node and ran "pvecm add IP-ADDRESS-CLUSTER", but the configuration hung while setting up quorum. I was frustrated because the howto above says nothing about quorum problems or needing at least a 3-node cluster (right now I have only 2 physical machines running Proxmox).
Then I deleted the second node from the cluster and kept the other settings (on the first node). Everything worked fine for about 2 weeks, but yesterday my NFS backup server had a problem with its HDDs and hung. My Proxmox server was supposed to run a backup job, but NFS wasn't available. I restarted the Proxmox server and ran into a problem with the VMs starting automatically: the OpenVZ containers started after a manual "vzctl start vmid", but the KVM VMs show me the lock error due to the backup job.
How can I resolve this problem? Can I set up a 2-node non-HA cluster to get around the quorum error? I don't have a fencing device or a SAN right now, but I don't need the HA cluster feature. Only centralised web management and VM migration are desirable for me right now.
PS: Sorry for my English, it's not my native language. Thanks.
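To see why the join hung and what the current quorum state is, checking the cluster status on both nodes may help; a minimal example, assuming the cman and pve-cluster services are running:

[root@proxmox-slave:~#] pvecm status
[root@proxmox-slave:~#] pvecm nodes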
1. does your network support IP multicast?
2. and post "cat /etc/hosts" from both nodes
Yes, IP multicast is enabled. I tested it with the asmping command. It works.
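For reference, a typical way to test multicast between the two nodes (a sketch, assuming the ssmping package that provides asmping/ssmpingd is installed, and using 224.0.2.1 only as an example group) is to run the listener on one node and the probe on the other:

[root@proxmox-backup:~#] ssmpingd
[root@proxmox-slave:~#] asmping 224.0.2.1 10.0.10.9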
[root@proxmox-slave:~#] cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
10.0.10.7 proxmox-slave.local proxmox-slave pvelocalhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
[root@proxmox-backup:~#] cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
10.0.10.9 proxmox-backup.local proxmox-backup pvelocalhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
[root@proxmox-slave:~#] /etc/init.d/cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... Cannot find node name in cluster.conf
Unable to get the configuration
Cannot find node name in cluster.conf
cman_tool: corosync daemon didn't start Check cluster logs for details
[FAILED]
[root@proxmox-slave:~#] cat /etc/pve/cluster.conf
<?xml version="1.0"?>
<cluster name="BAKULEV" config_version="4">
<cman keyfile="/var/lib/pve-cluster/corosync.authkey">
</cman>
<clusternodes>
</clusternodes>
</cluster>
To be able to create and manage a two-node cluster, edit the cman configuration part to include this:
<cman two_node="1" expected_votes="1"> </cman>
Would this help me with a 2-node non-HA cluster, or not?
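For context, a rough sketch of how the whole /etc/pve/cluster.conf could look with that change; the node names are taken from this thread, and the nodeid/votes values are assumptions. The empty <clusternodes> section in the config above is also the likely reason cman reports "Cannot find node name":

<?xml version="1.0"?>
<cluster name="BAKULEV" config_version="5">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1" expected_votes="1">
  </cman>
  <clusternodes>
    <clusternode name="proxmox-backup" votes="1" nodeid="1"/>
    <clusternode name="proxmox-slave" votes="1" nodeid="2"/>
  </clusternodes>
</cluster>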
Hi,
I had a similar problem and think I worked it out.
You need to start the cluster file system in Local mode.
First, stop it:
# /etc/init.d/pve-cluster stop
then start it in local mode:
# /usr/bin/pmxcfs -l
then remove (or back up) your cluster.conf:
# mv /etc/pve/cluster.conf ~/
Then stop and start the cluster file system normally:
# /etc/init.d/pve-cluster stop
# /etc/init.d/pve-cluster start
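After that, /etc/pve should be writable again on the standalone node, so (assuming the VM is still locked from the failed backup) the unlock from the start of the thread should now go through:

# qm unlock 204
# qm start 204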
hope that helps
Tim