Proxmox 3 High Availability installation in a cluster with cross-mounted disks

jdcm

New Member
May 27, 2013
Hi friends,

I discovered Proxmox recently and it is incredible. I have tried several configurations, and the best one for me is this:

I installed Proxmox 3.0 on two identical servers (Dell SC1950, two CPUs, 16 GB RAM, two 1 TB disks in hardware RAID 1).


With this configuration I can exchange data between the servers (backups, templates, etc.) very easily, following this guide: http://www.netstorming.com.ar/2010/06/14/proxmox-instalacion-clustering-y-migracion-en-vivo/
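For reference, this kind of cross-mount can be declared as an NFS storage entry in /etc/pve/storage.cfg on each node; a sketch of what it could look like on proxmox1 (the storage name, export path and content types here are just examples, and the guide may set it up differently, e.g. with plain fstab mounts):

nfs: backups-proxmox2
        path /mnt/pve/backups-proxmox2
        server 172.16.0.2
        export /var/lib/vz/export
        content backup,iso,vztmpl
        options vers=3

The other node gets the mirror-image entry pointing at 172.16.0.1.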


(Screenshots attached: foto1.png, foto2.png)


This configuration works very well, but it does not provide High Availability.

I tried http://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster and other guides.

But without success; I could not get this option configured.

I do not understand how DRBD and HA work...

How can I configure High Availability on my servers with this configuration (if that is even possible), without external storage, external disks, or additional disks/servers?


Thank you very much for your help!

*Sorry for my English. Spanish education... :)
 
Hi,
I don't quite understand your setup. Have you mounted the disks from node1 on node2 through NFS, and the disks from node2 on node1?

For HA, you need external shared storage. (How could a VM from node1 be restarted on node2 if node1 is down, since node1's storage is down too?)

Or you need to replicate the data from node1 to node2 (for example with DRBD). Maybe you can also have a look at Sheepdog (http://pve.proxmox.com/wiki/Storage:_Sheepdog).
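For what it's worth, DRBD replicates a block device between the two nodes in real time; a minimal resource definition in /etc/drbd.d/r0.res would look roughly like this (the hostnames, backing disk and addresses are only examples and must match your nodes):

resource r0 {
        protocol C;
        net {
                cram-hmac-alg sha1;
                shared-secret "my-secret";
        }
        on proxmox1 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 172.16.0.1:7788;
                meta-disk internal;
        }
        on proxmox2 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 172.16.0.2:7788;
                meta-disk internal;
        }
}

On Proxmox the usual approach is then to create an LVM volume group on top of /dev/drbd0 and add it as shared LVM storage, so live migration and HA can use it.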
 
Hi friend, thanks for your time,

-> I don't quite understand your setup. Have you mounted the disks from node1 on node2 through NFS, and the disks from node2 on node1?

Yes. That way, when a node is down, I can restore its backups on the other node. I can also share ISOs, templates, etc. very easily.
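For example, restoring a KVM guest's backup from the cross-mounted storage is a single command on the surviving node (the storage path and archive name below are only examples; vzrestore does the equivalent for OpenVZ containers):

qmrestore /mnt/pve/backups-proxmox1/dump/vzdump-qemu-100-2013_05_27-03_00_01.vma.lzo 100 --storage local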


-> For HA, you need external shared storage. (How could a VM from node1 be restarted on node2 if node1 is down, since node1's storage is down too?)

Is that mandatory?


-> Or you need to replicate the data from node1 to node2 (for example with DRBD). Maybe you can also have a look at Sheepdog (http://pve.proxmox.com/wiki/Storage:_Sheepdog).

But that configuration is very similar to mine with NFS, right?


Thanks ;)
 
Well, the idea is that every node has real-time access to the disks. So you need shared storage or replicated storage.

Also fo "real HA", (if a node is down, vm are automaticaly restarted), you need 3 nodes mininum and fencing devices.
 
Hi Friends,

So I set up the following:

- A cluster with 2 nodes connected over a Gigabit network card.
- NFS storage shared between the nodes.

Then I configured HA following this guide up to the "Checking everything works" section: http://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster

My /etc/pve/cluster.conf.new:

<?xml version="1.0"?>
<cluster config_version="3" name="ACLASS-CLUSTER">
<cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1"/>
<fencedevices>
<fencedevice agent="fence_ilo" ipaddr="172.16.0.1" login="root" name="proxmox1" passwd="PASSWORD"/>
<fencedevice agent="fence_ilo" ipaddr="172.16.0.2" login="root" name="proxmox2" passwd="PASSWORD"/>
</fencedevices>
<clusternodes>
<clusternode name="proxmox1" nodeid="1" votes="1">
<fence>
<method name="1">
<device action="reboot" name="proxmox1"/>
</method>
</fence>
</clusternode>
<clusternode name="proxmox2" nodeid="2" votes="1">
<fence>
<method name="1">
<device action="reboot" name="proxmox2"/>
</method>
</fence>
</clusternode>
</clusternodes>
<rm>
<pvevm autostart="1" vmid="100"/>
</rm>
</cluster>

- Then I uncommented FENCE_JOIN="yes" in /etc/default/redhat-cluster-pve and ran fence_tool join on both servers (see the commands after this list). // This part is not in the manual.

- I added the VM in the HA tab, clicked Activate, and started the RGManager service on both servers. Everything configured! When I stop the RGManager service, the VM starts on the other node automatically, zero problems.
Exactly like in the video: http://www.youtube.com/watch?v=aued5iDXcrc
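In commands, that part was roughly the following (a sketch of what I ran on both nodes; the cluster.conf itself is activated from the HA tab):

# /etc/default/redhat-cluster-pve must contain, uncommented:
#   FENCE_JOIN="yes"
# then join the fence domain and check that both nodes are listed
fence_tool join
fence_tool ls
# start the resource group manager on both nodes
service rgmanager start
# show cluster membership and where the pvevm services are running
clustat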



The manual says the following:
Now that everything is going smooth on your cluster, just power-off the node where machines are running (press power) or pull the network cables used for data and synchronization (keep that used for the fencing mechanism). From your web manager (connected to the other node), you can see how things evolve by clicking on the "healthy" node and looking at the scrolling "syslog" tab. It should detect the other node is down, create a new quorum and fence it. If fencing fails, you should re-check your fencing configuration as it MUST work in order to automatically restart machines.
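From a shell on the healthy node, the same thing can be checked like this (the iLO address and credentials are the ones from my cluster.conf above; if the status call already fails, fencing cannot work):

# watch fencing and rgmanager activity while the other node is down
grep -iE 'fence|rgmanager' /var/log/syslog
# query the fence device for node1 by hand
fence_ilo -a 172.16.0.1 -l root -p PASSWORD -o status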


But when I unplug the cables or halt the machine, the VM does not start automatically on the other node... :(
Why?

Thanks for your time ;)
 
Hi friends,

in the end I installed a cluster with 2 nodes, with the NFS share on a separate machine.

I configured /etc/pve/cluster.conf.new:

<?xml version="1.0"?>
<cluster config_version="6" name="ACLASS-CLUSTER">
<cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1"/>
<fencedevices>
<fencedevice agent="fence_ilo" ipaddr="172.16.0.1" login="root" name="proxmox1" passwd="PASS"/>
<fencedevice agent="fence_ilo" ipaddr="172.16.0.2" login="root" name="proxmox2" passwd="PASS"/>
</fencedevices>
<clusternodes>
<clusternode name="proxmox1" nodeid="1" votes="1">
<fence>
<method name="1">
<device action="reboot" name="proxmox1"/>
</method>
</fence>
</clusternode>
<clusternode name="proxmox2" nodeid="2" votes="1">
<fence>
<method name="1">
<device action="reboot" name="proxmox2"/>
</method>
</fence>
</clusternode>
</clusternodes>
<rm>
<pvevm autostart="1" vmid="101"/>
<pvevm autostart="1" vmid="102"/>
</rm>
</cluster>


When I stop the RGManager service, the VMs start automatically on the other node. Everything correct and working without problems.

But when I physically shut down the node or unplug the cables, the VMs do not start on the other node... :( Why?


Thanks for your time. ;)
 
But when I physically shut down the node or unplug the cables, the VMs do not start on the other node... :( Why?
Thanks for your time. ;)

Do you mean when you physically shut down the node where the NFS is located?
If yes, how do you expect the VM to be restarted on the other node if its storage is not available?
 
Hi,

Do you mean when you physically shut down the node where the NFS is located?

No,

I have a cluster with 2 nodes running Proxmox 3, and a third machine with the NFS storage (where the VM disks physically reside).

Now, when I stop the RGManager service on either node, the VM starts automatically on the other node.
But when I physically halt a node, it does not start on the other node.


Regards.
 
