run HA with two nodes

rickygm

Sep 16, 2015
Hi forum, I have two nodes running in a cluster on Proxmox 3.4 and I'm trying to set up HA for a VM, but I have not succeeded yet: if I shut down a node, the virtual machine is migrated to the other node in the cluster, but it does not start; it stays off.

The iLO4 administration is set up correctly; I can connect to it without problems.
My servers are HP DL380p.

<?xml version="1.0"?>
<cluster config_version="17" name="RPI">
  <cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <fencedevices>
    <fencedevice agent="fence_ilo4" ipaddr="192.168.21.12" login="Administrator" name="fencea" passwd="XXXXX"/>
    <fencedevice agent="fence_ilo4" ipaddr="192.168.21.11" login="Administrator" name="fenceb" passwd="XXXX1"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="hyper1" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" name="fencea"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="hyper2" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" name="fenceb"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <pvevm autostart="1" vmid="5000"/>
  </rm>
</cluster>
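
Before relying on HA, it is worth confirming from a shell on each node that the fence devices actually respond; a minimal check, reusing the iLO addresses and logins from the config above (the passwords are placeholders):

fence_ilo4 -a 192.168.21.12 -l Administrator -p XXXXX -o status
fence_ilo4 -a 192.168.21.11 -l Administrator -p XXXX1 -o status

If either command cannot report power status, fencing (and therefore HA recovery) cannot work.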

The only way a 2-node cluster can work in 3.4 is with a quorum disk. Are you using one? Based on your results, I am guessing you aren't. You have to maintain 2 votes for quorum; if one server goes down, there is only 1 vote and no way to achieve quorum other than setting it manually. Note that quorum disks aren't supported in Proxmox 4. I would suggest adding a third node; it will make life much easier. It can be a very lightweight node, it just needs to provide a quorum vote.
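
For reference, a quorum disk is declared in cluster.conf with a <quorumd> element and a raised expected_votes; a minimal sketch, assuming a qdisk already initialized with the (placeholder) label proxmox1_qdisk:

<cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
<quorumd allow_kill="0" interval="1" label="proxmox1_qdisk" tko="10" votes="1"/>

Remember to increment config_version whenever cluster.conf changes.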

Hi Adamb, right now I'm using Proxmox 3.4. A third node is a little complicated for budget reasons, so what would be the easiest route to getting a quorum disk?

I'm using an HP EVA6300; could I use a volume from it to create the quorum disk?

Yep, in theory you could present a new disk from the HP EVA6300 and use that as the quorum disk (it only needs to be around 10-20 MB). It's definitely not the best design, but it will at least get you to the point of a basic HA setup.
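
Once the LUN is visible on both nodes, initializing it as a quorum disk is a single command; a sketch, where /dev/sdX and the label are placeholders for your values:

mkqdisk -c /dev/sdX -l proxmox1_qdisk

The <quorumd> entry sketched earlier can then reference the same label.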

I have a question, see if you can give me an idea: I have 4 disks presented to both nodes in the cluster. They are already in use, with LVM volumes created on them, but I still have free space. Could that free space be useful, given that each disk is already in use?

Look:

lvm> vgs
  VG          #PV #LV #SN Attr   VSize    VFree
  lvm_DFAST1    1   3   0 wz--n- 730.00g  325.00g
  lvm_DFAST2    1   1   0 wz--n- 730.00g  570.00g
  lvm_DLOW1     1   1   0 wz--n-   4.00t    3.93t
  lvm_DLOW2     1   0   0 wz--n-   4.29t    4.29t
  pve           1   4   0 wz--n- 279.24g    3.24g

I also now have a Synology NAS added to the cluster over NFS for backups of my VMs, and the NAS has a free 4 TB disk that I think I could use for iSCSI.

http://i1132.photobucket.com/albums/m579/rickygm/synologi_zpsageuwtpn.png

What would be the best scenario?

Yeah, I would present that 4 TB disk, or a small piece of it, over iSCSI to both hosts, then set up the quorum disk on that.
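
With open-iscsi that would look roughly as follows on each host; a sketch in which the portal IP and target IQN are placeholders for the Synology's real values:

# discover the targets offered by the NAS
iscsiadm -m discovery -t sendtargets -p 192.168.21.50
# log in to the discovered target on this node
iscsiadm -m node -T iqn.2000-01.com.synology:nas.target-1 -p 192.168.21.50 --login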
 
Hi, I have a doubt. When I add an iSCSI connection, if I reboot one node of the cluster and then try to see the iSCSI storage on the other node, it often shows me this error message:
connect timed out failed
The connection gets slow and I have to remove the datastore option.

Any idea? Where can I see the log?
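
On Proxmox 3.x (Debian based) the iSCSI initiator logs through syslog, so a first place to look would be something like this; a sketch, with nothing node-specific assumed:

# recent initiator messages
grep -i iscsi /var/log/syslog | tail -n 50
# sessions currently established on this node
iscsiadm -m session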

Now that you have a quorum disk, can you provide the output of "clustat" from each node?
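
Besides clustat, a couple of commands can confirm that the quorum disk is actually contributing a vote; a sketch, to be run on each node:

# membership and HA service state
clustat
# vote counts, expected votes and quorum device state
cman_tool status
# Proxmox's own view of the cluster
pvecm status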
 
Hi adamb, I'm having trouble when I add the iSCSI disk: I only see it on one of the nodes, and I think it should appear on both.

Node1
Disk /dev/sdr: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disk identifier: 0x00000000

See the screenshot:
http://i1132.photobucket.com/albums/m579/rickygm/node1_zpskxz0qael.png

Node2
The disk does not show up in the CLI.
See the screenshot:
http://i1132.photobucket.com/albums/m579/rickygm/node2_zpsovuzhwvk.png
 

How are you logging in to the iSCSI target? From the GUI or using iscsiadm?
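
If the login was done manually on only one node, that would explain the asymmetry: iSCSI sessions are per host, so the login has to be repeated (and ideally made automatic) on the second node. A sketch, with the same placeholder portal and IQN as above:

# on the node that does not see the disk
iscsiadm -m discovery -t sendtargets -p 192.168.21.50
iscsiadm -m node -T iqn.2000-01.com.synology:nas.target-1 -p 192.168.21.50 --login
# make the session persistent across reboots
iscsiadm -m node -T iqn.2000-01.com.synology:nas.target-1 -p 192.168.21.50 --op update -n node.startup -v automatic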