Good morning to all.
I have just configured a two-node cluster with HA, but I have a problem.
I have a VM (100) running on node 1 (gestion1). If I restart or shut down node 1 manually, the VM is migrated to node 2 without any problem; it also works from node 2 to node 1.
The VM is on LVM storage with DRBD configured underneath.
RGManager is running on both nodes.
The problem I have is:
If a VM is running on node 1 (or node 2) and I unplug the LAN cable or cut the power to that node, the VM is not relocated to the other node in the cluster. The VM goes down with the node.
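For reference, this is roughly how I tested the clean-failover case that works (standard Proxmox VE 3.x / rgmanager commands):

```shell
# on gestion1, with VM 100 running there:
shutdown -h now     # clean shutdown; rgmanager relocates the VM

# afterwards, on gestion2, to confirm the failover:
clustat             # should show the pvevm:100 service started on gestion2
qm status 100       # should report the VM as running
```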
My cluster.conf is:
Code:
<?xml version="1.0"?>
<cluster config_version="7" name="gestioncluster">
  <cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_ilo" ipaddr="192.168.130.34" login="ADMIN" name="fenceA" passwd="ADMI$
    <fencedevice agent="fence_ilo" ipaddr="192.168.130.44" login="ADMIN" name="fenceB" passwd="ADMI$
  </fencedevices>
  <clusternodes>
    <clusternode name="gestion1" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" name="fenceA"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="gestion2" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" name="fenceB"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <pvevm autostart="1" vmid="100" recovery="relocate"/>
  </rm>
</cluster>
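If it helps, I can also test the fencing devices by hand with fence_ilo, using the same agent, addresses, and login as in my cluster.conf above (the password below is a placeholder, not my real one):

```shell
# check power status of gestion2 via its iLO (PASSWORD is a placeholder)
fence_ilo -a 192.168.130.44 -l ADMIN -p PASSWORD -o status

# force a reboot of gestion1 via its iLO, as the cluster would do when fencing it
fence_ilo -a 192.168.130.34 -l ADMIN -p PASSWORD -o reboot
```

I can paste the output of these if needed.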
Proxmox version in two nodes:
Code:
pveversion -v
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
I hope you can help me with this problem; if you need more info, I can paste it.