HA cluster problem: VM migration

odmicnheg

Guest
Hi all!
I have two nodes:
node1 - 192.168.0.1 (Proxmox 2.1)
node2 - 192.168.0.2 (Proxmox 2.1)

I have a problem with my cluster.
I created a two-node cluster (following the manual):
# cat /etc/pve/cluster.conf
<?xml version="1.0"?>
<cluster config_version="15" name="my-cluster">
  <cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_intelmodular" ipaddr="192.168.0.2" login="admin" name="ims" passwd="123456" snmp_auth_prot="SHA" snmp_sec_level="auth" snmp_version="3"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="node1" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="ims" port="1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="ims" port="2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <pvevm autostart="1" vmid="124"/>
  </rm>
</cluster>
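
A side note for anyone reusing this file: on Proxmox VE 2.x, cluster.conf is normally edited as a staged copy and validated before activation, and config_version must be bumped on every change. A minimal sketch, assuming ccs_config_validate from the cman package is installed:

# stage a copy rather than editing the live file
cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new
nano /etc/pve/cluster.conf.new    # remember to increment config_version

# sanity-check the XML against the cluster schema
# (ccs_config_validate ships with cman; -f selects a specific file)
ccs_config_validate -f /etc/pve/cluster.conf.new

# then activate the staged config from the GUI (Datacenter -> HA)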

I created DRBD storage and a VM (vmid=124) managed by HA.
The VM on node1, with its hard disk:

[screenshot: 00000177.png]


The VM migrated to node2, without its hard disk:

[screenshot: 00000176.png]


When I stop rgmanager, the VM migrates to node2, but I can't see the hard disk there.
On the first node everything is fine.

The storage where the VM was created is backed by DRBD and marked as shared:
[screenshot: 00000175.png]
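
As a quick cross-check that the shared flag really made it into the cluster-wide config (pvesm is the standard Proxmox storage CLI; the expected entry shape below is an assumption based on the settings in this thread and may vary slightly by PVE version):

# the storage definition is cluster-wide; inspect it on either node
cat /etc/pve/storage.cfg
# expected shape of the entry (exact syntax may differ by version):
#   lvm: drbd
#           vgname drbdvg
#           shared 1
#           content images

# pvesm status lists every storage and whether it is online
pvesm status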


After the VM migrates to node2, I see this on both nodes:
node1
# lvscan |grep 124
inactive '/dev/drbdvg/vm-124-disk-1' [32.00 GiB] inherit

node2
# lvscan |grep 124
inactive '/dev/drbdvg/vm-124-disk-1' [32.00 GiB] inherit
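
For what it's worth, "inactive" only means the logical volume is not activated right now; Proxmox activates the volume on the node where the VM is started. A quick manual test with standard LVM2 commands (run it on the node that currently owns the VM, and deactivate afterwards so HA stays in control):

# activate the VM's volume by hand and re-check
lvchange -ay /dev/drbdvg/vm-124-disk-1
lvscan | grep 124        # should now show ACTIVE

# deactivate again before letting HA manage the VM
lvchange -an /dev/drbdvg/vm-124-disk-1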

Where is my problem? What am I doing wrong?
 
Node1 status

root@node1:~# pvecm status
Version: 6.2.0
Config Version: 15
Cluster Name: my-cluster
Cluster Id: 180
Cluster Member: Yes
Cluster Generation: 180
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1
Active subsystems: 6
Flags: 2node
Ports Bound: 0 177
Node name: node1
Node ID: 1
Multicast addresses: 239.192.0.180
Node addresses: 192.168.0.1
root@node1:~#

Node2 status
root@node2:~# pvecm status
Version: 6.2.0
Config Version: 15
Cluster Name: my-cluster
Cluster Id: 180
Cluster Member: Yes
Cluster Generation: 180
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1
Active subsystems: 5
Flags: 2node
Ports Bound: 0
Node name: node2
Node ID: 2
Multicast addresses: 239.192.0.180
Node addresses: 192.168.0.2
 
What is the output of

# md5sum /etc/pve/qemu-server/124.conf

on both nodes?

When the VM is on node1:

root@node1:~# md5sum /etc/pve/qemu-server/124.conf
caef4dd6dadee21c2263c05db96f68a5 /etc/pve/qemu-server/124.conf

root@node2:~# md5sum /etc/pve/qemu-server/124.conf
md5sum: /etc/pve/qemu-server/124.conf: No such file or directory
root@node2:~# ls -alh /etc/pve/qemu-server
lrwxr-x--- 1 root www-data 0 Jan 1 1970 /etc/pve/qemu-server -> nodes/node2/qemu-server
root@node2:~# md5sum /etc/pve/nodes/node1/qemu-server/124.conf
caef4dd6dadee21c2263c05db96f68a5 124.conf

After migration, the checksums differ:

root@node2:/etc/pve# md5sum /etc/pve/qemu-server/124.conf
2056a1b3b31db7126623a9d64e5bc9c7 /etc/pve/qemu-server/124.conf
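
A differing checksum means the file content changed, not just its location, so it is worth seeing what changed. One way to inspect the disk-related lines (124.conf follows the standard qemu-server key: value format):

# show the full VM config on the current owner node
qm config 124

# or just the disk/cdrom slots
grep -E '^(ide|scsi|virtio)[0-9]' /etc/pve/qemu-server/124.conf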
 
root@node1:~# clustat
Cluster Status for my-cluster @ Thu Oct 18 15:28:54 2012
Member Status: Quorate


 Member Name                         ID   Status
 ------ ----                         ---- ------
 node1                                  1 Online, Local, rgmanager
 node2                                  2 Online

 Service Name              Owner (Last)              State
 ------- ----              ----- ------              -----
 pvevm:124                 node1                     started
 
Can you show me a working cluster.conf file?

Step by step what I did:

1. Created the cluster and added node2:
node1# pvecm create my-cluster
node2# pvecm add MY_CLUSTER_IP

2. Created DRBD storage and synced it.

3. In Proxmox: Datacenter -> Storage -> Add -> LVM Group (a CLI sketch of steps 2-3 follows this list)
ID: drbd
Volume Group: drbdvg
Shared: yes

4. Created cluster.conf with fencing.

5. Created the VM on the DRBD LVM storage.

6. Stopped rgmanager to migrate the VM. (On node2 the VM comes up without its storage.)
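
For completeness, a rough CLI equivalent of steps 2-3. A sketch only: the DRBD device name /dev/drbd0 is an assumption, and the pvesm option spelling may differ slightly between PVE versions:

# turn the synced DRBD device into an LVM volume group
pvcreate /dev/drbd0
vgcreate drbdvg /dev/drbd0

# register it as shared LVM storage (same as the GUI dialog in step 3)
pvesm add lvm drbd --vgname drbdvg --shared 1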
 
Not sure what you are trying to do, but you use 'two_node=1'. So either kill/fence a node or keep both nodes running. Everything in between is very dangerous!
 
I had almost the same issue before; it came down to the CD-ROM image not being available in the second node's local ISO storage directory.
Either syncing up the ISO storage directory or removing the CD-ROM image from the VM's hardware (not needed anymore since the OS is already installed anyway)
fixed it for me...
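
If you prefer the CLI over the GUI for that, something like this should do it (assuming the CD drive sits on the default ide2 slot; check with qm config first):

# find the slot that holds the cdrom (usually ide2)
qm config 124 | grep cdrom

# detach the ISO but keep an empty cdrom drive...
qm set 124 -ide2 none,media=cdrom

# ...or remove the drive entirely
qm set 124 -delete ide2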
 

Thanks. Yes, when I removed the CD-ROM it worked. But now I have another problem: when I add new storage to the VM after migration, it doesn't migrate with the VM.

Before migration
[screenshot: 00000179.png]

After migration
[screenshot: 00000180.png]
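
One thing worth checking here (an educated guess, not a confirmed diagnosis): whether the new disk was created on the shared 'drbd' storage or on a node-local one, since only volumes on shared storage can follow the VM. Each disk line in the config names its storage before the colon:

# every disk line reads like 'virtio1: storagename:vm-124-disk-N,...'
# anything whose storage is not 'drbd' lives only on one node
qm config 124 | grep -E '^(ide|scsi|virtio)[0-9]'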
 
