DRBD: LV (LVM Logical Volume) not visible from second Proxmox node

marcap

Hi,

I am currently experimenting with DRBD. My main goal is to build a setup with Proxmox that keeps a virtual machine available in case one of my two servers fails. As a step towards this goal I set up DRBD to provide a cheap shared storage medium for both servers. The live migration feature between the two nodes works fine:

Code:
/usr/bin/ssh -t -t -n -o BatchMode=yes 192.168.1.121 /usr/sbin/qmigrate --online 192.168.1.120 101
Oct 02 20:12:53 starting migration of VM 101 to host '192.168.1.120' 
Oct 02 20:12:53 copying disk images 
Oct 02 20:12:53 starting VM on remote host '192.168.1.120' 
Oct 02 20:12:53 starting migration tunnel 
Oct 02 20:12:54 starting online/live migration 
Oct 02 20:12:56 migration status: active (transferred 26596KB, remaining 2083996KB), total 2113856KB) 
Oct 02 20:12:58 migration status: active (transferred 49553KB, remaining 2059084KB), total 2113856KB) 
Oct 02 20:13:00 migration status: active (transferred 65877KB, remaining 1208360KB), total 2113856KB) 
Oct 02 20:13:02 migration status: active (transferred 80454KB, remaining 253048KB), total 2113856KB) 
Oct 02 20:13:04 migration status: active (transferred 103386KB, remaining 230504KB), total 2113856KB) 
Oct 02 20:13:06 migration status: active (transferred 125991KB, remaining 205956KB), total 2113856KB) 
Oct 02 20:13:08 migration status: active (transferred 149015KB, remaining 181112KB), total 2113856KB) 
Oct 02 20:13:10 migration status: active (transferred 171908KB, remaining 156928KB), total 2113856KB) 
Oct 02 20:13:12 migration status: active (transferred 194992KB, remaining 131404KB), total 2113856KB) 
Oct 02 20:13:14 migration status: active (transferred 217505KB, remaining 107776KB), total 2113856KB) 
Oct 02 20:13:16 migration status: active (transferred 240497KB, remaining 84404KB), total 2113856KB) 
Oct 02 20:13:18 migration status: active (transferred 263286KB, remaining 54488KB), total 2113856KB) 
Oct 02 20:13:20 migration status: completed 
Oct 02 20:13:20 migration speed: 78.77 MB/s 
Oct 02 20:13:21 migration finished successfuly (duration 00:00:28) 
Connection to 192.168.1.121 closed. 
VM 101 migration done
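
For reference, as far as I understand, live migration over LVM on top of DRBD only works because the resource runs in dual-primary mode. A minimal sketch of such a resource definition (only /dev/drbd0, the hostnames and the IPs appear in the output above; the resource name, backing disk, port and which IP belongs to which host are assumptions):

Code:
resource r0 {
  protocol C;

  startup {
    become-primary-on both;             # both nodes become Primary at start-up
  }

  net {
    allow-two-primaries;                # needed so both nodes can access the device
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }

  on client00 {
    device    /dev/drbd0;
    disk      /dev/sdb1;                # backing device is an assumption
    address   192.168.1.120:7788;       # port 7788 is just the usual default
    meta-disk internal;
  }

  on client01 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.121:7788;
    meta-disk internal;
  }
}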

But here is my problem. Suppose the node running the KVM guest has a hardware fault and is completely powered off. Now I want to bring the VM up on the second node. Unfortunately, I can't see the shared DRBD LV that contains the KVM disk there:

Client00:
Code:
client00:~# lvdisplay 
  --- Logical volume ---
  LV Name                /dev/replicated/vm-101-disk-1
  VG Name                replicated
  LV UUID                i7NM1a-heRf-ZufV-5pVJ-NsU8-0xHh-d2vALH
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                10.00 GB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

client00:~# pvscan
  PV /dev/sda2    VG pve          lvm2 [1.82 TB / 343.99 GB free]
  PV /dev/drbd0   VG replicated   lvm2 [60.00 GB / 50.00 GB free]
  Total: 2 [1.88 TB] / in use: 2 [1.88 TB] / in no VG: 0 [0   ]

Client01:
Code:
client01:~# lvdisplay 
 !! displays only the local LVs, nothing from the replicated VG !!

client01:~# pvscan
  PV /dev/sda2    VG pve          lvm2 [698.13 GB / 343.99 GB free]
  PV /dev/drbd0   VG replicated   lvm2 [60.00 GB / 50.00 GB free]
  Total: 2 [758.13 GB] / in use: 2 [758.13 GB] / in no VG: 0 [0   ]
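
As far as I can tell, client01 does see the PV and the VG metadata, so maybe the logical volumes are simply not activated on that node? A rough sketch of what I would try there (assuming the DRBD resource is Primary on client01 as well):

Code:
client01:~# lvscan                                  # inactive LVs are listed as "inactive"
client01:~# vgchange -ay replicated                 # activate all LVs of the "replicated" VG
client01:~# lvdisplay /dev/replicated/vm-101-disk-1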

Do you have an idea how to fix this and start the VM from the second node?

Thanks a lot in advance!

Best regards,
Marcap
 
Have you found a solution?

I am running into a similar problem (i.e. the volume is not available/active on the second server).
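
So far I have only checked the DRBD state on both nodes; a rough sketch of what I am looking at (the resource name r0 is just a placeholder for whatever the resource is actually called):

Code:
cat /proc/drbd          # connection state and roles, ideally "Connected" and "ro:Primary/Primary"
drbdadm role r0         # prints the local/peer role of the resource; r0 is a placeholder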

Thanks
Steven