TASK ERROR: can't activate LV '/dev/drbdpool/vm-102-disk-1'

crazynotion

Hi all

I had a 3-node cluster (Proxmox 4.0) with DRBD9 running; each node had 1-2 VMs.

After running the command

drbdmanage init -q 10.0.234.51

I get the following error when I try to start the VMs:

TASK ERROR: can't activate LV '/dev/drbdpool/vm-102-disk-1': Failed to find logical volume "drbdpool/vm-102-disk-1"

I guess this has to do with the fact that the metadata of the volumes was changed. Is there any possibility to recover the VMs, or are they gone for good?

The LVs can still be found on the disk:

root@pve1:/mnt/pve# lvdisplay
--- Logical volume ---
LV Path /dev/drbdpool/lvol0
LV Name lvol0
VG Name drbdpool
LV UUID FKZamO-WDPx-wDzQ-oWMw-iBe5-vQz0-tQwfRn
LV Write Access read/write
LV Creation host, time pve1, 2016-03-31 14:24:35 +0200
LV Status available
# open 0
LV Size 96.00 MiB
Current LE 24
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:1

--- Logical volume ---
LV Name drbdthinpool
VG Name drbdpool
LV UUID 73rz6g-9Vzx-Vihy-yNS4-gdfH-YNyH-2RGpo8
LV Write Access read/write
LV Creation host, time pve1, 2016-03-31 14:24:53 +0200
LV Pool metadata drbdthinpool_tmeta
LV Pool data drbdthinpool_tdata
LV Status available
# open 7
LV Size 2.54 TiB
Allocated pool data 20.04%
Allocated metadata 10.32%
Current LE 665600
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:4

--- Logical volume ---
LV Path /dev/drbdpool/vm-100-disk-1_00
LV Name vm-100-disk-1_00
VG Name drbdpool
LV UUID OWiwYJ-nAjq-o2hd-UCX1-O3x1-Vjo3-gHSgN5
LV Write Access read/write
LV Creation host, time pve1, 2016-03-31 16:44:02 +0200
LV Pool name drbdthinpool
LV Status available
# open 0
LV Size 45.01 GiB
Mapped size 100.00%
Current LE 11523
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:6
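
By the way, a more compact way to double-check this is lvs; it should also show the _00 suffix on the VM disks (the exact output of course depends on the setup):

# compact listing of all LVs in the drbdpool VG, including their thin pool
lvs -o lv_name,lv_size,pool_lv drbdpool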



regards
 
With 'drbdmanage init' you initialize the cluster, including the control volume. It's a bad idea to do that on
a running cluster.
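
If the goal was to add or re-join a node, that is normally done from a node that is already part of the cluster, roughly like this (if I remember the commands right; node name and IP are just placeholders):

# register the new node with the existing cluster instead of re-running init
drbdmanage add-node pve3 10.0.234.53
# then run the matching 'drbdmanage join ...' command on the new node itself
# ('drbdmanage howto-join pve3' prints it)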
 

Yes, I guess so :D

I found the solution.

I created a new VM on the DRBD volume, did an lvdisplay,
and renamed the LV

from
/dev/drbdpool/vm-102-disk-1_00
to
/dev/drbdpool/vm-102-disk-1


with

lvrename /dev/drbdpool/vm-102-disk-1_00 /dev/drbdpool/vm-102-disk-1

and all the machines are running now :D
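
In case anyone else runs into this: assuming the only change drbdmanage made is the appended _00 suffix, a small loop like the following should rename all affected LVs back in one go (use at your own risk):

# rename every LV in the drbdpool VG that ends in _00 back to its original name
for lv in $(lvs --noheadings -o lv_name drbdpool | grep '_00$'); do
    lvrename drbdpool "$lv" "${lv%_00}"
done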