SAN FC storage with LVM. Can't migrate after adding a new VM

lwidomski

Hello,

I am running Proxmox 5 on a two-node cluster (without HA). After I add a new VM on a shared FC LUN on one node, the second node does not refresh its LVM metadata, and then I can't migrate that machine.
The FC LUN carries LVM (marked as shared in the GUI).

Code:
root@pve4:~# pveversion
pve-manager/5.0-23/af4267bf (running kernel: 4.10.15-1-pve)

Code:
2017-07-08 13:50:54 starting migration of VM 402 to node 'pve3' (10.0.x.x)
2017-07-08 13:50:54 copying disk images
2017-07-08 13:50:54 starting VM 402 on remote node 'pve3'
2017-07-08 13:50:55 can't activate LV '/dev/hp3par_pvea/vm-402-disk-1':   Failed to find logical volume "hp3par_pvea/vm-402-disk-1"
2017-07-08 13:50:55 ERROR: online migrate failure - command '/usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=pve3' root@10.0.x.x qm start 402 --skiplock --migratedfrom pve4 --migration_type secure --stateuri unix --machine pc-i440fx-2.9' failed: exit code 255
2017-07-08 13:50:55 aborting phase 2 - cleanup resources
2017-07-08 13:50:55 migrate_cancel
2017-07-08 13:50:56 ERROR: migration finished with problems (duration 00:00:02)
TASK ERROR: migration problems
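The LV exists on pve4, but pve3 apparently does not see it. A manual check on the target node with plain LVM commands (just a sketch of the kind of check I mean, nothing Proxmox-specific) should show whether the LV is known there at all and whether it can be activated by hand:
Code:
# on pve3: rescan volume groups and list the LVs in the shared VG
vgscan
lvs hp3par_pvea

# if the LV is listed but inactive, try activating it manually
lvchange -ay hp3par_pvea/vm-402-disk-1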

Please help. What do I need to do? Do I need to use cLVM?
Regards,
Widomski Lukasz
 
On the shared FC storage I can only use fully provisioned LVM (plain LVM, no thin provisioning).

I changed this flag in /etc/lvm/lvm.conf
Code:
#use_lvmetad = 1
use_lvmetad = 0
and after that everything seems to be OK. I also use multipathd, so I added this filter
Code:
filter = [ "a|/dev/sda.*|", "a|/dev/sdb.*|", "r|/dev/sd.*|" ]
to stop the LVM daemon from rescanning the sdX devices (except the local sda and sdb).
I couldn't find this information in the Proxmox wiki, and the docs say very little about attaching shared storage over FC.
Can someone confirm that use_lvmetad = 0 is safe?
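For context, this is roughly how the two settings sit in /etc/lvm/lvm.conf (a sketch of my understanding of the layout: the filter belongs to the devices section and use_lvmetad to the global section; devices that no rule matches, such as the /dev/mapper multipath devices, are still accepted):
Code:
# /etc/lvm/lvm.conf (relevant excerpt, sketch)
devices {
    # scan only the local disks directly; the FC LUNs are reached via
    # their /dev/mapper multipath devices, which no rule here matches
    # and which LVM therefore still accepts by default
    filter = [ "a|/dev/sda.*|", "a|/dev/sdb.*|", "r|/dev/sd.*|" ]
}
global {
    # read metadata from the shared LUN on every LVM command instead
    # of trusting the per-node lvmetad cache
    use_lvmetad = 0
}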

What is the output of the following commands on both nodes?
Code:
vgdisplay
lvs


Code:
root@pve3:/etc# vgdisplay
File descriptor 7 (pipe:[84559]) leaked on vgdisplay invocation. Parent PID 10135: bash
  --- Volume group ---
  VG Name               vmdata
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1,59 TiB
  PE Size               4,00 MiB
  Total PE              415549
  Alloc PE / Size       412989 / 1,58 TiB
  Free  PE / Size       2560 / 10,00 GiB
  VG UUID               u7vJFb-DzWD-t6Tp-OBIS-uSpk-y1PR-SU1J3i

  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               49,75 GiB
  PE Size               4,00 MiB
  Total PE              12735
  Alloc PE / Size       11176 / 43,66 GiB
  Free  PE / Size       1559 / 6,09 GiB
  VG UUID               rxaZ9Q-TcZc-4knx-R6an-NMxa-m1IH-rb60II

  --- Volume group ---
  VG Name               hp3par_pveb
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  12
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5,00 TiB
  PE Size               4,00 MiB
  Total PE              1310712
  Alloc PE / Size       0 / 0
  Free  PE / Size       1310712 / 5,00 TiB
  VG UUID               CIFitI-BtWJ-GkYJ-bAa4-ro9a-07iH-FF5R9f

  --- Volume group ---
  VG Name               hp3par_pvea
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  55
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                20
  Open LV               6
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3,00 TiB
  PE Size               4,00 MiB
  Total PE              786424
  Alloc PE / Size       354816 / 1,35 TiB
  Free  PE / Size       431608 / 1,65 TiB
  VG UUID               8v62Pn-RnN1-D3mZ-MQvm-JLuF-7JKi-EVExDh

root@pve3:/etc# lvs
File descriptor 7 (pipe:[84559]) leaked on lvs invocation. Parent PID 10135: bash
  LV            VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-341-disk-1 hp3par_pvea -wi-ao----  20,00g
  vm-342-disk-1 hp3par_pvea -wi-ao----  60,00g
  vm-351-disk-1 hp3par_pvea -wi-ao----   4,00g
  vm-370-disk-1 hp3par_pvea -wi-ao----  40,00g
  vm-371-disk-1 hp3par_pvea -wi-ao----  40,00g
  vm-372-disk-1 hp3par_pvea -wi-ao----  40,00g
  vm-402-disk-1 hp3par_pvea -wi-a-----  32,00g
  vm-403-disk-1 hp3par_pvea -wi-a-----  40,00g
  vm-404-disk-1 hp3par_pvea -wi-a-----  20,00g
  vm-404-disk-2 hp3par_pvea -wi-a----- 220,00g
  vm-404-disk-3 hp3par_pvea -wi-a-----  10,00g
  vm-405-disk-1 hp3par_pvea -wi-a-----  20,00g
  vm-405-disk-2 hp3par_pvea -wi-a----- 330,00g
  vm-405-disk-3 hp3par_pvea -wi-a-----  10,00g
  vm-421-disk-1 hp3par_pvea -wi-a-----  60,00g
  vm-422-disk-1 hp3par_pvea -wi-a----- 200,00g
  vm-431-disk-1 hp3par_pvea -wi-a-----  60,00g
  vm-431-disk-2 hp3par_pvea -wi-a-----  60,00g
  vm-432-disk-1 hp3par_pvea -wi-a-----  60,00g
  vm-432-disk-2 hp3par_pvea -wi-a-----  60,00g
  data          pve         twi-a-tz--  15,38g             0,00   0,59
  root          pve         -wi-ao----  12,25g
  swap          pve         -wi-ao----  16,00g
  vmdir         vmdata      -wi-ao---- 350,00g
  vmlvmthin     vmdata      twi-a-tz--   1,23t             0,00   0,25

Code:
root@pve4:~# vgdisplay
  --- Volume group ---
  VG Name               vmdata
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1,59 TiB
  PE Size               4,00 MiB
  Total PE              415549
  Alloc PE / Size       412989 / 1,58 TiB
  Free  PE / Size       2560 / 10,00 GiB
  VG UUID               H0eMXZ-WcgC-tUPQ-9R3N-rimC-r5oB-PVmnIz

  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               49,75 GiB
  PE Size               4,00 MiB
  Total PE              12735
  Alloc PE / Size       11176 / 43,66 GiB
  Free  PE / Size       1559 / 6,09 GiB
  VG UUID               YXNuTj-uUym-9ghD-LMQY-cq9g-eMyD-60HYcZ

  --- Volume group ---
  VG Name               hp3par_pveb
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  12
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5,00 TiB
  PE Size               4,00 MiB
  Total PE              1310712
  Alloc PE / Size       0 / 0
  Free  PE / Size       1310712 / 5,00 TiB
  VG UUID               CIFitI-BtWJ-GkYJ-bAa4-ro9a-07iH-FF5R9f

  --- Volume group ---
  VG Name               hp3par_pvea
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  55
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                20
  Open LV               14
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3,00 TiB
  PE Size               4,00 MiB
  Total PE              786424
  Alloc PE / Size       354816 / 1,35 TiB
  Free  PE / Size       431608 / 1,65 TiB
  VG UUID               8v62Pn-RnN1-D3mZ-MQvm-JLuF-7JKi-EVExDh

root@pve4:~# lvs
  LV            VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-341-disk-1 hp3par_pvea -wi-------  20,00g
  vm-342-disk-1 hp3par_pvea -wi-------  60,00g
  vm-351-disk-1 hp3par_pvea -wi-------   4,00g
  vm-370-disk-1 hp3par_pvea -wi-------  40,00g
  vm-371-disk-1 hp3par_pvea -wi-------  40,00g
  vm-372-disk-1 hp3par_pvea -wi-------  40,00g
  vm-402-disk-1 hp3par_pvea -wi-ao----  32,00g
  vm-403-disk-1 hp3par_pvea -wi-ao----  40,00g
  vm-404-disk-1 hp3par_pvea -wi-ao----  20,00g
  vm-404-disk-2 hp3par_pvea -wi-ao---- 220,00g
  vm-404-disk-3 hp3par_pvea -wi-ao----  10,00g
  vm-405-disk-1 hp3par_pvea -wi-ao----  20,00g
  vm-405-disk-2 hp3par_pvea -wi-ao---- 330,00g
  vm-405-disk-3 hp3par_pvea -wi-ao----  10,00g
  vm-421-disk-1 hp3par_pvea -wi-ao----  60,00g
  vm-422-disk-1 hp3par_pvea -wi-ao---- 200,00g
  vm-431-disk-1 hp3par_pvea -wi-ao----  60,00g
  vm-431-disk-2 hp3par_pvea -wi-ao----  60,00g
  vm-432-disk-1 hp3par_pvea -wi-ao----  60,00g
  vm-432-disk-2 hp3par_pvea -wi-ao----  60,00g
  data          pve         twi-a-tz--  15,38g             0,00   0,59
  root          pve         -wi-ao----  12,25g
  swap          pve         -wi-ao----  16,00g
  vmdir         vmdata      -wi-ao---- 350,00g
  vmlvmthin     vmdata      twi-a-tz--   1,23t             0,00   0,25
Regards
 
Hi,
I am still only on PVE 4, but there I have "use_lvmetad = 0" set without any trouble.

For the multipath setup I follow the iSCSI multipath docs (skipping the initial iSCSI part): https://pve.proxmox.com/wiki/ISCSI_Multipath.
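In case it helps: the FC case boils down to roughly the same multipath.conf skeleton as on that page, just without the iSCSI login steps. A rough sketch (the option values are only what I would start from, and the wwid is a placeholder for your own LUN):
Code:
# /etc/multipath.conf – rough skeleton, adjust to your array
defaults {
    polling_interval        2
    path_grouping_policy    multibus
    path_selector           "round-robin 0"
    failback                immediate
    no_path_retry           queue
    user_friendly_names     yes
}
blacklist {
    wwid .*
}
blacklist_exceptions {
    wwid "<wwid-of-your-FC-LUN>"
}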

Udo
 
I have checked that in Proxmox 4 "use_lvmetad = 0" was the default, while in Proxmox 5 the default is "use_lvmetad = 1".
Is this a Proxmox modification or a change between Debian 8 and 9?
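One way to check this on a node (assuming the lvmconfig tool from the lvm2 package is available, which it is on stretch) is to compare the compiled-in default with the value the current configuration actually uses:
Code:
# compiled-in default of the lvm2 build
lvmconfig --type default global/use_lvmetad
# value in effect with the shipped /etc/lvm/lvm.conf
lvmconfig --type current global/use_lvmetad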
 
