LVM config with SAN multipath on Proxmox 5.2

Filip1978

Sep 20, 2018
Hello,

How should I configure LVM so that it works correctly?
1. I have 4 servers (blades) in a cluster, all connected to the same SAN storage.
2. I use multipath.
3. I created LVM volumes, but I have a problem with the paths (see the notes after the listings below):
WARNING: Device mismatch detected for DS4_2_2/vm-326-disk-1 which is accessing /dev/sdg instead of /dev/mapper/ds4klun22.
WARNING: Device mismatch detected for DS4_2_2/vm-326-disk-2 which is accessing /dev/sdg instead of /dev/mapper/ds4klun22.
PV                     VG      Fmt  Attr PSize   PFree
/dev/mapper/ds4klun2   DS4_2   lvm2 a--  1.49t   316.96g
/dev/mapper/ds4klun22  DS4_2_2 lvm2 a--  2.00t   1.85t
/dev/mapper/ds4klun23  DS4_2_3 lvm2 a--  2.00t   1.96t
/dev/sdm3              pve     lvm2 a--  135.72g 16.00g


4. When I create a machine on the LVM storage, I don't see its LVs on the other servers in the cluster:
root@VM03:~# lvs | grep DS4_2_2
vm-203-disk-1 DS4_2_2 -wi-ao---- 20.00g
vm-203-disk-2 DS4_2_2 -wi-ao---- 25.01g
vm-204-disk-1 DS4_2_2 -wi-ao---- 40.00g
vm-326-disk-1 DS4_2_2 -wi-ao---- 8.00g
vm-326-disk-2 DS4_2_2 -wi-ao---- 65.00g

root@VM01:~# lvs | grep DS4_2_2
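
Two notes on points 3 and 4 above. The mismatch warning means LVM found the same PV on both the raw path (/dev/sdg) and the multipath map and activated the LVs through the wrong one; which underlying device each LV actually uses can be checked with (standard LVM option, VG name taken from the output above):

Code:
lvs -a -o +devices DS4_2_2

If the Devices column shows /dev/sdg(...) rather than /dev/mapper/ds4klun22(...), the filter is still letting the raw path through. And without clustered locking, LVM metadata written on one node only shows up on the others after a rescan; on the node that sees nothing (pvscan --cache assumes lvmetad is running, the default on this Debian base):

Code:
pvscan --cache   # refresh lvmetad's view of the devices
vgscan           # rescan volume groups from disk
lvs DS4_2_2      # the vm-*-disk LVs should now be listed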

Currently I have this configured in lvm.conf (on top of the default configuration):
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/sda|", "r|/dev/sdb|", "r|/dev/sdc|", "r|/dev/sdd|", "r|/dev/sde|", "r|/dev/sdf|", "r|/dev/sdg|", "r|/dev/sdh|", "r|/dev/sdi|", "$
 
Please post the output of

Code:
multipath -ll
ds4klun13 (3600a0b800048cc7800002b755b8fabeb) dm-52 IBM,1814 FAStT
size=2.0T features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| `- 0:0:0:0 sda 8:0 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 1:0:1:0 sdj 8:144 active ghost running
ds4klun12 (3600a0b800048cc7800002b715b8fa9b1) dm-54 IBM,1814 FAStT
size=2.0T features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| `- 0:0:0:2 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 1:0:1:2 sdl 8:176 active ghost running
ds4klun2 (3600a0b800033623a0000109f52579d21) dm-56 IBM,1814 FAStT
size=1.5T features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| `- 0:0:1:1 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 1:0:0:1 sdh 8:112 active ghost running
ds4klun1 (3600a0b800048d23a000017255260b059) dm-53 IBM,1814 FAStT
size=1.5T features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| `- 1:0:1:1 sdk 8:160 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 0:0:0:1 sdb 8:16 active ghost running
ds4klun23 (3600a0b800033623a000024625b8fa1a8) dm-57 IBM,1814 FAStT
size=2.0T features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| `- 0:0:1:3 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 1:0:0:3 sdi 8:128 active ghost running
ds4klun22 (3600a0b8000339326000024d15b8fa0c6) dm-55 IBM,1814 FAStT
size=2.0T features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| `- 1:0:0:0 sdg 8:96 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 0:0:1:0 sdd 8:48 active ghost running
 
and here is multipath.conf:
defaults {
    polling_interval       10
    path_selector          "round-robin 0"
    path_grouping_policy   failover
    uid_attribute          ID_SERIAL
    getuid_callout         "/lib/udev/scsi_id -g -u -d /dev/%n"
    prio                   rdac
    path_checker           rdac
    rr_min_io              100
    rr_weight              priorities
    failback               immediate
    no_path_retry          fail
    user_friendly_names    yes
}

blacklist {
    wwid .*
}

blacklist_exceptions {
    wwid "3600a0b800048d23a000017255260b059"
#   wwid "3600a0b800048d23a00000f864dca2f98"
    wwid "3600a0b800033623a0000109f52579d21"
#   wwid "3600a0b80003393260000148a526f202e"
    wwid "3600a0b800033623a000024625b8fa1a8"
    wwid "3600a0b8000339326000024d15b8fa0c6"
    wwid "3600a0b800048cc7800002b715b8fa9b1"
    wwid "3600a0b800048cc7800002b755b8fabeb"
}

multipaths {
    multipath {
        wwid  "3600a0b800048d23a000017255260b059"
        alias ds4klun1
    }
    multipath {
        wwid  "3600a0b800033623a0000109f52579d21"
        alias ds4klun2
    }
    multipath {
        wwid  "3600a0b800048cc7800002b715b8fa9b1"
        alias ds4klun12
    }
    multipath {
        wwid  "3600a0b800048cc7800002b755b8fabeb"
        alias ds4klun13
    }
    multipath {
        wwid  "3600a0b8000339326000024d15b8fa0c6"
        alias ds4klun22
    }
    multipath {
        wwid  "3600a0b800033623a000024625b8fa1a8"
        alias ds4klun23
    }
}
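
If multipath.conf is changed, the running daemon has to re-read it before the aliases take effect; the usual sequence is roughly (standard multipath-tools commands):

Code:
multipathd -k"reconfigure"   # make the running daemon reload multipath.conf
multipath -ll                # verify the ds4k* aliases and both path groups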
 
Multipath looks OK.

For the LVM part:

Currently I have this configured in lvm.conf (on top of the default configuration):
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/sda|", "r|/dev/sdb|", "r|/dev/sdc|", "r|/dev/sdd|", "r|/dev/sde|", "r|/dev/sdf|", "r|/dev/sdg|", "r|/dev/sdh|", "r|/dev/sdi|", "$

We use the reverse logic of your filter and leave global_filter unset (adapted to your setup):

Code:
filter = [ "a|/dev/mapper/ds4k*|", "a|/dev/sda?|", "r|.*|" ]

Every LVM operation, such as create, is done ONLY through PVE, because no LVM cluster daemon is installed. Did you check whether your LVM storage inside PVE has the 'shared' checkbox activated?
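
One caveat: LVM filter patterns are regular expressions, not shell globs, and they are unanchored, so "a|/dev/sda?|" also matches /dev/sdb, /dev/sdc and so on (the "sd" substring alone already matches). A tighter, anchored variant of the same idea (a sketch; sdm is the local boot disk in the pvs output above):

Code:
filter = [ "a|^/dev/mapper/ds4k|", "a|^/dev/sdm|", "r|.*|" ]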
 
Regarding the suggested filter:

Code:
filter = [ "a|/dev/mapper/ds4k*|", "a|/dev/sda?|", "r|.*|" ]

- I think it would be better to change "a|/dev/sda?|" to "a|/dev/sdm?|", because:


PV                     VG      Fmt  Attr PSize   PFree
/dev/mapper/ds4klun1   DS4_1   lvm2 a--  1.49t   1.49t
/dev/mapper/ds4klun12  DS4_1_2 lvm2 a--  2.00t   2.00t
/dev/mapper/ds4klun13  DS4_1_3 lvm2 a--  2.00t   2.00t
/dev/mapper/ds4klun2   DS4_2   lvm2 a--  1.49t   316.96g
/dev/mapper/ds4klun22  DS4_2_2 lvm2 a--  2.00t   1.85t
/dev/mapper/ds4klun23  DS4_2_3 lvm2 a--  2.00t   1.96t
/dev/sdm3              pve     lvm2 a--  135.72g 16.00g

What do you think?

Every LVM operation, such as create, is done ONLY through PVE, because no LVM cluster daemon is installed. Did you check whether your LVM storage inside PVE has the 'shared' checkbox activated?

In lvm.conf I changed locking_type from 1 to 3. Now when I create or delete a VM on LVM, all LVM information is replicated between the servers.

storage.cfg

nfs: DDbkpvm
    export /data/col1/bkpvm
    path /mnt/pve/DDbkpvm
    server 192.168.1.42
    content iso,backup
    maxfiles 4
    options vers=3
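
For comparison, a shared LVM entry in /etc/pve/storage.cfg looks roughly like this when the 'shared' checkbox is set (a sketch; storage and VG names taken from this thread):

Code:
lvm: DS4_2_2
    vgname DS4_2_2
    content images
    shared 1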

-------------
qmrestore /mnt/pve/DDbkpvm/dump/vzdump-qemu-201-2018_09_20-21_09_08.vma.lzo 201 --storage DS4_1_1


WARNING: Device mismatch detected for DS4_2_2/vm-326-disk-2 which is accessing /dev/sdg instead of /dev/mapper/ds4klun22.
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
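
Those last four lines are the direct consequence of locking_type = 3: it makes LVM contact clvmd over a local socket, and Proxmox does not ship or run clvmd, so cluster locking initialisation fails and LVM falls back to local locking. PVE expects the stock setting and coordinates shared LVM itself; the relevant lvm.conf fragment (default value shown):

Code:
global {
    locking_type = 1   # file-based locking; PVE does its own cluster-wide locking
}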
 
Did you check whether your LVM storage inside PVE has the 'shared' checkbox activated?

How can I check that?
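
For reference, the flag shows up per storage entry in /etc/pve/storage.cfg and can also be set from the shell (standard PVE tooling; it is an assumption here that pvesm set accepts --shared for LVM storages):

Code:
grep -A4 '^lvm:' /etc/pve/storage.cfg   # look for a "shared 1" line
pvesm set DS4_2_2 --shared 1            # or tick Datacenter -> Storage -> Edit -> Shared in the GUI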
 
I'm still looking for a solution.

WARNING: Device mismatch detected for DS4_1/vm-201-disk-1 which is accessing /dev/sdh instead of /dev/mapper/ds4klun1.
WARNING: Device mismatch detected for DS4_1/vm-201-disk-2 which is accessing /dev/sdh instead of /dev/mapper/ds4klun1.

I have tried these filters:

filter = [ "r|/dev/sdh|" ]
filter = [ "a|/dev/mapper/ds4k*|", "a|/dev/sdm|", "r|.*|" ]
 
ls /dev/mapper/
control DS4_2_3-vm--105--disk--1 DS4_2-vm--203--disk--2 DS4_2-vm--227--disk--2 DS4_2-vm--233--disk--5 DS4_2-vm--312--disk--2 ds4klun22
DS4_1-vm--201--disk--1 DS4_2_3-vm--108--disk--1 DS4_2-vm--205--disk--1 DS4_2-vm--230--disk--1 DS4_2-vm--233--disk--6 DS4_2-vm--315--disk--1 ds4klun23
DS4_1-vm--201--disk--2 DS4_2_3-vm--110--disk--1 DS4_2-vm--206--disk--1 DS4_2-vm--231--disk--1 DS4_2-vm--233--disk--7 DS4_2-vm--315--disk--2 pve-data
DS4_2_2-vm--203--disk--1 DS4_2_3-vm--124--disk--1 DS4_2-vm--208--disk--1 DS4_2-vm--231--disk--2 DS4_2-vm--234--disk--1 DS4_2-vm--326--disk--1 pve-data_tdata
DS4_2_2-vm--203--disk--2 DS4_2-vm--103--disk--1 DS4_2-vm--209--disk--1 DS4_2-vm--231--disk--3 DS4_2-vm--235--disk--1 DS4_2-vm--326--disk--2 pve-data_tmeta
DS4_2_2-vm--204--disk--1 DS4_2-vm--105--disk--2 DS4_2-vm--211--disk--1 DS4_2-vm--232--disk--1 DS4_2-vm--236--disk--1 ds4klun1 pve-root
DS4_2_2-vm--326--disk--1 DS4_2-vm--108--disk--1 DS4_2-vm--212--disk--1 DS4_2-vm--232--disk--2 DS4_2-vm--236--disk--2 ds4klun12 pve-swap
DS4_2_2-vm--326--disk--2 DS4_2-vm--112--disk--1 DS4_2-vm--225--disk--1 DS4_2-vm--232--disk--3 DS4_2-vm--236--disk--3 ds4klun13
DS4_2_3-vm--103--disk--1 DS4_2-vm--203--disk--1 DS4_2-vm--227--disk--1 DS4_2-vm--233--disk--1 DS4_2-vm--312--disk--1 ds4klun2
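
One more way to confirm what an LV is stacked on is dmsetup: if a VM disk depends on sdg's device numbers (8:96 in the multipath output above) instead of the ds4klun22 map, it was activated via the raw path (standard device-mapper tooling; LV name taken from the listing above):

Code:
dmsetup deps /dev/mapper/DS4_2_2-vm--326--disk--1   # prints the (major, minor) pairs it sits on
ls -l /dev/sdg /dev/mapper/ds4klun22                # compare against these device numbers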
 
