Hello Members,
I'm not very experienced in integrating iSCSI into PVE, so this was my first configuration and there were some unexpected behaviours. In the end everything worked, including the HA tests, but a few questions about the integration process remain, and the answers might also help extend the Multipath guide if needed: https://pve.proxmox.com/wiki/Multipath#iSCSI
This is my multipath.conf:
Code:
root@pve1:~# cat /etc/multipath.conf
defaults {
    polling_interval 5
    checker_timeout 15
    find_multipaths strict
    user_friendly_names yes
    enable_foreign nvme
}
devices {
    device {
        vendor DellEMC
        product PowerStore
        detect_prio "yes"
        path_selector "queue-length 0"
        path_grouping_policy "group_by_prio"
        path_checker tur
        failback immediate
        fast_io_fail_tmo 5
        no_path_retry 3
        rr_min_io_rq 1
        max_sectors_kb 1024
        dev_loss_tmo 10
        hardware_handler "1 alua"
    }
}
blacklist {
    devnode "sd[a-h]$"
}
The blacklist is there because Ceph is also used on this system, so the local disks (sda-sdh) are excluded from multipath.
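In case it is useful, I think the applied configuration and blacklist can be checked roughly like this (assuming a recent multipath-tools where multipathd accepts commands directly):
Code:
# reload multipathd after editing /etc/multipath.conf
multipathd reconfigure
# dump the effective merged configuration (defaults, device section, blacklist)
multipathd show config | less
# the blacklisted Ceph disks sda-sdh should not appear as paths here
multipath -ll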
These are the WWIDs from the /etc/multipath/wwids file:
Code:
root@pve1:~# cat /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
#/3500a075150392adf/
#/35000cca04fb46f94/
#/35000cca04fb4a36c/
#/35002538b097b6bb0/
#/35002538b097b6a30/
#/35000cca04fb4daac/
#/35000cca04fb4bd7c/
/368ccf09800fc69d0987f2076fa063e6b/
#/350025380a4c61db0/
/368ccf09800042f77bfddd1b0bb7bf57b/
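For reference, the two uncommented entries are the PowerStore LUNs; they were registered roughly like this, along the lines of the wiki (WWIDs taken from the file above):
Code:
# register the WWIDs of the two PowerStore LUNs with multipath
multipath -a 368ccf09800fc69d0987f2076fa063e6b
multipath -a 368ccf09800042f77bfddd1b0bb7bf57b
# rebuild/refresh the multipath maps
multipath -r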
I tested with 2 LUNs from a Dell storage array. I followed all the steps from the Multipath wiki post and got this:
Code:
root@pve1:~# multipath -ll
mpathi (368ccf09800fc69d0987f2076fa063e6b) dm-6 DellEMC,PowerStore
size=5.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 15:0:0:1 sdk 8:160 active ready running
| `- 16:0:0:1 sdl 8:176 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  |- 13:0:0:1 sdi 8:128 active ready running
  `- 14:0:0:1 sdj 8:144 active ready running
mpathk (368ccf09800042f77bfddd1b0bb7bf57b) dm-7 DellEMC,PowerStore
size=5.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 13:0:0:2 sdo 8:224 active ready running
| `- 14:0:0:2 sdn 8:208 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  |- 15:0:0:2 sdm 8:192 active ready running
  `- 16:0:0:2 sdp 8:240 active ready running
Looks fine. Somehow only the first LUN is called /dev/mapper/mpathi on all 3 nodes; the second LUN, which was created today, has a different name on each node (mpathk on node1, mpathl on node2 and mpathj on node3). Can anyone explain whether this is correct or wrong, or whether it matters at all?
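In case it helps with answering: as far as I understand, with user_friendly_names yes each node keeps its own WWID-to-mpathX mapping in /etc/multipath/bindings, assigned in discovery order, which would explain the different names. If stable names are preferred, pinning an alias per WWID should be possible (the alias names below are just examples, not something I use):
Code:
# per-node mapping of WWID -> mpathX (discovery order, hence possibly different per node)
cat /etc/multipath/bindings

# optional addition to /etc/multipath.conf to pin consistent names (example aliases)
multipaths {
    multipath {
        wwid  368ccf09800fc69d0987f2076fa063e6b
        alias powerstore-lun1
    }
    multipath {
        wwid  368ccf09800042f77bfddd1b0bb7bf57b
        alias powerstore-lun2
    }
}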
I also wanted to create an LVM on top of iSCSI and tried it via the web UI and the CLI; both produced the following error:
Code:
create storage failed: vgcreate iscsi-500t-vg /dev/disk/by-id/scsi-368ccf09800fc69d0987f2076fa063e6b error: Cannot use device /dev/mapper/mpathi with duplicates. (500)
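As far as I can tell, the duplicates come from LVM seeing the same LUN (and thus the same PV label) on every individual path device (sdi-sdp on pve1, per the output above) as well as on the multipath device on top of them, e.g.:
Code:
# each LUN shows up once per iSCSI path plus once as the dm/mpath device,
# which is why LVM complains about duplicates without a filter
lsblk -o NAME,SIZE,TYPE,WWN /dev/sd[i-p] /dev/mapper/mpath*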
I found a solution for this by adding an LVM filter config. Is this needed? And if yes, can it be included in the wiki post?
Code:
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.backup.$(date +%Y%m%d-%H%M%S)
nano /etc/lvm/lvm.conf

# add inside the devices { } section:
# only /dev/mapper/mpath* and /dev/sd[c-h] are allowed for the LVM config
filter = [ "a|^/dev/mapper/mpath.*|", "a|^/dev/sd[c-h]$|", "r|.*|" ]
global_filter = [ "a|^/dev/mapper/mpath.*|", "a|^/dev/sd[c-h]$|", "r|.*|" ]

# verify with a one-off filter, then refresh the LVM cache
pvs --config 'devices { filter = [ "a|^/dev/mapper/mpath.*|", "a|^/dev/sd[c-h]$|", "r|.*|" ] }'
pvscan --cache
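If I understand correctly, lvm.conf is also embedded in the initramfs, so the filter should probably be rebuilt into it as well so it already applies during early boot (please correct me if this is unnecessary):
Code:
# rebuild the initramfs so the LVM filter is also active at boot
update-initramfs -u -k all
# afterwards pvs should only report the PVs on the mpath devices, without duplicate warnings
pvs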
After that, adding an LVM via the web UI worked, both for LUN1 (which is /dev/mapper/mpathi on all 3 nodes) and ALSO for LUN2 (which has the different /dev/mapper/mpath devices k, l, j). This is what the storage.cfg looks like; I added each portal to the datacenter storage config:
Code:
iscsi: iscsi
        portal 10.10.1.71
        target iqn.2015-10.com.dell:dellemc-powerstore-crk00230109862-a-0c343331
        content none
        nodes pve2,pve3,pve1

iscsi: iscsi-2
        portal 10.10.1.71
        target iqn.2015-10.com.dell:dellemc-powerstore-crk00230109862-a-7b5a2c77
        content none

iscsi: iscsi-3
        portal 10.10.1.71
        target iqn.2015-10.com.dell:dellemc-powerstore-crk00230109862-b-3dd609dd
        content none

iscsi: iscsi-4
        portal 10.10.1.71
        target iqn.2015-10.com.dell:dellemc-powerstore-crk00230109862-b-5ef4a37e
        content none

lvm: iscsi-pve-storage-1
        vgname iscsi-500t-vg1
        base iscsi:0.0.1.scsi-368ccf09800fc69d0987f2076fa063e6b
        content images,rootdir
        saferemove 0
        shared 1
        snapshot-as-volume-chain 1

lvm: iscsi-pve-storage-2
        vgname iscsi-500t-vg2
        base iscsi:0.0.2.scsi-368ccf09800042f77bfddd1b0bb7bf57b
        content rootdir,images
        saferemove 0
        shared 1
        snapshot-as-volume-chain 1
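For completeness, I think the same setup could also be done from the CLI with pvesm; a sketch with the values from my config (adding the LVM storage with a base volume should run the vgcreate step, the same as the web UI):
Code:
# add one of the iSCSI targets (content none, it is only used as the base for LVM)
pvesm add iscsi iscsi --portal 10.10.1.71 \
    --target iqn.2015-10.com.dell:dellemc-powerstore-crk00230109862-a-0c343331 \
    --content none
# add the shared LVM storage on top of that LUN
pvesm add lvm iscsi-pve-storage-1 --vgname iscsi-500t-vg1 \
    --base iscsi:0.0.1.scsi-368ccf09800fc69d0987f2076fa063e6b \
    --shared 1 --content images,rootdir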
Both pools work with regard to high availability, storage migration and link loss (tested with ifdown on the iSCSI connections), although the second pool has different /dev/mapper devices on each node; that is the part that confuses me. Can anyone explain? Is the setup correct, and are there things you would do differently?
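This is roughly how the link-loss test was done, in case someone wants to reproduce it (the interface name is just a placeholder for one of the iSCSI NICs):
Code:
# take one iSCSI NIC down and watch the affected paths fail while I/O continues on the rest
ifdown <iscsi-nic>
watch multipath -ll
# bring it back up; the paths should return to 'active ready running'
ifup <iscsi-nic>
multipath -ll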