Help with multipath ISCSI

morocho1979

Active Member
May 11, 2017
Hello. I set up multipath and everything works, and it is in production. But I need to modify the LVM storage config ("HDD") so that it works with both iSCSI controllers, DE4000H and DE4000HB; right now it uses only one controller (DE4000H), my mistake. Is it possible to do this without losing data or downtime? Thanks for the advice.




lvm: HDD
vgname HBL
base DE4000H:0.0.1.dm-uuid-mpath-36d039ea00027771600000195602ae8ae
content images,rootdir
shared 1


iscsi: DE4000H
portal 190.168.6.91
target iqn.2003-09.com.lenovo:thinksystem.6d039e1027771610000005fd3ffc1
content none

iscsi: DE4000HB
portal 190.168.6.250
target iqn.2003-09.com.lenovo:thinksystem.6d039e1027771610000005fd3ffc1
content none
 
Admittedly, it's a little hard to discern your question, and some important information is missing to answer it with certainty.

It seems like you have an iSCSI storage attached to your host via multiple paths.
You have configured multipath.
Some part of your setup is now in production.

You don't think you have properly defined your storage, and suspect it may be using only one path?

Here is something to keep in mind:
a) Setting up iSCSI with multipath should be done manually, i.e. outside of the PVE interface.
b) In a multi-host/cluster environment you should set up iSCSI and multipath on all hosts.
c) The target device for the VM image and other usage will be the resulting "dm" device, not a direct disk.
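To make (a) concrete, a minimal /etc/multipath.conf along the lines of the PVE "ISCSI Multipath" wiki might look like the sketch below. This is an illustration, not your exact configuration: the wwid is taken from the dm-uuid shown in your storage.cfg, and you should confirm it on your own host (e.g. with /lib/udev/scsi_id -g -u -d /dev/sdX) before using anything like this.

```
# /etc/multipath.conf : minimal sketch following the PVE "ISCSI Multipath" wiki.
# Blacklist everything, then whitelist only the known multipath LUN by wwid.
blacklist {
        wwid .*
}

blacklist_exceptions {
        wwid "36d039ea00027771600000195602ae8ae"
}

multipaths {
        multipath {
                wwid "36d039ea00027771600000195602ae8ae"
                alias mpath0
        }
}
```

After editing, reload with "systemctl restart multipathd" (or "multipath -r") and check the result with "multipath -ll".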

The missing information that would be helpful to you and the community is:
a) output of "iscsiadm -m node"
b) output of "multipath -ll"
c) output of "lsblk"
d) output of "pvesm status"
e) output of "pvesm list <storage_name>" // for each storage listed in "status" output

You really need to understand what storage is in use and what each name maps to before you are able to make changes.



Blockbridge: Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Here is the information:

iscsiadm -m node
192.168.6.90:3260,1 iqn.2002-09.com.lenovo:thinksystem.6d039ea000277716000000005fd3ffc1
192.168.6.101:3260,1 iqn.2002-09.com.lenovo:thinksystem.6d039ea000277716000000005fd3ffc1
192.168.6.104:3260,2 iqn.2002-09.com.lenovo:thinksystem.6d039ea000277716000000005fd3ffc1
192.168.133.101:3260,2 iqn.2002-09.com.lenovo:thinksystem.6d039ea000277716000000005fd3ffc1
192.168.130.101:3260,1 iqn.2002-09.com.lenovo:thinksystem.6d039ea000277716000000005fd3ffc1
192.168.131.101:3260,1 iqn.2002-09.com.lenovo:thinksystem.6d039ea000277716000000005fd3ffc1
192.168.6.91:3260,2 iqn.2002-09.com.lenovo:thinksystem.6d039ea000277716000000005fd3ffc1
192.168.133.101:3260,1 iqn.2002-09.com.lenovo:thinksystem.6d039ea000277716000000005fd3ffc1
192.168.131.102:3260,2 iqn.2002-09.com.lenovo:thinksystem.6d039ea000277716000000005fd3ffc1
192.168.130.102:3260,2 iqn.2002-09.com.lenovo:thinksystem.6d039ea000277716000000005fd3ffc1
192.168.135.102:3260,2 iqn.2002-09.com.lenovo:thinksystem.6d039ea0002919b2000000005feb4148
192.168.130.101:3260,1 iqn.2002-09.com.lenovo:thinksystem.6d039ea0002919b2000000005feb4148
192.168.131.101:3260,1 iqn.2002-09.com.lenovo:thinksystem.6d039ea0002919b2000000005feb4148
192.168.134.101:3260,1 iqn.2002-09.com.lenovo:thinksystem.6d039ea0002919b2000000005feb4148
192.168.133.102:3260,2 iqn.2002-09.com.lenovo:thinksystem.6d039ea0002919b2000000005feb4148
192.168.6.103:3260,2 iqn.2002-09.com.lenovo:thinksystem.6d039ea0002919b2000000005feb4148
192.168.135.101:3260,1 iqn.2002-09.com.lenovo:thinksystem.6d039ea0002919b2000000005feb4148
192.168.134.102:3260,2 iqn.2002-09.com.lenovo:thinksystem.6d039ea0002919b2000000005feb4148
192.168.133.101:3260,1 iqn.2002-09.com.lenovo:thinksystem.6d039ea0002919b2000000005feb4148
192.168.131.102:3260,2 iqn.2002-09.com.lenovo:thinksystem.6d039ea0002919b2000000005feb4148
192.168.6.102:3260,1 iqn.2002-09.com.lenovo:thinksystem.6d039ea0002919b2000000005feb4148
192.168.130.102:3260,2 iqn.2002-09.com.lenovo:thinksystem.6d039ea0002919b2000000005feb4148

multipath -ll
mpath0 (36d039ea00027771600000195602ae8ae) dm-6 LENOVO,DE_Series
size=35T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 15:0:0:1 sdc 8:32 active ready running
| `- 16:0:0:1 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 19:0:0:1 sdf 8:80 active ready running
|- 17:0:0:1 sdd 8:48 active ready running
`- 18:0:0:1 sde 8:64 active ready running


lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 893.1G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
└─sda3 8:3 0 892.7G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 96G 0 lvm /
├─pve-data_tmeta 253:2 0 7.7G 0 lvm
│ └─pve-data-tpool 253:4 0 757.2G 0 lvm
│ └─pve-data 253:5 0 757.2G 0 lvm
└─pve-data_tdata 253:3 0 757.2G 0 lvm
└─pve-data-tpool 253:4 0 757.2G 0 lvm
└─pve-data 253:5 0 757.2G 0 lvm
sdb 8:16 0 35T 0 disk
└─mpath0 253:6 0 35T 0 mpath
├─HBL-vm--104--disk--0 253:7 0 200G 0 lvm
├─HBL-vm--103--disk--0 253:8 0 1.7T 0 lvm
├─HBL-vm--108--disk--0 253:9 0 32G 0 lvm
├─HBL-vm--111--disk--0 253:10 0 100G 0 lvm
├─HBL-vm--115--disk--0 253:11 0 100G 0 lvm
├─HBL-vm--138--disk--0 253:13 0 300G 0 lvm
├─HBL-vm--110--disk--0 253:14 0 250G 0 lvm
├─HBL-vm--114--disk--0 253:15 0 500G 0 lvm
├─HBL-vm--126--disk--0 253:16 0 380G 0 lvm
├─HBL-vm--137--disk--0 253:17 0 250G 0 lvm
├─HBL-vm--136--disk--0 253:27 0 250G 0 lvm
└─HBL-vm--146--disk--0 253:29 0 250G 0 lvm
sdc 8:32 0 35T 0 disk
└─mpath0 253:6 0 35T 0 mpath
├─HBL-vm--104--disk--0 253:7 0 200G 0 lvm
├─HBL-vm--103--disk--0 253:8 0 1.7T 0 lvm
├─HBL-vm--108--disk--0 253:9 0 32G 0 lvm
├─HBL-vm--111--disk--0 253:10 0 100G 0 lvm
├─HBL-vm--115--disk--0 253:11 0 100G 0 lvm
├─HBL-vm--138--disk--0 253:13 0 300G 0 lvm
├─HBL-vm--110--disk--0 253:14 0 250G 0 lvm
├─HBL-vm--114--disk--0 253:15 0 500G 0 lvm
├─HBL-vm--126--disk--0 253:16 0 380G 0 lvm
├─HBL-vm--137--disk--0 253:17 0 250G 0 lvm
├─HBL-vm--136--disk--0 253:27 0 250G 0 lvm
└─HBL-vm--146--disk--0 253:29 0 250G 0 lvm
sdd 8:48 0 35T 0 disk
└─mpath0 253:6 0 35T 0 mpath
├─HBL-vm--104--disk--0 253:7 0 200G 0 lvm
├─HBL-vm--103--disk--0 253:8 0 1.7T 0 lvm
├─HBL-vm--108--disk--0 253:9 0 32G 0 lvm
├─HBL-vm--111--disk--0 253:10 0 100G 0 lvm
├─HBL-vm--115--disk--0 253:11 0 100G 0 lvm
├─HBL-vm--138--disk--0 253:13 0 300G 0 lvm
├─HBL-vm--110--disk--0 253:14 0 250G 0 lvm
├─HBL-vm--114--disk--0 253:15 0 500G 0 lvm
├─HBL-vm--126--disk--0 253:16 0 380G 0 lvm
├─HBL-vm--137--disk--0 253:17 0 250G 0 lvm
├─HBL-vm--136--disk--0 253:27 0 250G 0 lvm
└─HBL-vm--146--disk--0 253:29 0 250G 0 lvm
sde 8:64 0 35T 0 disk
└─mpath0 253:6 0 35T 0 mpath
├─HBL-vm--104--disk--0 253:7 0 200G 0 lvm
├─HBL-vm--103--disk--0 253:8 0 1.7T 0 lvm
├─HBL-vm--108--disk--0 253:9 0 32G 0 lvm
├─HBL-vm--111--disk--0 253:10 0 100G 0 lvm
├─HBL-vm--115--disk--0 253:11 0 100G 0 lvm
├─HBL-vm--138--disk--0 253:13 0 300G 0 lvm
├─HBL-vm--110--disk--0 253:14 0 250G 0 lvm
├─HBL-vm--114--disk--0 253:15 0 500G 0 lvm
├─HBL-vm--126--disk--0 253:16 0 380G 0 lvm
├─HBL-vm--137--disk--0 253:17 0 250G 0 lvm
├─HBL-vm--136--disk--0 253:27 0 250G 0 lvm
└─HBL-vm--146--disk--0 253:29 0 250G 0 lvm
sdf 8:80 0 35T 0 disk
└─mpath0 253:6 0 35T 0 mpath
├─HBL-vm--104--disk--0 253:7 0 200G 0 lvm
├─HBL-vm--103--disk--0 253:8 0 1.7T 0 lvm
├─HBL-vm--108--disk--0 253:9 0 32G 0 lvm
├─HBL-vm--111--disk--0 253:10 0 100G 0 lvm
├─HBL-vm--115--disk--0 253:11 0 100G 0 lvm
├─HBL-vm--138--disk--0 253:13 0 300G 0 lvm
├─HBL-vm--110--disk--0 253:14 0 250G 0 lvm
├─HBL-vm--114--disk--0 253:15 0 500G 0 lvm
├─HBL-vm--126--disk--0 253:16 0 380G 0 lvm
├─HBL-vm--137--disk--0 253:17 0 250G 0 lvm
├─HBL-vm--136--disk--0 253:27 0 250G 0 lvm
└─HBL-vm--146--disk--0 253:29 0 250G 0 lvm

pvesm status
Name Type Status Total Used Available %
DE4000H iscsi active 0 0 0 0.00%
DE4000H-A2 iscsi active 0 0 0 0.00%
DE4000HB iscsi active 0 0 0 0.00%
DE4000HB-2 iscsi active 0 0 0 0.00%
HDD lvm active 37580959744 13649313792 23931645952 36.32%
ISO nfs active 30158642304 13431958912 16726683392 44.54%
Tank nfs active 30158642304 13431958912 16726683392 44.54%
local dir active 98559220 12132976 81376696 12.31%
local-lvm lvmthin active 793964544 0 793964544 0.00%

pvesm list DE4000H
Volid Format Type Size VMID
DE4000H:0.0.1.dm-uuid-mpath-36d039ea00027771600000195602ae8ae raw images 38482906972160
root@Proxm01:~# pvesm list DE4000HB
Volid Format Type Size VMID
DE4000HB:0.0.1.dm-uuid-mpath-36d039ea00027771600000195602ae8ae raw images 38482906972160
root@Proxm01:~# pvesm list DE4000HB-2
Volid Format Type Size VMID
DE4000HB-2:0.0.1.dm-uuid-mpath-36d039ea00027771600000195602ae8ae raw images 38482906972160
root@Proxm01:~# pvesm list DE4000H-A2
Volid Format Type Size VMID
DE4000H-A2:0.0.1.dm-uuid-mpath-36d039ea00027771600000195602ae8ae raw images 38482906972160
root@Proxm01:~#

storage.cfg config
dir: local
path /var/lib/vz
content iso,vztmpl,backup

lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir

iscsi: DE4000H
portal 192.168.6.90
target iqn.2002-09.com.lenovo:thinksystem.6d039ea000277716000000005fd3ffc1
content none

lvm: HDD
vgname HBL
base DE4000H:0.0.1.dm-uuid-mpath-36d039ea00027771600000195602ae8ae (can I change this line to something like "Existing volume groups", so it takes all the DE4000H paths?)
content images,rootdir
shared 1

nfs: Tank
export /mnt/RESVM
path /mnt/pve/Tank
server 192.168.6.100
content backup
prune-backups keep-all=1

nfs: ISO
export /mnt/RESVM
path /mnt/pve/ISO
server 192.168.6.100
content iso
prune-backups keep-all=1

iscsi: DE4000HB
portal 192.168.6.104
target iqn.2002-09.com.lenovo:thinksystem.6d039ea000277716000000005fd3ffc1
content none

iscsi: DE4000H-A2
portal 192.168.6.101
target iqn.2002-09.com.lenovo:thinksystem.6d039ea000277716000000005fd3ffc1
content none

iscsi: DE4000HB-2
portal 192.168.133.101
target iqn.2002-09.com.lenovo:thinksystem.6d039ea000277716000000005fd3ffc1
content none
 
I need to update the config in /etc/pve/storage.cfg: change the base volume group from one specific path to all available paths. Is it possible to do this via a command, and what do I have to write?



lvm: HDD
vgname HBL
base DE4000H:0.0.1.dm-uuid-mpath-36d039ea00027771600000195602ae8ae (can I change this line to something like "Existing volume groups", so it takes all the DE4000H paths?)
content images,rootdir
shared 1
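On the "via a command" part: there is no storage.cfg value that means "all paths". The change is simply to delete the base line, so that PVE attaches the LVM storage by volume-group name alone once multipath owns the underlying device. Below is a sketch of that one-line edit, performed on a throwaway copy of the stanza rather than on the live /etc/pve/storage.cfg (always back that file up first); the /tmp path and the sed pattern are illustrative assumptions, not an official procedure.

```shell
# Work on a sample copy of the stanza; on a real host you would back up and
# then edit /etc/pve/storage.cfg itself.
cat > /tmp/storage-hdd-sample.cfg <<'EOF'
lvm: HDD
        vgname HBL
        base DE4000H:0.0.1.dm-uuid-mpath-36d039ea00027771600000195602ae8ae
        content images,rootdir
        shared 1
EOF

# Delete the 'base ...' line; every other option stays untouched.
sed -i '/^[[:space:]]*base /d' /tmp/storage-hdd-sample.cfg

cat /tmp/storage-hdd-sample.cfg
```

Depending on your PVE version, "pvesm set HDD --delete base" may achieve the same edit through the CLI; check "man pvesm" on your host before relying on it.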
 
Your configuration is quite "interesting". Here are top-level thoughts:

a) you should not need the "base" option in your LVM storage when using multipath
b) you should not need iscsi entries in your configuration file if your storage is presenting one LUN that is used by multipath
c) you should have the iSCSI sessions set up via iscsiadm, and ensure you have the proper settings configured as described here: https://pve.proxmox.com/wiki/ISCSI_Multipath
d) there is no way to specify more than one path via the "base" option
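Put together, a sketch of what the relevant part of /etc/pve/storage.cfg might look like once iscsiadm and multipath are managed outside PVE: no iscsi: entries and no "base" option, only the shared LVM storage attached by VG name. This assumes the VG "HBL" already sits on the multipath device (as the lsblk output above shows) and that every cluster node has the same iSCSI/multipath setup.

```
# Sketch of /etc/pve/storage.cfg (relevant stanza only), assuming iSCSI
# sessions and multipath are configured outside PVE on every node.
lvm: HDD
        vgname HBL
        content images,rootdir
        shared 1
```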


Blockbridge: Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Is there any way that I can change this: 1649430968931.png
to this?

1649430936218.png

In case I can get a downtime window: can I remove the storage ID "HDD" and add it again without losing data, choosing "Existing volume groups" as the base storage? Is that possible?
 
Yes,
that is exactly the point.

I have never seen this clearly written in a global official PVE howto. Something is missing in the official docs about iSCSI, multipath, and their integration in PVE.

About your "base" option comment: none is needed, but one per path could be useful, to get a view in the web UI when a path is faulty. Maybe it is time to post a feature request!

Regards,

Christophe.
 
This is not PVE specific, therefore there is no documentation about it.
Well, iSCSI is not PVE specific, and it is easy to connect a LUN in the web GUI.

Multipath is covered by documents, but if you leave things "as is", keeping your iSCSI storage and your LVM storage based on it (the "base" option in storage.cfg), you are on the wrong track.

On the other side, when you delete an iSCSI connection in the web GUI, it is not uncommon for users to think the resource has been freed at the system level, which is wrong and generates confusion and errors.

I'm not blaming!

Ceph is not PVE specific.
PVE integration is (really) better.

Regards,

Christophe.
 
Yes, iSCSI (and FC, for that matter) via multipath is not covered and not configurable via the GUI, not even in the installer. Maybe it's just a niche? At least the number of posts in the forum is negligible.
 
