[SOLVED] LVM over iSCSI not working with multipath devices

cjkeist

New Member
Feb 11, 2020
Proxmox version 6.1-7. I'm trying to get LVM over iSCSI working. My iSCSI server runs OmniOS using NappIt. I created the iSCSI share on the NappIt server along with my host group, iSCSI target group and view. The Proxmox cluster (six nodes) can add the iSCSI volume with no issues. However, whenever I try to create an LVM on the iSCSI volume I get the following error:

create storage failed: error with cfs lock 'file-storage_cfg': device '/dev/disk/by-id/scsi-3600144f0e8ee406800005e69072c0003' is already used by volume group '[unknown]' (500)

Looking at pvdisplay I get:

root@sun1:~# pvdisplay

WARNING: Not using device /dev/sdd for PV mABXRn-dx1c-0QFI-1lJy-iG3N-1vpO-GYyycT.
WARNING: PV mABXRn-dx1c-0QFI-1lJy-iG3N-1vpO-GYyycT prefers device /dev/sdc because device was seen first.
--- Physical volume ---
PV Name /dev/sdb
VG Name ceph-4cede2e7-0403-488c-abf6-31d74371b2e2
PV Size 1.45 TiB / not usable 189.00 MiB
Allocatable yes (but full)
PE Size 1.00 GiB
Total PE 1489
Free PE 0
Allocated PE 1489
PV UUID y08aJm-OaHi-RnYF-L3xU-Akfq-klpk-Gc80Aa

--- Physical volume ---
PV Name /dev/sda3
VG Name pve
PV Size <371.10 GiB / not usable 2.98 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 95000
Free PE 4094
Allocated PE 90906
PV UUID guD30a-5nh0-rpPS-R3U5-4yM6-SwbU-x2Bokj

"/dev/sdc" is a new physical volume of "5.00 TiB"
--- NEW Physical volume ---
PV Name /dev/sdc
VG Name
PV Size 5.00 TiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID mABXRn-dx1c-0QFI-1lJy-iG3N-1vpO-GYyycT

My iSCSI volume is the 5 TB device mABXRn-dx1c-0QFI-1lJy-iG3N-1vpO-GYyycT, showing up on both /dev/sdc and /dev/sdd. So I'm seeing multiple paths here, since my NappIt server is on two VLANs that the Proxmox cluster can also reach.
Reading up on this, I installed multipath-tools on each node and rebooted. However, it doesn't look like it picked up the two paths to the iSCSI volume:

multipath -ll
is empty.
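With find_multipaths enabled, multipathd ignores devices until their WWID is whitelisted, so an empty multipath -ll output can simply mean the WWID was never recorded. A sketch of one way to check and whitelist it (the WWID below is the one from this thread; the device path is illustrative):

```shell
# Read the SCSI WWID from one of the underlying paths
/lib/udev/scsi_id -g -u -d /dev/sdc

# Whitelist that WWID in /etc/multipath/wwids
multipath -a 3600144f0e8ee406800005e69072c0003

# Rebuild the multipath maps and check again
multipath -r
multipath -ll
```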

/dev/mapper doesn't show the 5TB iSCSI volume:
root@sun1:~# ls /dev/mapper/
ceph--4cede2e7--0403--488c--abf6--31d74371b2e2-osd--block--84393cb3--9a2a--41ce--8f17--046e690a047e
control
pve-data
pve-data_tdata
pve-data_tmeta
pve-data-tpool
pve-root
pve-swap

I'm assuming this is what's preventing me from creating the LVM on the iSCSI volume?
 

cjkeist

New Member
Feb 11, 2020
Thank you. I got a step closer. I created the multipath.conf file below and copied it to all six nodes, then ran systemctl restart multipath-tools.service on each node. This created /dev/mapper/mpath0 and pvdisplay is now clean. However, when I try to create the LVM through the web GUI, the "Base Volume" list is empty. I also tried removing the iSCSI storage and re-adding it with the "Use LUNs Directly" option checked. So I'm still unable to create the LVM through the GUI. What's missing now?

Code:
defaults {
        find_multipaths         "on"
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        uid_attribute           ID_SERIAL
        rr_min_io               100
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^(td|hd)[a-z]"
        devnode "^dcssblk[0-9]*"
        devnode "^cciss!c[0-9]d[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "EMC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "Universal Xport"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        device {
                vendor "DELL"
                product "Universal Xport"
        }
        device {
                vendor "SGI"
                product "Universal Xport"
        }
        device {
                vendor "STK"
                product "Universal Xport"
        }
        device {
                vendor "SUN"
                product "Universal Xport"
        }
        device {
                vendor "(NETAPP|LSI|ENGENIO)"
                product "Universal Xport"
        }
}
blacklist_exceptions {
        wwid "3600144f0e8ee406800005e69072c0003"
}

multipaths {
  multipath {
        wwid "3600144f0e8ee406800005e69072c0003"
        alias mpath0
  }
}

Code:
root@sun1:/etc# multipath -ll
mpath0 (3600144f0e8ee406800005e69072c0003) dm-7 SUN,COMSTAR
size=5.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 7:0:0:0 sdc 8:32 active ready running
  `- 8:0:0:0 sdd 8:48 active ready running
 

LnxBil

Famous Member
Feb 21, 2015
5,465
596
133
Germany
I've never used the GUI for LVM, but off the top of my head the next steps would be (this assumes there is no LVM present yet and that all data on the multipathed device can be lost):

Code:
pvcreate /dev/mapper/mpath0
vgcreate iscsi /dev/mapper/mpath0

Now the volume group is created and should be visible in PVE.
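Once the VG exists, it can be registered cluster-wide as shared LVM storage, either via the GUI or with pvesm. A sketch of the resulting /etc/pve/storage.cfg entry ("iscsi-lvm" is a hypothetical storage ID; the VG name "iscsi" matches the vgcreate above):

```
lvm: iscsi-lvm
        vgname iscsi
        shared 1
        content images,rootdir
```

Since /etc/pve is replicated across the cluster by pmxcfs, this only needs to be done once, not on every node.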
 

cjkeist

New Member
Feb 11, 2020
I did try those commands, although I first created a partition on the mpath0 device, which created mpath0-part1. Is it necessary to partition the iSCSI volume first?
PVE didn't see anything and I could not create any LVM entries.
Do I have to run the pvcreate and vgcreate on each node?
 

LnxBil

Famous Member
Feb 21, 2015
Germany

cjkeist

New Member
Feb 11, 2020
Thanks, I was able to do the pvcreate and vgcreate. At first, the pvcreate failed with the error:

root@sun1:/etc/lvm# pvcreate /dev/mapper/mpath0
Device /dev/mapper/mpath0 excluded by a filter.

Was able to clear that by running: wipefs -a /dev/mapper/mpath0
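For reference, "excluded by a filter" during pvcreate usually means LVM found an existing signature or partition table on the device. Running wipefs without -a first lists what is on the disk non-destructively, so you can see what would be erased before wiping:

```shell
# List existing filesystem/RAID/partition-table signatures without erasing anything
wipefs /dev/mapper/mpath0
```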

Then ran:
pvcreate /dev/mapper/mpath0
vgcreate iscsi /dev/mapper/mpath0

Then ran:

pvesm add lvm datadisk --vgname iscsi


It looks to be working! Thank you for all the help.

One question, after going through all this work: is LVM over iSCSI worth it compared to just using NFS?
 

LnxBil

Famous Member
Feb 21, 2015
Germany
Well, that depends on your needs. I'd personally always favor iSCSI+LVM over NFS (for performance and reliability reasons); block-based storage is usually preferred. But with NFS you can use qcow2, so you get thin provisioning and snapshots.

Same here.
 

stefanobertoli

New Member
Apr 25, 2020
Are thin provisioning and snapshots working properly over an iSCSI or SCSI storage?
I remember reading something about it not being possible to activate snapshots on a thin-LVM built on top of shared storage...

Is there any news about it? These days I'm implementing a two-node Proxmox cluster on top of a SCSI IBM Storwize.
 

LnxBil

Famous Member
Feb 21, 2015
Germany
Are thin provisioning and snapshots working properly over an iSCSI or SCSI storage?
I remember reading something about it not being possible to activate snapshots on a thin-LVM built on top of shared storage...

In a shared configuration, no. Yet if you only use the LUN on one node, there is no problem (and no cluster).
You can however have ZFS-over-iSCSI which does thin-provisioning and snapshots, but you need to control the SAN.
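For reference, a ZFS-over-iSCSI storage definition looks roughly like this in /etc/pve/storage.cfg; every name and address here is hypothetical, and the SAN side must run a target PVE can manage (COMSTAR, istgt, IET or LIO):

```
zfs: san-zfs
        portal 192.0.2.10
        target iqn.2010-08.org.example:target0
        pool tank
        iscsiprovider comstar
        sparse 1
        content images
```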

Is there any news about it? These days I'm implementing a two-node Proxmox cluster on top of a SCSI IBM Storwize.

Hopefully with a third vote, otherwise you will not have much fun with your two-node cluster.
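A cheap way to get that third vote is a QDevice on any always-on external machine. A sketch, assuming a reachable host at 192.0.2.50 (address hypothetical):

```shell
# On each cluster node: install the QDevice client
apt install corosync-qdevice

# On the external machine: apt install corosync-qnetd

# From one cluster node: register the external vote and verify
pvecm qdevice setup 192.0.2.50
pvecm status
```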
 

stefanobertoli

New Member
Apr 25, 2020
In a shared configuration, no. Yet if you only use the LUN on one node, there is no problem (and no cluster).
You can however have ZFS-over-iSCSI which does thin-provisioning and snapshots, but you need to control the SAN.

This is the answer I was searching for. Have you tested a configuration with thin-LVM on top of a SCSI/FC disk with multipath, mounted only on a single node?
Why do you say there's no cluster? I think I can put two nodes in a cluster with no shared storage... as if they only had local disks.


Hopefully with a third vote, otherwise you will not have much fun with your two-node cluster.
If you're referring to Ceph, I'm with you. My question was specifically about news concerning the subject of this thread (thin-LVM, FC, SCSI).
 

LnxBil

Famous Member
Feb 21, 2015
Germany
This is the answer I was searching for. Have you tested a configuration with thin-LVM on top of a SCSI/FC disk with multipath, mounted only on a single node?

As long as the LUN is only presented to one host, there is no problem, even with multipath, as long as you use the multipathed device and have proper filter settings in your lvm.conf.
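The filter settings mentioned here live in the devices { } section of /etc/lvm/lvm.conf. A sketch that accepts the multipath maps and the local boot disk while rejecting the raw sdX paths underneath (device names are illustrative; adjust to the host):

```
devices {
        # accept multipath maps and the local boot disk, reject everything else
        filter = [ "a|/dev/mapper/mpath.*|", "a|/dev/sda.*|", "r|.*|" ]
}
```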

Why do you say there's no cluster? I think I can put two nodes in a cluster with no shared storage... as if they only had local disks.
Yes, it's called a cluster, but it doesn't have the things I would want from something called a cluster; that's just terminology. Be aware that a two-node cluster will break if one node fails and you haven't set the expected-votes configuration that such a setup needs.
 

stefanobertoli

New Member
Apr 25, 2020
As long as the LUN is only presented to one host, there is no problem, even with multipath, as long as you use the multipathed device and have proper filter settings in your lvm.conf.

You mention lvm.conf. I got multipath up and running with multipath.conf and didn't touch the lvm.conf file... Am I missing something?
 

LnxBil

Famous Member
Feb 21, 2015
Germany
You mention lvm.conf. I got multipath up and running with multipath.conf and didn't touch the lvm.conf file... Am I missing something?

You may run into warnings about multiple devices having identical LVM signatures, but those are only cosmetic. It'll work without changing lvm.conf.
 
