PVE 8.0 through iSCSI to Dell MD3600i SAN (MPIO)

unassassinable

New Member
Nov 16, 2023
In short, I am a PVE & SAN storage noob. MPIO is not working, although it is configured according to vendor specs, AFAIK (details below).


Here are my settings:
  • PVE version 8.0.4 on a Dell R710 host with 6 NICs. 2 NICs are plugged into my iSCSI switch stack with the following settings:
    [screenshot: iSCSI NIC settings]
  • Dell PowerVault MD3600i SAN with 2 controllers (with 2 NICs each) plugged in to iSCSI switch stack.
    • Each interface is configured on the iSCSI network
      • 172.16.0.10, 11, 12, & 13/24
  • I have successfully added all 4 paths in datacenter > storage
    [screenshot: Datacenter > Storage iSCSI entries]
  • When trying to add an LVM, I cannot get it to see any base volumes
    [screenshot: Add LVM dialog showing no base volumes]

I don't believe I have MPIO set up correctly, and I'm not sure why. My /etc/multipath.conf looks like:
Code:
root@COBRA-S-PM01:~# cat /etc/multipath.conf

defaults {
        find_multipaths         yes
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        rr_min_io               100
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}

blacklist {
        wwid .*
}

blacklist_exceptions {
        wwid "36C81F66000C459D70000000062620821"
}

devices {
        device {
                vendor                  "DELL"
                product                 "MD36xxi"
                path_grouping_policy    group_by_prio
                prio                    rdac
                path_checker            rdac
                path_selector           "round-robin 0"
                hardware_handler        "1 rdac"
                failback                immediate
                features                "2 pg_init_retries 50"
                no_path_retry           30
                rr_min_io               100
        }
}

multipaths {
        multipath {
                wwid "36C81F66000C459D70000000062620821"
                alias md36xxi
        }
}
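
For reference, after editing this file I reload multipath and re-check the maps roughly like this (standard multipath-tools commands, as far as I understand; empty multipath -ll output means no maps were built):
Code:
systemctl restart multipathd    # re-read /etc/multipath.conf
multipath -r                    # force the multipath maps to be rebuilt
multipath -ll                   # show the resulting topology (empty output = nothing matched)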

I added the following 2 lines to the bottom of my /etc/iscsi/iscsid.conf according to this guide:
Code:
node.startup = automatic
node.session.timeo.replacement_timeout = 15
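
As far as I understand, existing sessions keep the old values until they are re-established; a minimal sketch of applying the change (assuming the stock Debian/PVE open-iscsi service names):
Code:
systemctl restart iscsid open-iscsi     # pick up the new iscsid.conf defaults
iscsiadm -m node --logoutall=all        # drop current sessions
iscsiadm -m node --loginall=all         # log back in with the new settings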

iscsiadm -m session
Code:
# iscsiadm -m session
tcp: [1] 172.16.0.13:3260,2 iqn.1984-05.com.dell:powervault.md3600i.6c81f66000c459d70000000062620821 (non-flash)
tcp: [2] 172.16.0.11:3260,1 iqn.1984-05.com.dell:powervault.md3600i.6c81f66000c459d70000000062620821 (non-flash)
tcp: [3] 172.16.0.10:3260,1 iqn.1984-05.com.dell:powervault.md3600i.6c81f66000c459d70000000062620821 (non-flash)
tcp: [4] 172.16.0.12:3260,2 iqn.1984-05.com.dell:powervault.md3600i.6c81f66000c459d70000000062620821 (non-flash)
root@COBRA-S-PM01:~#
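
A more verbose view (print level 3) also shows which SCSI devices, if any, each session has attached; something like:
Code:
iscsiadm -m session -P 3 | grep -E 'Target:|Current Portal:|Attached scsi disk|Session State'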

lsblk shows only local storage:
Code:
root@COBRA-S-PM01:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0 136.1G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0     1G  0 part
└─sda3               8:3    0 135.1G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0  43.8G  0 lvm  /
  ├─pve-data_tmeta 253:2    0     1G  0 lvm
  │ └─pve-data     253:4    0  65.3G  0 lvm
  └─pve-data_tdata 253:3    0  65.3G  0 lvm
    └─pve-data     253:4    0  65.3G  0 lvm
sr0                 11:0    1  1024M  0 rom
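
As far as I understand, one thing I can try is asking open-iscsi to rescan the logged-in sessions so the kernel re-probes the LUNs (a sketch):
Code:
iscsiadm -m session --rescan    # re-probe LUNs on every logged-in session
lsblk                           # check whether the SAN disks show up now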

I know there is more I am not showing, but I'm not sure what would be helpful. Please let me know and I'll reply ASAP.

Thank you!!
 
This is not a WWN: iqn.1984-05.com.dell:powervault.md3600i.6c81f66000c459d70000000062620821
This is an iSCSI target IQN, i.e., an address. The WWID is part of the LUN/disk config.

That said, you should still see disks in lsblk and in lsscsi, even if multipath is misconfigured.

Are the sessions logged in? iscsiadm -m session --login

I am not a fan of using two interfaces on the same subnet; there is always ambiguity there. I'd recommend using different subnets, i.e., 172.16.0.0/24 and 172.16.1.0/24.

If the sessions are logged in, double-check that you actually assigned a LUN to the target.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Also note that the LVM wizard is looking for an LVM volume group (VG) disk; you don't pick iSCSI as storage there.
You need to manually create a PV and VG on top of the mpath device once you get that working; see the sketch below.
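
Roughly like this, as a sketch; the device alias and VG name below are placeholders, so use whatever multipath -ll actually reports on your side:
Code:
pvcreate /dev/mapper/md36xxi             # initialize the multipath device as an LVM PV
vgcreate vg_md36xxi /dev/mapper/md36xxi  # create the volume group that PVE will use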

good luck

PS: you should set your iSCSI storage to content=none; see the sketch below.
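
That can be done in the GUI (Datacenter > Storage > Edit) or, as far as I know, from the CLI; the storage ID below is a placeholder for whatever you called the iSCSI entry:
Code:
pvesm set <iscsi-storage-id> --content none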


 
I'm not well versed enough to understand some of what you are telling me. Can you please clarify what you mean when you say, "Also note that the LVM wizard is looking for an LVM volume group (VG) disk. You don't pick iSCSI as storage there. You need to manually create a PV and VG on top of the mpath device"?

That said, I'm not sure if the following is relevant, but I thought I would add these details as I have made some progress since Friday. On my SAN, I have created 2 LUNs: 1) VM storage (and snapshots, if I can), 2) file storage space for security event logs. It looks like this in the Dell SAN Manager interface:
[screenshot: LUN configuration in Dell SAN Manager]
One thing I did not complete before I posted my question above was the host mappings. I have completed this as shown below (you can see I have detected and added 2 of the 4 hosts that will eventually be there; I will configure hosts 3 and 4 later):
[screenshot: host mappings in Dell SAN Manager]

Now, when I navigate to Storage > Add > LVM on host1, I can see all 4 interfaces of my SAN in the base storage field, like so:
[screenshot: base storage field listing all 4 SAN interfaces]
I select the first one, and in the volume group field I see 2 entries for LUN 1 (the LUN for file storage). I do not see the 10 TB LUN for VM storage, i.e., LUN 0:
[screenshot: volume group field showing two entries for LUN 1]
 
Code:
iscsiadm -m node --targetname "iqn.1984-05.com.dell:powervault.md3600i.6c81f66000c459d70000000062620821" --portal "172.16.0.10:3260" --login
I understand that it is an IQN, not a WWN. But that command works; using the WWID does not (a few more examples below).

I can see LUN 1 in lsblk listed 4 times (sdb, sdc, sdd, sdf). I also see LUN 0 listed once (sde); I'm not sure why.
Code:
# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0 136.1G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0     1G  0 part
└─sda3               8:3    0 135.1G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0  43.8G  0 lvm  /
  ├─pve-data_tmeta 253:2    0     1G  0 lvm
  │ └─pve-data     253:4    0  65.3G  0 lvm
  └─pve-data_tdata 253:3    0  65.3G  0 lvm
    └─pve-data     253:4    0  65.3G  0 lvm
sdb                  8:16   0   6.3T  0 disk
sdc                  8:32   0   6.3T  0 disk
sdd                  8:48   0   6.3T  0 disk
sde                  8:64   0    10T  0 disk
sdf                  8:80   0   6.3T  0 disk
sr0                 11:0    1  1024M  0 rom

I'm guessing there are 4 disks (sdb, sdc, sdd, sdf) because there are 4 NIC interfaces on the SAN? Not sure why there wouldn't also be 4 disks for LUN 0 (the 10 TB drive) as well.

Regarding the lsscsi command:
Code:
# lsscsi
-bash: lsscsi: command not found

Tried logging in as you asked:
Code:
# iscsiadm -m session --login
iscsiadm: session mode: option '-l' is not allowed/supported

Googling I found the following command:
Code:
iscsiadm -m node --targetname "iqn.1984-05.com.dell:powervault.md3600i.6c81f66000c459d70000000062620821" --portal "172.16.0.10:3260" --login
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: Could not log into all portals

So I logged out:
Code:
# iscsiadm -m node --targetname "iqn.1984-05.com.dell:powervault.md3600i.6c81f66000c459d70000000062620821" --portal "172.16.0.10:3260" -u
Logging out of session [sid: 5, target: iqn.1984-05.com.dell:powervault.md3600i.6c81f66000c459d70000000062620821, portal: 172.16.0.10,3260]
Logout of [sid: 5, target: iqn.1984-05.com.dell:powervault.md3600i.6c81f66000c459d70000000062620821, portal: 172.16.0.10,3260] successful.

And logged back in:
Code:
# iscsiadm -m node --targetname "iqn.1984-05.com.dell:powervault.md3600i.6c81f66000c459d70000000062620821" --portal "172.16.0.10:3260" --login
Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3600i.6c81f66000c459d70000000062620821, portal: 172.16.0.10,3260]
Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3600i.6c81f66000c459d70000000062620821, portal: 172.16.0.10,3260] successful.

Thank you for the good advice on using multiple subnets for the iSCSI interfaces; I'll work on that after I get this working. For now, multipath still doesn't look like it's working, as the command below doesn't return anything, and I still have the issues above. Again, I really don't know much about these protocols (iSCSI with MPIO).

Code:
root@COBRA-S-PM01:/etc# multipath -ll
root@COBRA-S-PM01:/etc#
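
If it helps, I can also dump the WWID the kernel reports for each SAN path and compare it against my blacklist_exceptions entry; a sketch using the udev scsi_id helper (which I believe lives at this path on Debian/PVE):
Code:
for d in /dev/sd[b-f]; do
    printf '%s: ' "$d"
    /lib/udev/scsi_id -g -u -d "$d"   # the WWID as multipath sees it
done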
 

Some additional info if it helps:

wwids file:
Code:
root@COBRA-S-PM01:/etc# cat /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/36C81F66000C459D70000000062620821/

/etc/modules:
Code:
root@COBRA-S-PM01:/etc# cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.
scsi_dh_rdac
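
As I understand it, the /etc/modules entry only takes effect at boot; to load the handler right away and confirm it is present:
Code:
modprobe scsi_dh_rdac
lsmod | grep scsi_dh_rdac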

I noticed from my last comment that I had missed installing lsscsi. I did that, and the results you were asking about above are:
Code:
root@COBRA-S-PM01:/etc# lsscsi
[0:0:0:0]    cd/dvd  TEAC     DVD-ROM DV-28SW  R.2A  /dev/sr0
[2:0:32:0]   enclosu DP       BACKPLANE        1.07  -
[2:2:0:0]    disk    DELL     PERC 6/i         1.22  /dev/sda
[3:0:0:0]    disk    DELL     MD36xxi          0820  -
[3:0:0:1]    disk    DELL     MD36xxi          0820  /dev/sdb
[3:0:0:31]   disk    DELL     Universal Xport  0820  -
[4:0:0:0]    disk    DELL     MD36xxi          0820  -
[4:0:0:1]    disk    DELL     MD36xxi          0820  /dev/sdc
[4:0:0:31]   disk    DELL     Universal Xport  0820  -
[5:0:0:0]    disk    DELL     MD36xxi          0820  /dev/sde
[5:0:0:1]    disk    DELL     MD36xxi          0820  /dev/sdf
[5:0:0:31]   disk    DELL     Universal Xport  0820  -
[6:0:0:0]    disk    DELL     MD36xxi          0820  -
[6:0:0:1]    disk    DELL     MD36xxi          0820  /dev/sdd
[6:0:0:31]   disk    DELL     Universal Xport  0820  -
 
More progress; maybe it's working now? I found this post just now. I had added the WWID that was presented in the SAN management interface, not the WWIDs listed by the lsscsi --scsi_id command! I have now added both IDs and restarted the multipathd service. I now get:

Code:
root@COBRA-S-PM01:/etc# multipath -ll
md36xxi2 (36c81f66000c459d700009a1865580be5) dm-5 DELL,MD36xxi
size=6.3T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=14 status=active
| |- 3:0:0:1 sdb 8:16 active ready running
| `- 6:0:0:1 sdd 8:48 active ready running
`-+- policy='round-robin 0' prio=9 status=enabled
  |- 4:0:0:1 sdc 8:32 active ready running
  `- 5:0:0:1 sdf 8:80 active ready running

I also see the following results:
Code:
root@COBRA-S-PM01:/etc# lsscsi
[0:0:0:0]    cd/dvd  TEAC     DVD-ROM DV-28SW  R.2A  /dev/sr0
[2:0:32:0]   enclosu DP       BACKPLANE        1.07  -
[2:2:0:0]    disk    DELL     PERC 6/i         1.22  /dev/sda
[3:0:0:0]    disk    DELL     MD36xxi          0820  -
[3:0:0:1]    disk    DELL     MD36xxi          0820  /dev/sdb
[3:0:0:31]   disk    DELL     Universal Xport  0820  -
[4:0:0:0]    disk    DELL     MD36xxi          0820  -
[4:0:0:1]    disk    DELL     MD36xxi          0820  /dev/sdc
[4:0:0:31]   disk    DELL     Universal Xport  0820  -
[5:0:0:0]    disk    DELL     MD36xxi          0820  /dev/sde
[5:0:0:1]    disk    DELL     MD36xxi          0820  /dev/sdf
[5:0:0:31]   disk    DELL     Universal Xport  0820  -
[6:0:0:0]    disk    DELL     MD36xxi          0820  -
[6:0:0:1]    disk    DELL     MD36xxi          0820  /dev/sdd
[6:0:0:31]   disk    DELL     Universal Xport  0820  -
root@COBRA-S-PM01:/etc# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                  8:0    0 136.1G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0     1G  0 part
└─sda3               8:3    0 135.1G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm   [SWAP]
  ├─pve-root       253:1    0  43.8G  0 lvm   /
  ├─pve-data_tmeta 253:2    0     1G  0 lvm
  │ └─pve-data     253:4    0  65.3G  0 lvm
  └─pve-data_tdata 253:3    0  65.3G  0 lvm
    └─pve-data     253:4    0  65.3G  0 lvm
sdb                  8:16   0   6.3T  0 disk
└─md36xxi2         253:5    0   6.3T  0 mpath
sdc                  8:32   0   6.3T  0 disk
└─md36xxi2         253:5    0   6.3T  0 mpath
sdd                  8:48   0   6.3T  0 disk
└─md36xxi2         253:5    0   6.3T  0 mpath
sde                  8:64   0    10T  0 disk
sdf                  8:80   0   6.3T  0 disk
└─md36xxi2         253:5    0   6.3T  0 mpath
sr0                 11:0    1  1024M  0 rom

Testing everything... I will post back anything else I find.
 
So I now see LUN 0 and LUN 1 when I go to add LVM. My question is: do I need to add each one 4 times, selecting a different base storage each time?
[screenshots: Add LVM dialog showing LUN 0 and LUN 1]
 

I've secretly wanted to run iSCSI, but what are you using for the clustered filesystem?

I don't think Proxmox has anything out of the box similar to VMFS if you plan on having more than 1 host touching the same LUNs.

So I'm curious how this will work using LVM on top of the existing LUNs... Since local LVs/VGs won't play nice.
 
Honestly, I am not sure. This is new to me and I am learning. I assumed LVM would work as it did in VMware land.
 
OK, so maybe it is set up, maybe not? The results of lsscsi --scsi_id showed 2 WWIDs, but it looks like the one for sda is a local disk. I do not plan to use local storage for VM storage, only SAN. Should I be using only that one WWID in my wwids file and multipath.conf (highlighted in yellow)?

[screenshot: lsscsi --scsi_id output with the SAN WWID highlighted]
 
I'm guessing there are 4 disks (sdb, sdc, sdd, sdf) because there are 4 NIC interfaces on the SAN?
Yes, that is correct. You have 4 paths to the disk, and you need to deploy multipath to take advantage of them. If you don't, you will run into issues later in the setup.
Not sure why there wouldn't also be 4 disks for LUN 0 (the 10 TB drive) as well.
Likely because there is a difference in the configuration on the storage side.
Regarding the lsscsi command:
"apt install lsscsi", but now that you see the disks in the "lsblk" output, you don't need lsscsi.
Tried logging in as you asked:
You don't need to do it; PVE will log in for you. Now that you have fixed your backend storage LUN assignment, you should not need manual iscsiadm manipulation.
So I now see LUN 0 and LUN 1 when I go to add LVM. My question is: do I need to add each one 4 times, selecting a different base storage each time?
No, you need to create the LVM structure once, on one node only.
Review https://pve.proxmox.com/wiki/Logical_Volume_Manager_(LVM) and use /dev/dm-5 as your target.
Now that the devices are owned by multipath, you should never access them directly, i.e., via /dev/sdc etc.
As I said, mark your iSCSI storage as "content none", unless you plan to use the direct-LUN mechanism for iSCSI, which it didn't sound like you wanted.
Create a volume group on top of the dm-5 device, then add that VG as thick LVM in PVE; see the sketch below. Keep in mind that you will not have support for snapshots or dynamic LUN management, and your performance domain will be scoped to a single LUN on the storage.
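
As a sketch (the VG and storage names are placeholders; /dev/mapper/<alias> is the same device as /dev/dm-5, just with a stable name):
Code:
vgcreate vg_md36xxi /dev/mapper/md36xxi               # VG on top of the multipath device
pvesm add lvm san-lvm --vgname vg_md36xxi --shared 1  # register it as shared thick LVM in PVE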


 
So I'm curious how this will work using LVM on top of the existing LUNs... Since local LVs/VGs won't play nice.
PVE developers built an orchestration layer that makes things play nice; although these pieces were never designed for it, it works. This is the reason only thick LVM is supported with shared SAN storage. Unlike the thin case, there is no dynamic metadata involved, so close coordination of cache updates and global locks on config changes make it possible to use LVM in the shared-storage case. Of course, with many limitations.



 
I understand, thank you. Rechecking the configuration, I do have some follow-up questions before I proceed as you suggested. Following my post at 11:07, I removed the WWID of the local disk (sda) from multipath.conf and /etc/multipath/wwids and restarted the multipathd service.
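
As an aside, I believe multipath can maintain the wwids file itself instead of hand-editing it, roughly like this; I'm not certain of the exact semantics, so treat it as a sketch:
Code:
multipath -w /dev/sda    # drop the WWID of the given device from /etc/multipath/wwids
multipath -a /dev/sde    # add the WWID of the given device
systemctl restart multipathd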

lsscsi --scsi_id shows:
Code:
root@COBRA-S-PM01:/etc# lsscsi --scsi_id
[0:0:0:0]    cd/dvd  TEAC     DVD-ROM DV-28SW  R.2A  /dev/sr0   SDELL_PERC_6
[2:0:32:0]   enclosu DP       BACKPLANE        1.07  -          -
[2:2:0:0]    disk    DELL     PERC 6/i         1.22  /dev/sda   36842b2b009c642002cde02d71f25c47b
[3:0:0:0]    disk    DELL     MD36xxi          0820  -          -
[3:0:0:1]    disk    DELL     MD36xxi          0820  /dev/sdb   SDELL_PERC_6
[3:0:0:31]   disk    DELL     Universal Xport  0820  -          -
[4:0:0:0]    disk    DELL     MD36xxi          0820  -          -
[4:0:0:1]    disk    DELL     MD36xxi          0820  /dev/sdc   SDELL_PERC_6
[4:0:0:31]   disk    DELL     Universal Xport  0820  -          -
[5:0:0:0]    disk    DELL     MD36xxi          0820  /dev/sde   36c81f66000c459aa00000b286557e269
[5:0:0:1]    disk    DELL     MD36xxi          0820  /dev/sdf   SDELL_MD36xxi_45M000H
[5:0:0:31]   disk    DELL     Universal Xport  0820  -          -
[6:0:0:0]    disk    DELL     MD36xxi          0820  -          -
[6:0:0:1]    disk    DELL     MD36xxi          0820  /dev/sdd   SDELL_MD36xxi_45M000J
[6:0:0:31]   disk    DELL     Universal Xport  0820  -          -

I identify the WWID above as "36c81f66000c459aa00000b286557e269". I added this to my multipath.conf and wwids file. I did not add the other WWID, as that is the local drive. After restarting multipathd, I see the following.

multipath -ll shows:
Code:
root@COBRA-S-PM01:/etc# multipath -ll
md36xxi (36c81f66000c459d700009a1865580be5) dm-5 DELL,MD36xxi
size=6.3T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=14 status=active
| |- 3:0:0:1 sdb 8:16 active ready running
| `- 6:0:0:1 sdd 8:48 active ready running
`-+- policy='round-robin 0' prio=9 status=enabled
  |- 4:0:0:1 sdc 8:32 active ready running
  `- 5:0:0:1 sdf 8:80 active ready running

And lsblk shows:
Code:
root@COBRA-S-PM01:/etc# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                  8:0    0 136.1G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0     1G  0 part
└─sda3               8:3    0 135.1G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm   [SWAP]
  ├─pve-root       253:1    0  43.8G  0 lvm   /
  ├─pve-data_tmeta 253:2    0     1G  0 lvm
  │ └─pve-data     253:4    0  65.3G  0 lvm
  └─pve-data_tdata 253:3    0  65.3G  0 lvm
    └─pve-data     253:4    0  65.3G  0 lvm
sdb                  8:16   0   6.3T  0 disk
└─md36xxi          253:5    0   6.3T  0 mpath
sdc                  8:32   0   6.3T  0 disk
└─md36xxi          253:5    0   6.3T  0 mpath
sdd                  8:48   0   6.3T  0 disk
└─md36xxi          253:5    0   6.3T  0 mpath
sde                  8:64   0    10T  0 disk
sdf                  8:80   0   6.3T  0 disk
└─md36xxi          253:5    0   6.3T  0 mpath
sr0                 11:0    1  1024M  0 rom

I'm concerned about sde above. That is one of my 2 LUNs on the SAN; the other LUN is represented as sdb, sdc, sdd, and sdf. Why do I not see sde handled like the rest? Is this expected? You mentioned in your previous post, "Likely because there is a difference in the configuration on the storage side." Should this be cause for concern?
 
I'm concerned about sde above. That is one of my 2 LUNs on the SAN; the other LUN is represented as sdb, sdc, sdd, sdf. Why do I not see sde as I do the rest?
My guess is that something in your Dell storage is not configured the same way for both LUNs. You can ping Dell support and inquire with them for faster resolution.


 
I found my mistake: I had missed adding the sde WWID to the whitelist (blacklist_exceptions) in multipath.conf:

Code:
root@COBRA-S-PM01:/etc# cat /etc/multipath.conf
defaults {
        find_multipaths         yes
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        rr_min_io               100
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}

blacklist {
        wwid .*
}

blacklist_exceptions {
        wwid "36c81f66000c459aa00000b286557e269"
        wwid "36c81f66000c459d700009a1865580be5"
}

devices {
        device {
                vendor                  "DELL"
                product                 "MD36xxi"
                path_grouping_policy    group_by_prio
                prio                    rdac
                path_checker            rdac
                path_selector           "round-robin 0"
                hardware_handler        "1 rdac"
                failback                immediate
                features                "2 pg_init_retries 50"
                no_path_retry           30
                rr_min_io               100
        }
}

multipaths {
        multipath {
                wwid "36c81f66000c459aa00000b286557e269"
                alias md36xxi-vmstorage
        }
        multipath {
                wwid "36c81f66000c459d700009a1865580be5"
                alias md36xxi-filestorage
        }
}

I restarted the multipathd service and it now looks OK:

Code:
root@COBRA-S-PM01:/etc# lsblk
NAME                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                     8:0    0 136.1G  0 disk
├─sda1                  8:1    0  1007K  0 part
├─sda2                  8:2    0     1G  0 part
└─sda3                  8:3    0 135.1G  0 part
  ├─pve-swap          253:0    0     8G  0 lvm   [SWAP]
  ├─pve-root          253:1    0  43.8G  0 lvm   /
  ├─pve-data_tmeta    253:2    0     1G  0 lvm
  │ └─pve-data        253:4    0  65.3G  0 lvm
  └─pve-data_tdata    253:3    0  65.3G  0 lvm
    └─pve-data        253:4    0  65.3G  0 lvm
sdb                     8:16   0   6.3T  0 disk
└─md36xxi-filestorage 253:5    0   6.3T  0 mpath
sdc                     8:32   0   6.3T  0 disk
└─md36xxi-filestorage 253:5    0   6.3T  0 mpath
sdd                     8:48   0   6.3T  0 disk
└─md36xxi-filestorage 253:5    0   6.3T  0 mpath
sde                     8:64   0    10T  0 disk
└─md36xxi-vmstorage   253:6    0    10T  0 mpath
sdf                     8:80   0   6.3T  0 disk
└─md36xxi-filestorage 253:5    0   6.3T  0 mpath
sr0                    11:0    1  1024M  0 rom

=====================================================

root@COBRA-S-PM01:/etc# multipath -ll
md36xxi-filestorage (36c81f66000c459d700009a1865580be5) dm-5 DELL,MD36xxi
size=6.3T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=14 status=active
| |- 3:0:0:1 sdb 8:16 active ready running
| `- 6:0:0:1 sdd 8:48 active ready running
`-+- policy='round-robin 0' prio=9 status=enabled
  |- 4:0:0:1 sdc 8:32 active ready running
  `- 5:0:0:1 sdf 8:80 active ready running
md36xxi-vmstorage (36c81f66000c459aa00000b286557e269) dm-6 DELL,MD36xxi
size=10T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
`-+- policy='round-robin 0' prio=14 status=active
  `- 5:0:0:0 sde 8:64 active ready running

Does this look like what I should expect?
 
I created a new post to ask questions specifically about adding LVM; I feel like the original question of this thread is wrapping up. I would greatly appreciate it if you chose to provide input there, as you gave some hints above about my next steps. Thank you!
 
It looks to me like you have multipathing enabled for one iSCSI target connection, and non-multipathed access enabled using the Proxmox iSCSI storage.

What I would suggest is as follows:

1. Remove all pvesm entries for your iSCSI targets. It's not like there is any integration available for MD storage, so there is no benefit to using the "built-in" iSCSI initiator anyway.
2. Follow the instructions here to set up multipathd: https://pve.proxmox.com/wiki/ISCSI_Multipath. This is probably not necessary, since it looks like you're seeing your LUN, but it's a good place to do a sanity check.
3. vgcreate [vgname] /dev/mapper/mpathxx (or whatever your multipath LUN appears as).
4. pvesm add lvm [pve storename] --vgname [vgname] --shared 1

A couple of notes:
1. You will achieve the best performance if you make one target per interface. You have 4 interfaces, so having 4 LUNs would be ideal. If you really want to squeeze out all possible performance, have 8 (4 ports per controller). This applies mostly to the vmstore LUNs.
2. I assume your filestorage LUN is RAID 6 and your vmstorage LUNs are RAID 10. If not, make it so.
3. As I'm sure you already know, this method does not allow for LVM-thin (and consequently, no snapshots). Size your vmdisk deployments accordingly.

I'll check your other post to see if there is anything I didn't cover.
 
1. Remove all pvesm entries for your iSCSI targets. It's not like there is any integration available for MD storage, so there is no benefit to using the "built-in" iSCSI initiator anyway.
If the OP were to do that, he would need to manually attach all iSCSI sessions via iscsiadm. There is not much benefit in removing the entries; however, I would mark them as content=none. If anything, they serve as a method for establishing iSCSI sessions that is a bit prettier than iscsiadm.


 
