iSCSI Huawei OceanStor Dorado 3000: LVM and Multipath

zeuxprox

Hi,

I have a strange issue setting up an iSCSI connection between PVE 8.3.2 and a Huawei OceanStor Dorado 3000 V6, and then configuring multipath. Because this is a test environment, I created only one LUN.

The PVE host has 4x 10Gb NICs (Intel), and I set up two storage networks on two different VLANs:
  • 192.168.51.50
  • 192.168.52.50
The Dorado 3000 has two controllers:
  • Controller A: 192.168.51.60
  • Controller B: 192.168.52.60
From PVE I can ping both Dorado controllers (A and B).
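For context, the two storage VLAN interfaces on the PVE side look roughly like this in /etc/network/interfaces (illustrative only; eno1/eno2 are placeholder NIC names and your actual VLAN layout may differ):
Code:
auto eno1.51
iface eno1.51 inet static
        address 192.168.51.50/24

auto eno2.52
iface eno2.52 inet static
        address 192.168.52.50/24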

I installed open-iscsi and lsscsi.
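For reference, both tools come from standard Debian packages, so on a stock PVE 8 install they can be added with apt (multipath-tools is included here as well, since it is needed later for multipathing):
Code:
apt update
apt install open-iscsi lsscsi multipath-tools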

From the GUI (Datacenter --> Storage --> Add --> iSCSI) I successfully added the two controllers, as shown:
[Screenshot: Datacenter-Storage.png]

The problem is that at this point PVE doesn't create a device node under /dev/sdX.

This is the content of /etc/pve/storage.cfg :
Code:
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

iscsi: dorado-3000-a
        portal 192.168.51.60
        target iqn.2006-08.com.huawei:oceanstor:2100c89f1aac8a39::21f01:192.168.51.60
        content none

iscsi: dorado-3000-b
        portal 192.168.52.60
        target iqn.2006-08.com.huawei:oceanstor:2100c89f1aac8a39::1021f01:192.168.52.60
        content none

This is the output of iscsiadm -m session :
Code:
tcp: [1] 192.168.51.60:3260,7938 iqn.2006-08.com.huawei:oceanstor:2100c89f1aac8a39::21f01:192.168.51.60 (non-flash)
tcp: [2] 192.168.52.60:3260,7948 iqn.2006-08.com.huawei:oceanstor:2100c89f1aac8a39::1021f01:192.168.52.60 (non-flash)


This is the output of lsblk :
Code:
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 931.5G  0 disk
├─sda1   8:1    0  1007K  0 part
├─sda2   8:2    0     1G  0 part
└─sda3   8:3    0 930.5G  0 part
(Where are the iSCSI disks?)


This is the output of lsscsi :
Code:
[0:0:0:0]    disk    ATA      CT1000MX500SSD1  046   /dev/sda
[6:0:0:0]    disk    HUAWEI   XSG1             6000  -       
[7:0:0:0]    disk    HUAWEI   XSG1             6000  -

This is the output of iscsiadm -m session -P3 :
Code:
iSCSI Transport Class version 2.0-870
version 2.1.8
Target: iqn.2006-08.com.huawei:oceanstor:2100c89f1aac8a39::21f01:192.168.51.60 (non-flash)
    Current Portal: 192.168.51.60:3260,7938
    Persistent Portal: 192.168.51.60:3260,7938
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:c69dbccc6b64
        Iface IPaddress: 192.168.51.50
        Iface HWaddress: default
        Iface Netdev: default
        SID: 1
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 15
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username: <empty>
        password: ********
        username_in: <empty>
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 262144
        FirstBurstLength: 73728
        MaxBurstLength: 262144
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 6    State: running
        scsi6 Channel 00 Id 0 Lun: 0
Target: iqn.2006-08.com.huawei:oceanstor:2100c89f1aac8a39::1021f01:192.168.52.60 (non-flash)
    Current Portal: 192.168.52.60:3260,7948
    Persistent Portal: 192.168.52.60:3260,7948
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:c69dbccc6b64
        Iface IPaddress: 192.168.52.50
        Iface HWaddress: default
        Iface Netdev: default
        SID: 2
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 15
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username: <empty>
        password: ********
        username_in: <empty>
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 262144
        FirstBurstLength: 73728
        MaxBurstLength: 262144
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 7    State: running
        scsi7 Channel 00 Id 0 Lun: 0

As you can see, there is no /dev/sdX mapping for the Huawei LUNs.

And in the System log:
Code:
Jan 02 14:25:43 prx1-tex systemd[1]: Starting iscsid.service - iSCSI initiator daemon (iscsid)...
Jan 02 14:25:43 prx1-tex iscsid[8186]: iSCSI logger with pid=8187 started!
Jan 02 14:25:43 prx1-tex systemd[1]: Started iscsid.service - iSCSI initiator daemon (iscsid).
Jan 02 14:25:43 prx1-tex kernel: Loading iSCSI transport class v2.0-870.
Jan 02 14:25:43 prx1-tex kernel: iscsi: registered transport (tcp)
Jan 02 14:25:44 prx1-tex iscsid[8187]: iSCSI daemon with pid=8188 started!
Jan 02 14:25:58 prx1-tex kernel: scsi host6: iSCSI Initiator over TCP/IP
Jan 02 14:25:58 prx1-tex kernel: scsi 6:0:0:0: Direct-Access     HUAWEI   XSG1             6000 PQ: 1 ANSI: 6
Jan 02 14:25:58 prx1-tex kernel: scsi 6:0:0:0: Attached scsi generic sg1 type 0
Jan 02 14:25:59 prx1-tex iscsid[8187]: Could not set session1 priority. READ/WRITE throughout and latency could be affected.
Jan 02 14:25:59 prx1-tex iscsid[8187]: Connection1:0 to [target: iqn.2006-08.com.huawei:oceanstor:2100c89f1aac8a39::21f01:192.168.51.60, portal: 192.168.51.60,3260] through [iface: default] is operational now
Jan 02 14:26:51 prx1-tex kernel: scsi host7: iSCSI Initiator over TCP/IP
Jan 02 14:26:51 prx1-tex kernel: scsi 7:0:0:0: Direct-Access     HUAWEI   XSG1             6000 PQ: 1 ANSI: 6
Jan 02 14:26:51 prx1-tex kernel: scsi 7:0:0:0: Attached scsi generic sg2 type 0
Jan 02 14:26:52 prx1-tex iscsid[8187]: Could not set session2 priority. READ/WRITE throughout and latency could be affected.
Jan 02 14:26:52 prx1-tex iscsid[8187]: Connection2:0 to [target: iqn.2006-08.com.huawei:oceanstor:2100c89f1aac8a39::1021f01:192.168.52.60, portal: 192.168.52.60,3260] through [iface: default] is operational now


Without a device mapping I can't configure LVM, as you can see:

[Screenshot: Datacenter-Storage-LVM.png]

And consequently I can't configure multipath either.

Can someone help me, please?

Thank you
 
Hi @zeuxprox ,
You should not add two iSCSI storage pools pointing to the same device. Configure just one and let PVE discover both paths. The next step is to configure multipath: https://pve.proxmox.com/wiki/Multipath
When you have iSCSI and multipath properly configured, you can then add LVM.

I'd recommend removing the pools, ensuring that the iscsiadm database is cleaned up, and then adding one of them back. iSCSI discovery should report both SAN IPs.
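A rough sketch of the cleanup on the PVE node could look like this (portal IPs taken from your storage.cfg above; adapt as needed):
Code:
# log out of all active iSCSI sessions
iscsiadm -m node --logoutall=all
# remove the stale node records
iscsiadm -m node -o delete
# remove the old discovery records for both portals
iscsiadm -m discoverydb -t sendtargets -p 192.168.51.60:3260 -o delete
iscsiadm -m discoverydb -t sendtargets -p 192.168.52.60:3260 -o delete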

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi,

I removed the pools (Datacenter --> Storage), rebooted PVE, and then, again from the GUI, added the first controller (192.168.51.60) back, but without luck. Now I have only one iSCSI entry in Storage:
[Screenshot: Datacenter-Storage_2.png]

The output of iscsiadm -m session is:
Code:
tcp: [1] 192.168.51.60:3260,7938 iqn.2006-08.com.huawei:oceanstor:2100c89f1aac8a39::21f01:192.168.51.60 (non-flash)

The output of lsblk:
Code:
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 931.5G  0 disk
├─sda1   8:1    0  1007K  0 part
├─sda2   8:2    0     1G  0 part
└─sda3   8:3    0 930.5G  0 part

The output of lsscsi:
Code:
[0:0:0:0]    disk    ATA      CT1000MX500SSD1  046   /dev/sda
[6:0:0:0]    disk    HUAWEI   XSG1             6000  -

The output of iscsiadm -m session -P3
Code:
iSCSI Transport Class version 2.0-870
version 2.1.8
Target: iqn.2006-08.com.huawei:oceanstor:2100c89f1aac8a39::21f01:192.168.51.60 (non-flash)
    Current Portal: 192.168.51.60:3260,7938
    Persistent Portal: 192.168.51.60:3260,7938
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:c69dbccc6b64
        Iface IPaddress: 192.168.51.50
        Iface HWaddress: default
        Iface Netdev: default
        SID: 1
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 15
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username: <empty>
        password: ********
        username_in: <empty>
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 262144
        FirstBurstLength: 73728
        MaxBurstLength: 262144
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 6    State: running
        scsi6 Channel 00 Id 0 Lun: 0

PVE still does not map the iSCSI LUN to /dev/sdX.

For your information I do not use CHAP authentication.

Thank you
 
Please keep in mind that I have no first-hand experience with that particular appliance.

That said, your output suggests that you likely did not assign the LUN to the target. The output appears to show only the controller entry.

PVE uses the standard Linux kernel/userland iSCSI software. Clearly you can log in and establish an iSCSI session; if the LUN were mapped, the kernel would show it.
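As a generic Linux check (not PVE-specific): once the LUN is actually mapped to the host on the array side, rescanning the existing sessions should make the disk appear without re-logging in:
Code:
# rescan all logged-in iSCSI sessions for new LUNs
iscsiadm -m session --rescan
# the LUN should now show up as /dev/sdX
lsscsi
lsblk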

Given that PVE is based on a Debian userland and an Ubuntu kernel, you can reach out to your storage vendor and ask for assistance with connecting a Linux host to the device.

Good luck


 
Hi,

what do you think about this discussion related to Kernel 6.8 and iSCSI?

https://serverfault.com/questions/1168100/what-could-prevent-iscsi-disk-to-mount-on-ubuntu

I also tried kernel 6.11.0-2-pve, but the problem remains.

Thank you
 
what do you think about this discussion related to Kernel 6.8 and iSCSI?
There is not much discussion there. It's a one-person report without any details or analysis.
The symptoms are also not the same. In that poster's case the SCSI disk did show up in the "good" case. You only have an "sg" device, which is a SCSI generic device, not an "sdX" SCSI disk.

Have you confirmed that your storage configuration is correct? Does your vendor have a guide on attaching an iSCSI device to a Linux host? If they do, have you tried following it instead of using the PVE storage pool configuration?
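For comparison, the plain-Linux procedure such a guide would typically describe looks roughly like this (portal IP taken from your earlier output):
Code:
# discover the targets advertised by one controller
iscsiadm -m discovery -t sendtargets -p 192.168.51.60:3260
# log in to all discovered targets
iscsiadm -m node --login
# check what the kernel now sees
lsscsi
lsblk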


 
We have the same Dorado, currently working with PVE on kernel 6.8.12-x (PVE 8.2.x). I can't test 8.3 right now, so if you have access to PVE 8.2 you could use it to verify the Dorado-side setup.
 
Hi,

It works! The problem was a configuration issue on the Dorado. Now I have a question about multipath...

From the GUI (Datacenter --> Storage --> Add --> iSCSI) I added the first controller (A) of the Dorado, setting the portal to 192.168.51.60, and the output of iscsiadm -m session is:
Code:
tcp: [1] 192.168.51.60:3260,7938 iqn.2006-08.com.huawei:oceanstor:2100c89f1aac8a39::21f01:192.168.51.60 (non-flash)

PVE is not able to discover the other path to the storage (IP 192.168.52.60). So the question is: do I also have to add the second controller (B) of the Dorado under Datacenter --> Storage --> Add --> iSCSI, with 192.168.52.60 as the portal?
If the answer is yes, when I add an LVM volume, what should I choose in the "Base Storage" and "Base Volume" fields: controller A (192.168.51.60) or controller B (192.168.52.60)?

Thank you
 
It works! The problem was a configuration issue on the Dorado.
Great to hear. iSCSI is an old and well-tested protocol in Linux; it's very unlikely that there would be issues with basic connectivity.

PVE is not able to discover the other path to the storage
PVE can only see what the SAN presents to it. If your device does not advertise both IPs in discovery, then I guess you can add a second iSCSI configuration entry manually.
Keep in mind that in this particular case you do not get much benefit from using PVE iSCSI storage pools. You may be better off configuring the connectivity via iscsiadm directly on each node.
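If you go the manual iscsiadm route, you would typically also mark the node records to log in automatically at boot, roughly:
Code:
# make all recorded targets log in automatically on boot
iscsiadm -m node -o update -n node.startup -v automatic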

Once you see your device via both paths (lsblk), the next step is to configure multipath: https://pve.proxmox.com/wiki/Multipath
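Following the wiki, a minimal multipath setup could look roughly like this (/dev/sdb is a placeholder for one of the new iSCSI disks; the Huawei-specific path settings usually come from multipath's built-in hardware table, so check multipath -t and your vendor docs before adding a custom device section):
Code:
apt install multipath-tools
# get the WWID of the iSCSI disk
/lib/udev/scsi_id -g -u -d /dev/sdb
# whitelist the WWID and reload the maps
multipath -a <wwid>
multipath -r
# verify that both paths are grouped under a single mpath device
multipath -ll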

Your LVM will be placed on top of the resulting mpath device. You should not use or address the individual sdX devices.
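In other words, the LVM layer is created once on the multipath device and then registered in PVE as a shared LVM storage. A rough sketch with placeholder names (mpatha, vg_dorado, dorado-lvm):
Code:
# create the PV/VG on the multipath device, never on an individual /dev/sdX
pvcreate /dev/mapper/mpatha
vgcreate vg_dorado /dev/mapper/mpatha
# register the VG as shared LVM storage in PVE
pvesm add lvm dorado-lvm --vgname vg_dorado --content images,rootdir --shared 1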

Good luck


 
