iSCSI Multipath

fastboar154

New Member
Apr 9, 2025
Hi all,
I'm trying to present iSCSI storage to my Proxmox cluster to set up multipath, but even though both controllers' iSCSI portals are mapped:

Code:
iscsiadm -m session
tcp: [1] 192.168.0.100:3260,0 iqn.2004-08.com.qsantechnology:p300q-d212-00090ed58:dev0.ctr1 (non-flash)
tcp: [2] 192.168.0.200:3260,0 iqn.2004-08.com.qsantechnology:p300q-d212-00090ed58:dev0.ctr2 (non-flash)

all I get is the local disks:

Code:
fdisk -l
Disk /dev/sda: 446.62 GiB, 479554568192 bytes, 936630016 sectors
Disk model: PERC H730P Adp
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F7450559-4A48-4987-8682-481041407E12

Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 2099199 2097152 1G EFI System
/dev/sda3 2099200 936629982 934530783 445.6G Linux LVM

Partition 1 does not start on physical sector boundary.


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

I also tried gdisk:
Code:
gdisk -l
GPT fdisk (gdisk) version 1.0.9

Problem opening -l for reading! Error is 2.
The specified file does not exist!

1745581408145.png

I'm trying to follow https://pve.proxmox.com/wiki/ISCSI_Multipath but the drive does not appear.
The volume is RAID-6, 512 bytes, 64 bit.

Any suggestions?
 
Hi @bbgeek17,
happy to find you here again :-)
Please see the screenshot below:
1745590995440.png

There is not much to set up on the QSAN. To be sure, I deleted the volume and recreated it using 32 bit instead of 64 bit, again with 512 bytes.
1745591180640.png
1745591199438.png

CTRL1
1745591266561.png
CTRL2
1745591361187.png

Just to be sure, I added the iSCSI portal on a Windows machine and the volume is available there.

1745592110603.png
 

The Linux kernel of the PVE system did not detect the LUN, as evidenced by the "lsblk" output.

I am not familiar with what the QSAN configuration should look like. The output you supplied shows the active sessions; we knew those were working from your prior output. There may be more work needed to map the LUN to a particular initiator, but I am not sure.
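If it helps, open-iscsi can show whether each session is actually exporting a LUN to this host; this is a standard check, nothing QSAN-specific:

Code:
# print per-session details; an empty "Attached SCSI devices" section means
# the target is logged in but is not presenting any LUN to this initiator
iscsiadm -m session -P 3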

You may want to reach out to QSAN support or their forum, as they are more familiar with troubleshooting the storage device. The iSCSI protocol and its configuration in PVE are very straightforward; there are no hidden knobs to "unhide" the LUN.



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I was thinking the same; I waited to reply so I could check the LUN again after the erasing task.
Do you think I need a rescan with this command:
Code:
for host in /sys/class/scsi_host/host*/scan; do echo "- - -" > $host; done
or will it appear by itself?

I've also ticked "Use LUNs directly".
1745603441808.png

It was not selected before; I found a video where it was left unselected in order to do multipath.

I'll keep you updated
 
Do you think I need a rescan with this command
You can do it at any time; it won't hurt. It may not help either.
Until you see the new disk in "lsblk" or "lsscsi" output, or see an event in "dmesg", you can't advance to the next steps.
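For completeness, the rescan and the checks above look roughly like this (lsscsi comes from a separate package):

Code:
iscsiadm -m session --rescan     # rescan every logged-in iSCSI session
lsblk                            # a new sdX disk should appear here
lsscsi                           # optional: apt install lsscsi
dmesg | tail -n 20               # look for "Attached SCSI disk" messages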

I've also ticked "Use LUNs directly"
Direct LUNs are not supported with multipath. A direct LUN is a full LUN pass-through to a VM. There are reasons to do that; I don't know whether you have them.
If you want to use multipath and LVM, do not select that option.
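For reference, with that option unchecked the iSCSI entry in /etc/pve/storage.cfg ends up with "content none"; a sketch of what it could look like, using the portal and target from your session output (the storage ID "qsan-lun0" is just an example name):

Code:
iscsi: qsan-lun0
        portal 192.168.0.100
        target iqn.2004-08.com.qsantechnology:p300q-d212-00090ed58:dev0.ctr1
        content none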

Once you have your disks presented properly to the host, you can follow this guide for the rest of the configuration:
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I created /etc/multipath.conf with the touch command (it was not present), but multipathd was not falling back to its default configuration, so I added my own config for the legacy QSAN:
Code:
defaults {
    user_friendly_names     yes
    polling_interval        10
    max_fds                 8192
    find_multipaths         yes
    queue_without_daemon    no
}

blacklist {
    devnode "^sd[a-z]$"
    devnode "^nvme.*"
}

# Assign an alias to the specific WWID
multipaths {
    multipath {
        wwid            362cea7f0bd4b2a0028df2df8113f297d
        alias           fatai-multipath
        path_grouping_policy    failover
        path_selector         "round-robin 0"
        rr_min_io             100
    }
}

devices {
    device {
        vendor "QSAN"
        product ".*"
        path_grouping_policy  "failover"
        path_selector         "round-robin 0"
        path_checker          "tur"
        features              "0"
        prio                  "const"
        failback              "manual"
        no_path_retry         5
        fast_io_fail_tmo      5
        dev_loss_tmo          30
    }
}

but I don't know if I missed something.
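After editing /etc/multipath.conf the daemon has to re-read it; assuming multipath-tools is installed, something like this should be enough to reload and check:

Code:
systemctl reload multipathd      # or: multipath -r
multipath -ll                    # the QSAN LUN should show up once, with two paths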
 
Just adding information for future users in case it helps: I added
Code:
blacklist_exceptions {
    devnode "^(sdb|sdc)$"
}

to allow multipath on sdb and sdc, which were otherwise caught by this blacklist:

Code:
blacklist {
        devnode "^sd[a-z]$"
        devnode "^nvme.*"
        devnode "!^(sd[a-z]|dasd[a-z]|nvme[0-9])"
        ...
}
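To check that the exception actually took effect, the running configuration and the resulting maps can be inspected; a quick sketch, assuming multipathd is running:

Code:
multipathd show config | grep -A 5 blacklist_exceptions
multipath -ll                    # sdb and sdc should now appear as paths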
 


You can do it a much simpler way:

Code:
blacklist {
    wwid ".*"
}

blacklist_exceptions {
    wwid "<wwid-disk1>"
    wwid "<wwid-disk2>"
    ...
}

multipaths {
        multipath {
                wwid "<wwid-disk1>"
                alias mpath-QNAP_disk-1
        }
        multipath {
                wwid "<wwid-disk2>"
                alias mpath-QNAP_disk-2
        }
...
}

You need to know the disks' WWIDs from the SAN.
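If the SAN interface does not show it, the WWID can also be read on the host once the disk is visible; for example (the /lib/udev path is where Debian/PVE ships scsi_id, and sdb is just one of the iSCSI disks):

Code:
/lib/udev/scsi_id -g -u /dev/sdb     # prints the WWID, e.g. 362cea7f0bd4b2a0...
cat /etc/multipath/wwids             # WWIDs multipathd has already recorded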
 