[SOLVED] 2 Nodes and ISCSI Storage

ASM-Network

New Member
Mar 31, 2026
Hello forum,

We have two PVE servers running in a cluster. Both have multiple network interfaces. The data is stored on a Fujitsu Storage DX100.

Server 1 is configured so that it's directly connected to the storage's controller port 1 via a network cable.

An iSCSI connection is established, and an LVM is created. This works without any problems.

Server 2 also has a direct connection to the storage, but to port 2.

Consequently, the iSCSI target is different, and we can't access the LVM.

What can we do to allow Server 2 to connect to the LVM?

Can we simply edit the storage.cfg file on the second server?

BR
Stefan
 
Hi @ASM-Network, welcome to the forum.

This is an unusual configuration. What forced you down this particular path?

Is Server 2 connected to Port 2 on Controller 1 or Controller 2?

No, you can't just edit storage.cfg on the second server. The storage.cfg is located on a shared/clustered filesystem - it's the same file as on Server 1.
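For illustration (nothing you need to change): /etc/pve is the cluster-wide pmxcfs mount, so /etc/pve/storage.cfg is literally the same file on every node. You can confirm that yourself, for example:

Code:
# run on both nodes - the checksums match because pmxcfs is shared cluster-wide
md5sum /etc/pve/storage.cfg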

Please take a look at this article https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/

In particular, run the commands described in the following section on each server and report back: https://kb.blockbridge.com/technote...nderstand-multipath-reporting-for-your-device
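For reference, the standard commands for checking multipath and iSCSI state are the following (a generic sketch - see the article for the exact steps):

Code:
# list assembled multipath devices and their paths
multipath -ll
# verbose discovery output - shows why a path was (or was not) picked up
multipath -v3
# list the active iSCSI sessions on the node
iscsiadm -m session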


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thanks for your reply.

One server is connected to controller cm#0 ca#0 port#0 and the other server to controller cm#0 ca#0 port#1. Both servers are connected to the same controller.

BR
Stefan
 
Please run the commands that were requested earlier. The configuration and system state may be clear to you, but the forum members are not familiar with your infrastructure.

It is unusual for a system to present two different iSCSI targets on two interfaces of the same controller. Did you configure this on purpose? You were able to map one LUN to two different targets, but you are not able to map the same target to both ports?
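One quick way to check what each array port actually advertises is a SendTargets discovery against each portal (sketch only; the portal addresses below are placeholders for your real ones):

Code:
iscsiadm -m discovery -t sendtargets -p <portal-ip-port0>
iscsiadm -m discovery -t sendtargets -p <portal-ip-port1>
# if both ports advertise the same target IQN for the LUN,
# both nodes can log in to it and share the LVM on top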


 
@ASM-Network

You could try to configure two iSCSI storage pools and scope each to one node only (using the "nodes" option). Or you can simply use direct iscsiadm commands to connect iSCSI manually on each server and use only a shared LVM pool. The process is described in the article I mentioned earlier.
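As a rough sketch of the first option in /etc/pve/storage.cfg (the storage IDs, portals and target IQNs below are placeholders; the volume group must already exist on the LUN):

Code:
iscsi: fujitsu-port0
        portal <portal-ip-port0>
        target <target-iqn-port0>
        content none
        nodes pve1

iscsi: fujitsu-port1
        portal <portal-ip-port1>
        target <target-iqn-port1>
        content none
        nodes pve2

lvm: fujitsu-lvm
        vgname <vg-on-the-lun>
        content images,rootdir
        shared 1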


 
We made some changes and set up multipath according to your documentation. The first server creates the mpath0 device, but the second server doesn't. Rebooting didn't help. Do you have any ideas?

BR Stefan
 
The first server creates the mpath0 device, but the second server doesn't. Rebooting didn't help. Do you have any ideas?
Hi Stefan, the most likely reason is that the 2nd server is not configured properly in one way or another.
However, that is based only on your brief report. If you provide some system information from both servers, perhaps a better suggestion will come to mind.

Cheers


 
OK. Here are multipath.conf and multipath -v3 output from both servers.
server 1:
multipath.conf

devices {
    device {
        vendor "FUJITSU"
        product "ETERNUS_DXL"
        prio alua
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        failback immediate
        no_path_retry 0
        path_checker tur
        dev_loss_tmo 2097151
        fast_io_fail_tmo 1
    }
}

blacklist {
    wwid .*
}

blacklist_exceptions {
    wwid "3600000e00d320000003230e400000000"
}

multipaths {
    multipath {
        wwid "3600000e00d320000003230e400000000"
        alias mpath0
    }
}

multipath -v3
236823.784181 | set open fds limit to 1073741816/1073741816
236823.784223 | _read_bindings_file: reading /etc/multipath/bindings
236823.784246 | loading /usr/lib/multipath/libchecktur.so checker
236823.784362 | checker tur: message table size = 4
236823.784371 | loading /usr/lib/multipath/libprioconst.so prioritizer
236823.784481 | _init_foreign: foreign library "nvme" is not enabled
236823.812055 | sr0: device node name blacklisted
236823.812206 | sda: size = 976773168
236823.812379 | sda: vendor = ATA
236823.812402 | sda: product = WD Blue SA510 2.
236823.812425 | sda: rev = 8100
236823.813122 | sda: h:b:t:l = 2:0:0:0
236823.813489 | sda: tgt_node_name = ata-3.00
236823.813496 | sda: uid_attribute = ID_SERIAL (setting: multipath internal)
236823.813501 | sda: recheck_wwid = 1 (setting: multipath.conf defaults/devices section)
236823.813702 | sda: 60801 cyl, 255 heads, 63 sectors/track, start at 0
236823.813709 | sda: vpd_vendor_id = 0 "undef" (setting: multipath internal)
236823.813729 | sda: serial = 222327A000B9
236823.813734 | sda: detect_checker = yes (setting: multipath internal)
236823.813768 | sda checker timeout = 30 s (setting: kernel sysfs)
236823.813858 | sda: path_checker = tur (setting: multipath internal)
236823.813903 | sda: tur state = up
236823.813912 | sda: uid = WD_Blue_SA510_2.5_500GB_222327A000B9 (udev)
236823.813924 | sda: wwid WD_Blue_SA510_2.5_500GB_222327A000B9 blacklisted
236823.814468 | sdb: size = 27487789056
236823.814634 | sdb: vendor = FUJITSU
236823.814657 | sdb: product = ETERNUS_DXL
236823.814679 | sdb: rev = 0000
236823.815516 | find_hwe: found 2 hwtable matches for FUJITSU:ETERNUS_DXL:0000
236823.815527 | sdb: h:b:t:l = 10:0:0:0
236823.816030 | sdb: tgt_node_name = iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:003230e4:0000
236823.816037 | sdb: uid_attribute = ID_SERIAL (setting: multipath internal)
236823.816042 | sdb: recheck_wwid = 1 (setting: multipath.conf defaults/devices section)
236823.816263 | sdb: 52428 cyl, 64 heads, 32 sectors/track, start at 0
236823.816271 | sdb: vpd_vendor_id = 0 "undef" (setting: multipath internal)
236823.816290 | sdb: serial = 320C39
236823.816295 | sdb: detect_checker = yes (setting: multipath internal)
236823.816330 | sdb checker timeout = 30 s (setting: kernel sysfs)
236823.857183 | sdb: path_checker = tur (setting: storage device autodetected)
236823.857408 | sdb: tur state = up
236823.857418 | sdb: uid = 3600000e00d320000003230e400000000 (udev)
236823.857484 | sdb: wwid 3600000e00d320000003230e400000000 whitelisted
236823.857490 | sdb: detect_prio = yes (setting: multipath internal)
236823.857541 | loading /usr/lib/multipath/libpriosysfs.so prioritizer
236823.857672 | sdb: prio = sysfs (setting: storage device autodetected)
236823.857678 | sdb: prio args = "" (setting: storage device autodetected)
236823.857709 | sdb: sysfs prio = 50
236823.857886 | sdc: size = 34359738368
236823.858068 | sdc: vendor = IBM
236823.858091 | sdc: product = 2145
236823.858114 | sdc: rev = 0000
236823.858864 | sdc: h:b:t:l = 12:0:0:0
236823.859367 | sdc: tgt_node_name = iqn.1986-03.com.ibm:2145.zollconcluster.node2
236823.859374 | sdc: uid_attribute = ID_SERIAL (setting: multipath internal)
236823.859382 | sdc: recheck_wwid = 1 (setting: multipath.conf defaults/devices section)
236823.859580 | sdc: 0 cyl, 64 heads, 32 sectors/track, start at 0
236823.859588 | sdc: vpd_vendor_id = 0 "undef" (setting: multipath internal)
236823.859607 | sdc: serial = 00c02020f25eXX00
236823.859612 | sdc: detect_checker = yes (setting: multipath internal)
236823.859645 | sdc checker timeout = 30 s (setting: kernel sysfs)
236823.860186 | sdc: path_checker = tur (setting: storage device autodetected)
236823.860419 | sdc: tur state = up
236823.860427 | sdc: uid = 360050763008083c97800000000000008 (udev)
236823.860434 | sdc: wwid 360050763008083c97800000000000008 blacklisted
236823.860611 | loop0: device node name blacklisted
236823.860723 | loop1: device node name blacklisted
236823.860832 | loop2: device node name blacklisted
....
236823.872016 | dm-8: device node name blacklisted
236823.872119 | dm-9: device node name blacklisted
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/prod/rev dev_st
3600000e00d320000003230e400000000 10:0:0:0 sdb 8:16 50 undef undef FUJITSU,ETERNUS_DXL,0000 unknown
236823.873473 | multipath-tools v0.11.1 (01/24, 2025)
236823.873492 | libdevmapper version 1.02.205
236823.873667 | kernel device mapper v4.50.0
236823.873683 | DM multipath kernel driver v1.15.0
236823.874164 | sdb: size = 27487789056
236823.874175 | sdb: vendor = FUJITSU
236823.874182 | sdb: product = ETERNUS_DXL
236823.874189 | sdb: rev = 0000
236823.875330 | find_hwe: found 2 hwtable matches for FUJITSU:ETERNUS_DXL:0000
236823.875337 | sdb: h:b:t:l = 10:0:0:0
236823.875547 | sdb: tgt_node_name = iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:003230e4:0000
236823.875570 | sdb: 52428 cyl, 64 heads, 32 sectors/track, start at 0
236823.875577 | sdb: vpd_vendor_id = 0 "undef" (setting: multipath internal)
236823.875597 | sdb: serial = 320C39
236823.875858 | sdb: udev property ID_WWN whitelisted
236823.875884 | sdb: wwid 3600000e00d320000003230e400000000 whitelisted
236823.876153 | unloading tur checker
236823.876194 | unloading sysfs prioritizer
236823.876216 | unloading const prioritizer

---------------------------------
server2:

devices {
    device {
        vendor "FUJITSU"
        product "ETERNUS_DXL"
        prio alua
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        failback immediate
        no_path_retry 0
        path_checker tur
        dev_loss_tmo 2097151
        fast_io_fail_tmo 1
    }
}

blacklist {
    wwid .*
}

blacklist_exceptions {
    wwid "3600000e00d320000003230e400000000"
}

multipaths {
    multipath {
        wwid "3600000e00d320000003230e400000000"
        alias mpath0
    }
}

multipath -v3
95945.306696 | set open fds limit to 1073741816/1073741816
95945.306767 | _read_bindings_file: reading /etc/multipath/bindings
95945.306806 | loading /usr/lib/multipath/libchecktur.so checker
95945.307075 | checker tur: message table size = 4
95945.307097 | loading /usr/lib/multipath/libprioconst.so prioritizer
95945.307297 | _init_foreign: foreign library "nvme" is not enabled
95945.329405 | sda: size = 466796544
95945.329527 | sda: vendor = Lenovo
95945.329545 | sda: product = RAID 930-8i-2GB
95945.329563 | sda: rev = 5.22
95945.330350 | sda: h:b:t:l = 0:2:0:0
95945.330588 | sda: tgt_node_name =
95945.330594 | sda: uid_attribute = ID_SERIAL (setting: multipath internal)
95945.330599 | sda: recheck_wwid = 1 (setting: multipath.conf defaults/devices section)
95945.330791 | sda: 29056 cyl, 255 heads, 63 sectors/track, start at 0
95945.330798 | sda: vpd_vendor_id = 0 "undef" (setting: multipath internal)
95945.330816 | sda: serial = 00389a3a0a8be12c2ec068ba16b26200
95945.330823 | sda: detect_checker = yes (setting: multipath internal)
95945.330870 | sda checker timeout = 90 s (setting: kernel sysfs)
95945.330963 | sda: path_checker = tur (setting: multipath internal)
95945.331012 | sda: tur state = up
95945.331022 | sda: uid = 3600062b216ba68c02e2ce18b0a3a9a38 (udev)
95945.331054 | sda: wwid 3600062b216ba68c02e2ce18b0a3a9a38 blacklisted
95945.331397 | sdb: size = 34359738368
95945.331494 | sdb: vendor = IBM
95945.331510 | sdb: product = 2145
95945.331528 | sdb: rev = 0000
95945.332382 | sdb: h:b:t:l = 15:0:0:0
95945.332672 | sdb: tgt_node_name = iqn.1986-03.com.ibm:2145.zollconcluster.node2
95945.332678 | sdb: uid_attribute = ID_SERIAL (setting: multipath internal)
95945.332681 | sdb: recheck_wwid = 1 (setting: multipath.conf defaults/devices section)
95945.332857 | sdb: 0 cyl, 64 heads, 32 sectors/track, start at 0
95945.332862 | sdb: vpd_vendor_id = 0 "undef" (setting: multipath internal)
95945.332873 | sdb: serial = 00c02020f25eXX00
95945.332876 | sdb: detect_checker = yes (setting: multipath internal)
95945.332896 | sdb checker timeout = 30 s (setting: kernel sysfs)
95945.333821 | sdb: path_checker = tur (setting: storage device autodetected)
95945.334295 | sdb: tur state = up
95945.334314 | sdb: uid = 360050763008083c97800000000000008 (udev)
95945.334321 | sdb: wwid 360050763008083c97800000000000008 blacklisted
95945.334476 | loop0: device node name blacklisted
95945.334534 | loop1: device node name blacklisted
95945.334586 | loop2: device node name blacklisted
95945.334639 | loop3: device node name blacklisted
95945.334689 | loop4: device node name blacklisted
...
95945.339362 | dm-7: device node name blacklisted
95945.339414 | dm-8: device node name blacklisted
95945.339465 | dm-9: device node name blacklisted
===== no paths =====
95945.339808 | multipath-tools v0.11.1 (01/24, 2025)
95945.339828 | libdevmapper version 1.02.205
95945.340078 | kernel device mapper v4.50.0
95945.340096 | DM multipath kernel driver v1.15.0
95945.340399 | unloading tur checker
95945.340454 | unloading const prioritizer
 
server1:
root@pve1:/etc# multipath -ll
mpath0 (3600000e00d320000003230e400000000) dm-6 FUJITSU,ETERNUS_DXL
size=13T features='0' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
`- 10:0:0:0 sdb 8:16 active ready running

server2:
root@pve2:/etc# multipath -ll
root@pve2:/etc#
 
Please use CODE tags, available in the edit box menu. It makes reading large configuration blocks easier.

Before a multipath device can form, the server needs to see the raw block devices. Those should be listed via:
lsblk
lsscsi

Are the outputs the same across servers? If not, perhaps your Fujitsu is not zoned properly.
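If the FUJITSU LUN is missing on one node even though the array-side mapping looks right, a rescan of the existing iSCSI sessions is usually enough to pick it up without a reboot (sketch):

Code:
# rescan all logged-in iSCSI sessions for new or changed LUNs
iscsiadm -m session --rescan
# then re-check
lsscsi
multipath -ll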

Cheers


 
Code:
Server 1:
root@pve1:/etc# lsscsi
[0:0:0:0]    cd/dvd  HL-DT-ST DVDROM DUD0N     PG01  /dev/sr0
[2:0:0:0]    disk    ATA      WD Blue SA510 2. 8100  /dev/sda
[10:0:0:0]   disk    FUJITSU  ETERNUS_DXL      0000  /dev/sdb
[12:0:0:0]   disk    IBM      2145             0000  /dev/sdc


root@pve1:/etc# lsblk
NAME                        MAJ:MIN  RM   SIZE RO TYPE  MOUNTPOINTS
sda                           8:0     0 465.8G  0 disk
├─sda1                        8:1     0  1007K  0 part
├─sda2                        8:2     0     1G  0 part  /boot/efi
└─sda3                        8:3     0 464.8G  0 part
  ├─pve-swap                252:0     0     8G  0 lvm   [SWAP]
  ├─pve-root                252:1     0    96G  0 lvm   /
  ├─pve-data_tmeta          252:2     0   3.4G  0 lvm
  │ └─pve-data-tpool        252:4     0 337.9G  0 lvm
  │   └─pve-data            252:5     0 337.9G  1 lvm
  └─pve-data_tdata          252:3     0 337.9G  0 lvm
    └─pve-data-tpool        252:4     0 337.9G  0 lvm
      └─pve-data            252:5     0 337.9G  1 lvm
sdb                           8:16    0  12.8T  0 disk
└─mpath0                    252:6     0  12.8T  0 mpath
sdc                           8:32    0    16T  0 disk
├─ibmstore-vm--105--disk--1 252:7     0    60G  0 lvm
├─ibmstore-vm--105--disk--0 252:8     0     4M  0 lvm
├─ibmstore-vm--100--disk--0 252:9     0   300G  0 lvm
├─ibmstore-vm--101--disk--1 252:10    0    90G  0 lvm
├─ibmstore-vm--101--disk--2 252:11    0   100G  0 lvm
├─ibmstore-vm--101--disk--0 252:12    0     4M  0 lvm
├─ibmstore-vm--103--disk--0 252:13    0   100G  0 lvm
├─ibmstore-vm--103--disk--1 252:14    0     4M  0 lvm
├─ibmstore-vm--202--disk--0 252:15    0    40G  0 lvm
├─ibmstore-vm--203--disk--0 252:16    0    50G  0 lvm
├─ibmstore-vm--203--disk--1 252:17    0   250G  0 lvm
├─ibmstore-vm--203--disk--2 252:18    0   100G  0 lvm
├─ibmstore-vm--203--disk--3 252:19    0   100G  0 lvm
├─ibmstore-vm--204--disk--0 252:20    0    20G  0 lvm
├─ibmstore-vm--204--disk--1 252:21    0    30G  0 lvm
├─ibmstore-vm--204--disk--2 252:22    0    20G  0 lvm
├─ibmstore-vm--205--disk--0 252:23    0    50G  0 lvm
├─ibmstore-vm--206--disk--0 252:24    0    32G  0 lvm
├─ibmstore-vm--206--disk--1 252:25    0    15G  0 lvm
├─ibmstore-vm--208--disk--0 252:26    0    15G  0 lvm
├─ibmstore-vm--208--disk--1 252:27    0    50G  0 lvm
├─ibmstore-vm--208--disk--2 252:28    0    75G  0 lvm
└─ibmstore-vm--208--disk--3 252:29    0    75G  0 lvm
sr0                          11:0     1  1024M  0 rom
nbd0                         43:0     0     0B  0 disk
nbd1                         43:32    0     0B  0 disk
nbd2                         43:64    0     0B  0 disk
nbd3                         43:96    0     0B  0 disk
nbd4                         43:128   0     0B  0 disk
nbd5                         43:160   0     0B  0 disk
nbd6                         43:192   0     0B  0 disk
nbd7                         43:224   0     0B  0 disk
nbd8                         43:256   0     0B  0 disk
nbd9                         43:288   0     0B  0 disk
nbd10                        43:320   0     0B  0 disk
nbd11                        43:352   0     0B  0 disk
nbd12                        43:384   0     0B  0 disk
nbd13                        43:416   0     0B  0 disk
nbd14                        43:448   0     0B  0 disk
nbd15                        43:480   0     0B  0 disk
nbd16                        43:512   0     0B  0 disk
nbd17                        43:544   0     0B  0 disk
nbd18                        43:576   0     0B  0 disk
nbd19                        43:608   0     0B  0 disk
nbd20                        43:640   0     0B  0 disk
nbd21                        43:672   0     0B  0 disk
nbd22                        43:704   0     0B  0 disk
nbd23                        43:736   0     0B  0 disk
nbd24                        43:768   0     0B  0 disk
nbd25                        43:800   0     0B  0 disk
nbd26                        43:832   0     0B  0 disk
nbd27                        43:864   0     0B  0 disk
nbd28                        43:896   0     0B  0 disk
nbd29                        43:928   0     0B  0 disk
nbd30                        43:960   0     0B  0 disk
nbd31                        43:992   0     0B  0 disk
nbd32                        43:1024  0     0B  0 disk
nbd33                        43:1056  0     0B  0 disk
nbd34                        43:1088  0     0B  0 disk
nbd35                        43:1120  0     0B  0 disk
nbd36                        43:1152  0     0B  0 disk
nbd37                        43:1184  0     0B  0 disk
nbd38                        43:1216  0     0B  0 disk
nbd39                        43:1248  0     0B  0 disk
nbd40                        43:1280  0     0B  0 disk
nbd41                        43:1312  0     0B  0 disk
nbd42                        43:1344  0     0B  0 disk
nbd43                        43:1376  0     0B  0 disk
nbd44                        43:1408  0     0B  0 disk
nbd45                        43:1440  0     0B  0 disk
nbd46                        43:1472  0     0B  0 disk
nbd47                        43:1504  0     0B  0 disk
nbd48                        43:1536  0     0B  0 disk
nbd49                        43:1568  0     0B  0 disk
nbd50                        43:1600  0     0B  0 disk
nbd51                        43:1632  0     0B  0 disk
nbd52                        43:1664  0     0B  0 disk
nbd53                        43:1696  0     0B  0 disk
nbd54                        43:1728  0     0B  0 disk
nbd55                        43:1760  0     0B  0 disk
nbd56                        43:1792  0     0B  0 disk
nbd57                        43:1824  0     0B  0 disk
nbd58                        43:1856  0     0B  0 disk
nbd59                        43:1888  0     0B  0 disk
nbd60                        43:1920  0     0B  0 disk
nbd61                        43:1952  0     0B  0 disk
nbd62                        43:1984  0     0B  0 disk
nbd63                        43:2016  0     0B  0 disk
Code:
Server 2:

root@pve2:~# lsscsi
[0:1:6:0]    enclosu LSI      VirtualSES       03    -
[0:2:0:0]    disk    Lenovo   RAID 930-8i-2GB  5.22  /dev/sda
[15:0:0:0]   disk    IBM      2145             0000  /dev/sdb


root@pve2:~# lsblk
NAME                        MAJ:MIN  RM   SIZE RO TYPE MOUNTPOINTS
sda                           8:0     0 222.6G  0 disk
├─sda1                        8:1     0  1007K  0 part
├─sda2                        8:2     0     1G  0 part /boot/efi
└─sda3                        8:3     0 221.6G  0 part
  ├─pve-swap                252:0     0     8G  0 lvm  [SWAP]
  ├─pve-root                252:1     0  65.4G  0 lvm  /
  ├─pve-data_tmeta          252:2     0   1.3G  0 lvm
  │ └─pve-data              252:4     0 129.5G  0 lvm
  └─pve-data_tdata          252:3     0 129.5G  0 lvm
    └─pve-data              252:4     0 129.5G  0 lvm
sdb                           8:16    0    16T  0 disk
├─ibmstore-vm--107--disk--1 252:5     0    90G  0 lvm
├─ibmstore-vm--107--disk--2 252:6     0   500G  0 lvm
├─ibmstore-vm--107--disk--0 252:7     0     4M  0 lvm
├─ibmstore-vm--108--disk--0 252:8     0   125G  0 lvm
├─ibmstore-vm--108--disk--1 252:9     0    50G  0 lvm
├─ibmstore-vm--109--disk--1 252:10    0   125G  0 lvm
├─ibmstore-vm--109--disk--2 252:11    0   460G  0 lvm
├─ibmstore-vm--109--disk--3 252:12    0   920G  0 lvm
├─ibmstore-vm--109--disk--0 252:13    0     4M  0 lvm
├─ibmstore-vm--110--disk--3 252:14    0    80G  0 lvm
├─ibmstore-vm--110--disk--1 252:15    0   940G  0 lvm
├─ibmstore-vm--110--disk--2 252:16    0   300G  0 lvm
├─ibmstore-vm--110--disk--0 252:17    0     4M  0 lvm
├─ibmstore-vm--201--disk--0 252:18    0    50G  0 lvm
├─ibmstore-vm--201--disk--1 252:19    0     4M  0 lvm
├─ibmstore-vm--207--disk--1 252:20    0   100G  0 lvm
├─ibmstore-vm--207--disk--0 252:21    0     4M  0 lvm
└─ibmstore-vm--300--disk--0 252:22    0    50G  0 lvm
nbd0                         43:0     0     0B  0 disk
nbd1                         43:32    0     0B  0 disk
nbd2                         43:64    0     0B  0 disk
nbd3                         43:96    0     0B  0 disk
nbd4                         43:128   0     0B  0 disk
nbd5                         43:160   0     0B  0 disk
nbd6                         43:192   0     0B  0 disk
nbd7                         43:224   0     0B  0 disk
nbd8                         43:256   0     0B  0 disk
nbd9                         43:288   0     0B  0 disk
nbd10                        43:320   0     0B  0 disk
nbd11                        43:352   0     0B  0 disk
nbd12                        43:384   0     0B  0 disk
nbd13                        43:416   0     0B  0 disk
nbd14                        43:448   0     0B  0 disk
nbd15                        43:480   0     0B  0 disk
nbd16                        43:512   0     0B  0 disk
nbd17                        43:544   0     0B  0 disk
nbd18                        43:576   0     0B  0 disk
nbd19                        43:608   0     0B  0 disk
nbd20                        43:640   0     0B  0 disk
nbd21                        43:672   0     0B  0 disk
nbd22                        43:704   0     0B  0 disk
nbd23                        43:736   0     0B  0 disk
nbd24                        43:768   0     0B  0 disk
nbd25                        43:800   0     0B  0 disk
nbd26                        43:832   0     0B  0 disk
nbd27                        43:864   0     0B  0 disk
nbd28                        43:896   0     0B  0 disk
nbd29                        43:928   0     0B  0 disk
nbd30                        43:960   0     0B  0 disk
nbd31                        43:992   0     0B  0 disk
nbd32                        43:1024  0     0B  0 disk
nbd33                        43:1056  0     0B  0 disk
nbd34                        43:1088  0     0B  0 disk
nbd35                        43:1120  0     0B  0 disk
nbd36                        43:1152  0     0B  0 disk
nbd37                        43:1184  0     0B  0 disk
nbd38                        43:1216  0     0B  0 disk
nbd39                        43:1248  0     0B  0 disk
nbd40                        43:1280  0     0B  0 disk
nbd41                        43:1312  0     0B  0 disk
nbd42                        43:1344  0     0B  0 disk
nbd43                        43:1376  0     0B  0 disk
nbd44                        43:1408  0     0B  0 disk
nbd45                        43:1440  0     0B  0 disk
nbd46                        43:1472  0     0B  0 disk
nbd47                        43:1504  0     0B  0 disk
nbd48                        43:1536  0     0B  0 disk
nbd49                        43:1568  0     0B  0 disk
nbd50                        43:1600  0     0B  0 disk
nbd51                        43:1632  0     0B  0 disk
nbd52                        43:1664  0     0B  0 disk
nbd53                        43:1696  0     0B  0 disk
nbd54                        43:1728  0     0B  0 disk
nbd55                        43:1760  0     0B  0 disk
nbd56                        43:1792  0     0B  0 disk
nbd57                        43:1824  0     0B  0 disk
nbd58                        43:1856  0     0B  0 disk
nbd59                        43:1888  0     0B  0 disk
nbd60                        43:1920  0     0B  0 disk
nbd61                        43:1952  0     0B  0 disk
nbd62                        43:1984  0     0B  0 disk
nbd63                        43:2016  0     0B  0 disk
 
Code:
Server 1:
root@pve1:/etc# iscsiadm -m session
tcp: [1] 192.168.2.64:3260,1 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:003230e4:0000 (non-flash)
tcp: [2] 192.168.3.66:3260,3 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:003230e4:0100 (non-flash)
tcp: [3] 192.168.5.249:3260,2 iqn.1986-03.com.ibm:2145.xxxx.node2 (non-flash)

Server 2:
root@pve2:~# iscsiadm -m session
tcp: [1] 192.168.5.249:3260,2 iqn.1986-03.com.ibm:2145.xxxx.node2 (non-flash)
tcp: [2] 192.168.2.65:3260,2 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:003230e4:0001 (non-flash)
tcp: [3] 192.168.3.67:3260,4 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:003230e4:0101 (non-flash)
 
Hi bbgeek17,

Thanks for your patience and your tips.

There was indeed a configuration problem in the storage. We fixed it, and after restarting server 2, it also has the mpath0 connection.
Now it works as expected.

Thanks again for your help.

BR and Cheers Stefan

[Closed]
 
Hi @ASM-Network , thank you for reporting back. I am glad to hear your issue is resolved.

You can mark this thread as "SOLVED" by editing the first post and selecting the appropriate prefix from the subject dropdown. This helps keep the forum tidy and assists future users.

Cheers


 
One other point - the output from "lsscsi" on server1 does not actually show the two devices that would indicate a healthy multipath setup. Your "multipath -ll" also shows a single path.
So although you now have an "mpath" device, it may actually be in a reduced-functionality state and would not provide the redundancy you expect.
Things may have changed since you made updates to your SAN; however, I would recommend reviewing your configuration one more time.
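As a sketch of what closing that gap would involve (the portal address and target IQN below are placeholders): log the node in to a second portal exporting the same LUN, e.g. on the other controller, and mpath0 should then list two paths:

Code:
iscsiadm -m discovery -t sendtargets -p <second-portal-ip>
iscsiadm -m node -T <target-iqn> -p <second-portal-ip>:3260 --login
# a healthy device shows two "active ready running" path lines
multipath -ll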

Cheers

