Fujitsu LUN doesn't show up

grusso

Apr 4, 2023
Hello,
I've tried many times.
I've added the iSCSI target iqn.2000-09.com.fujitsu (iSCSI) and I can see four Fujitsu targets. I chose one, but when I try to create an LVM disk, I can't go further because the base volume list is empty:
Datacenter -> Storage -> Add -> LVM


I've tried removing the iSCSI target and adding it again on the host, without results. It's a 12TB thin LUN.
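For reference, the base volume list in that dialog is filled by scanning the iSCSI target for LUNs; a quick CLI check for whether Proxmox sees any LUN behind the portal (a minimal sketch):

Bash:
# list the LUNs Proxmox detects behind an iSCSI portal
pvesm scan iscsi 10.0.14.61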

ls -l /dev/disk/by-id

Bash:
total 0
lrwxrwxrwx 1 root root 10 Apr  3 16:41 dm-name-pve-root -> ../../dm-1
lrwxrwxrwx 1 root root 10 Apr  3 16:41 dm-name-pve-swap -> ../../dm-0
lrwxrwxrwx 1 root root 10 Apr  3 16:41 dm-uuid-LVM-5zWj58ww4Elzf8UHnCdTfTeZlpBmaFxE9dcn9xuCr6G1eI8wSy5B5O185nSmHm56 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Apr  3 16:41 dm-uuid-LVM-5zWj58ww4Elzf8UHnCdTfTeZlpBmaFxEqrml41GxQJObaYEJGykmfSuPn1KL1pzS -> ../../dm-0
lrwxrwxrwx 1 root root 10 Apr  4 15:47 lvm-pv-uuid-tNZX3Q-PyTX-rBd5-ti47-zQCe-1Edy-4rc2qz -> ../../sda3
lrwxrwxrwx 1 root root  9 Apr  4 15:44 scsi-3600000e00d3200000032121000060000 -> ../../sdc
lrwxrwxrwx 1 root root  9 Apr  4 15:44 scsi-SFUJITSU_ETERNUS_DXL_320484 -> ../../sdc
lrwxrwxrwx 1 root root  9 Apr  3 16:41 usb-FUJITSU_Dual_microSD_012345678901-0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Apr  3 16:41 usb-FUJITSU_Dual_microSD_012345678901-0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr  3 16:41 usb-FUJITSU_Dual_microSD_012345678901-0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr  4 15:47 usb-FUJITSU_Dual_microSD_012345678901-0:0-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Apr  4 15:44 wwn-0x600000e00d3200000032121000060000 -> ../../sdc

iscsiadm --mode discovery --type sendtargets --portal 10.0.14.61

Bash:
[fe80::200:e50:dc84:8400]:3260,1 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0000
10.0.12.61:3260,1 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0000
10.0.14.61:3260,1 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0000
[fe80::200:e50:dc84:8401]:3260,2 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0001
10.0.14.62:3260,2 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0001
10.0.12.62:3260,2 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0001
[fe80::200:e50:dc84:8410]:3260,3 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0100
10.0.12.63:3260,3 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0100
10.0.14.63:3260,3 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0100
[fe80::200:e50:dc84:8411]:3260,4 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0101
10.0.12.64:3260,4 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0101
10.0.14.64:3260,4 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0101

Any suggestions?
 
I've partially solved the problem by rebooting the host... It's a rather drastic approach, so I'd like to ask whether there is a softer solution.
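For reference, a softer alternative than a full reboot is usually to rescan the existing iSCSI sessions for new LUNs; a minimal sketch, assuming open-iscsi is in use:

Bash:
# rescan all logged-in iSCSI sessions for new/resized LUNs
iscsiadm -m session --rescan
# alternatively, force a rescan of every SCSI host
for h in /sys/class/scsi_host/host*/scan; do echo '- - -' > "$h"; done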
 
Hello,
when I try to add a 2nd portal, I get this error message:
create storage failed: cfs-lock 'file-storage_cfg' error: got lock request timeout (500)

P.S. It seems that on the last try it added the 2nd and 3rd portals anyway, even though I got this message.
 
@grusso You don't need a second portal. All paths are already connected. You must enable multipath and then use the multipath device (mpath) for your LVM datastore.

Please read the wiki.
 
I tried to add the other portals because I only got one WWID line when following the wiki at this step:
/lib/udev/scsi_id -g -u -d /dev/sdb
xxxx000e00d32000000321210000xxxxx
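For reference, a quick way to compare the WWIDs of all attached disks with the same tool; a minimal sketch:

Bash:
# print the WWID each SCSI disk reports
for d in /dev/sd[a-z]; do echo -n "$d: "; /lib/udev/scsi_id -g -u -d "$d"; done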
 
The ETERNUS storages use a central portal and expose all possible paths to the hosts.
You only need to install multipath and edit the multipath settings. By default, all disks are excluded from multipath discovery on Debian systems; that's a little different from ESXi, Windows and Red Hat systems.
When you have configured it correctly,
multipath -l
shows 1 multipath device with 4 paths.
On the Fujitsu support page there is an example multipath.conf file.
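For reference, the install-and-reload steps on a Proxmox/Debian host; a minimal sketch, assuming the settings go into /etc/multipath.conf:

Bash:
# multipath-tools is not installed by default on Proxmox/Debian
apt install multipath-tools
# after editing /etc/multipath.conf, reload the multipath maps
systemctl restart multipathd
multipath -r
# verify the resulting map and paths
multipath -ll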
 

Code:
root@pa-sp1-r1-hpxm-1a:~# multipath -l
Apr 05 13:07:40 | ignoring extra data starting with '(*1)' on line 9 of /etc/multipath.conf
Apr 05 13:07:40 | ignoring extra data starting with '(*2)' on line 11 of /etc/multipath.conf
mpath0 (3600000e00d3200000032121000060000) dm-6 FUJITSU,ETERNUS_DXL
size=12T features='0' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- 11:0:0:0 sdb 8:16 active undef running

Code:
/etc/multipath.conf

devices {
 device {
 vendor "FUJITSU"
 product "ETERNUS_DXL"
 prio alua
 path_grouping_policy group_by_prio
 path_selector "round-robin 0"
 failback immediate
 no_path_retry 0 (*1)
 path_checker tur
 dev_loss_tmo 2097151 (*2)
 fast_io_fail_tmo 1
 }
}

blacklist {
        wwid .*
}
blacklist_exceptions {
        wwid "3600000e00d3200000032121000060000"
}
multipaths {
  multipath {
        wwid "3600000e00d3200000032121000060000"
        alias mpath0
  }
}
 
I'm not sure about the configuration; I've followed the wiki and your suggestions.
The performance test shows a lot of errors.

Code:
multipath -l
Apr 05 16:08:19 | ignoring extra data starting with '(*1)' on line 9 of /etc/multipath.conf
Apr 05 16:08:19 | ignoring extra data starting with '(*2)' on line 11 of /etc/multipath.conf
mpath0 (3600000e00d3200000032121000060000) dm-6 FUJITSU,ETERNUS_DXL
size=12T features='0' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- 11:0:0:0 sdb 8:16 active undef running


fio --filename=/dev/mapper/mpath0 --direct=1 --rw=read --bs=1m --size=20G --numjobs=200 --runtime=60 --group_reporting --name=file1

Code:
file1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=1
...
fio-3.25
Starting 200 processes
fio: io_u error on file /dev/mapper/mpath0: Input/output error: read offset=0, buflen=1048576
fio: first I/O failed. If /dev/mapper/mpath0 is a zoned block device, consider --zonemode=zbd
fio: io_u error on file /dev/mapper/mpath0: Input/output error: read offset=0, buflen=1048576
fio: io_u error on file /dev/mapper/mpath0: Input/output error: read offset=0, buflen=1048576
fio: io_u error on file /dev/mapper/mpath0: Input/output error: read offset=0, buflen=1048576
fio: first I/O failed. If /dev/mapper/mpath0 is a zoned block device, consider --zonemode=zbd
fio: pid=30994, err=5/file:io_u.c:1834, func=io_u error, error=Input/output error
~~~~~~~~~~~~~~~~~~~~~~~~~~~ CUT ~~~~~~~~~~~~~~~~~~~~~~~~~~~
fio: pid=30973, err=5/file:io_u.c:1834, func=io_u error, error=Input/output error

file1: (groupid=0, jobs=200): err= 5 (file:io_u.c:1834, func=io_u error, error=Input/output error): pid=30947: Wed Apr  5 16:06:38 2023
  read: IOPS=18, BW=92.2KiB/s (94.4kB/s)(1024KiB/11109msec)
    clat (nsec): min=28962k, max=28962k, avg=28962156.00, stdev= 0.00
     lat (nsec): min=28963k, max=28963k, avg=28963321.00, stdev= 0.00
    clat percentiles (usec):
     |  1.00th=[28967],  5.00th=[28967], 10.00th=[28967], 20.00th=[28967],
     | 30.00th=[28967], 40.00th=[28967], 50.00th=[28967], 60.00th=[28967],
     | 70.00th=[28967], 80.00th=[28967], 90.00th=[28967], 95.00th=[28967],
     | 99.00th=[28967], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967],
     | 99.99th=[28967]
   bw (  KiB/s): min= 2047, max= 2048, per=100.00%, avg=2048.00, stdev= 0.00, samples=1
   iops        : min=    1, max=    2, avg= 2.00, stdev= 0.00, samples=1
  lat (msec)   : 50=0.50%
  cpu          : usr=0.00%, sys=0.00%, ctx=739, majf=16, minf=55855
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=49.9%, 4=50.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=201,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=92.2KiB/s (94.4kB/s), 92.2KiB/s-92.2KiB/s (94.4kB/s-94.4kB/s), io=1024KiB (1049kB), run=11109-11109msec
 
Please remove the (*1) and (*2); they are hints for the variables in the article from Fujitsu.
You blacklist everything with wwid .*, so you have to add all 4 WWIDs as exceptions.
The WWIDs look almost the same and differ only in one digit.

According to multipath -l you only see one path. If you write exclusively over one path on a multipath storage, you get exactly the errors you posted.
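For reference, the same device section with the (*1)/(*2) markers stripped so multipath stops complaining; a minimal sketch that keeps the annotated values literally:

Code:
devices {
        device {
                vendor "FUJITSU"
                product "ETERNUS_DXL"
                prio alua
                path_grouping_policy group_by_prio
                path_selector "round-robin 0"
                failback immediate
                no_path_retry 0
                path_checker tur
                dev_loss_tmo 2097151
                fast_io_fail_tmo 1
        }
}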
 

I've only gotten one WWID since the start.
I used 1 portal because you said that is enough; am I wrong?
Now I've tried to set up another host affinity with another IP on the same VLAN, so the host has 2 IPs with affinity to the same LUN.
Maybe I did something wrong in the port group... Maybe I need to start the multipath setup from scratch.
 
On ETERNUS storages the WWIDs look similar, but one digit in the middle is changed.
You can add the other target IPs manually; you should then see all 4 paths as 4 devices.
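For reference, logging in to the remaining portals by hand, using the targets from the discovery output earlier in this thread; a minimal sketch:

Bash:
# log in to one of the additional portals/targets reported by discovery
iscsiadm -m node -T iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0001 -p 10.0.14.62:3260 --login
# repeat for the remaining portals, then check for new devices
iscsiadm -m session --rescan
ls -l /dev/disk/by-id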
 
Hello,
after a while we tried again from scratch and encountered the same problems.
It seems I have only 1 path/WWID if I add only 1 portal,
and when I create a virtual machine I get a kernel panic (Rocky Linux 9).
I've created an LVM on device mpath0 as you suggested.

Do you have any suggestions? I've attached screenshots and logs.

(Attachments: lvm.PNG, kernel panic.PNG)


iscsiadm -m discovery -t sendtargets -p 10.0.14.61:3260
Code:
[fe80::200:e50:dc84:8400]:3260,1 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0000
10.0.12.61:3260,1 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0000
10.0.14.61:3260,1 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0000
[fe80::200:e50:dc84:8401]:3260,2 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0001
10.0.14.62:3260,2 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0001
10.0.12.62:3260,2 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0001
[fe80::200:e50:dc84:8410]:3260,3 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0100
10.0.12.63:3260,3 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0100
10.0.14.63:3260,3 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0100
[fe80::200:e50:dc84:8411]:3260,4 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0101
10.0.12.64:3260,4 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0101
10.0.14.64:3260,4 iqn.2000-09.com.fujitsu:storage-system.eternus-dxl:00321210:0101

*****/etc/multipath.conf*****
Code:
defaults {
                user_friendly_names yes
}

devices {
        device {
                vendor                          "FUJITSU"
                product                         "DX200S5"
                prio                            alua
                path_grouping_policy            group_by_prio
                path_selector                   "round-robin 0"
                failback                        "immediate"
                no_path_retry                   "10"
        }
}

blacklist {
        wwid .*
}
blacklist_exceptions {
        wwid "3600000e00d3200000032121000060000"
}
multipaths {
  multipath {
        wwid "3600000e00d3200000032121000060000"
        alias mpath0
  }
}
 


What is the output of multipath -l?
 
Thank you Falk.
This is the output:
Code:
mpath0 (3600000e00d3200000032121000060000) dm-5 FUJITSU,ETERNUS_DXL
size=12T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 12:0:0:0 sdd 8:48 active undef running
| `- 13:0:0:0 sdb 8:16 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 15:0:0:0 sde 8:64 active undef running
  `- 14:0:0:0 sdc 8:32 active undef running
 
You can create an LVM volume group on the multipath device under your host, and then add the LVM under Datacenter.
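For reference, the CLI equivalent; a minimal sketch (the volume group name vg_eternus and the storage ID eternus-lvm are placeholders):

Bash:
# initialize the multipath device and create a volume group on it
pvcreate /dev/mapper/mpath0
vgcreate vg_eternus /dev/mapper/mpath0
# then add it as LVM storage (same as Datacenter -> Storage -> Add -> LVM)
pvesm add lvm eternus-lvm --vgname vg_eternus --content images,rootdir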
 
Hello Falk, thank you.
Now it seems to work, but the network bond I've configured with eno3+eno4 (LACP bond0) can't be selected as a NIC for a single VM.
On these two cards I have an LACP trunk where all VLANs are passed.
Host network:
(Screenshot: network configuration.PNG)

But on the virtual machine I can't select the bond/DMZ_Lab, only the mgmt card:
(Screenshot: network configuration VM 1.PNG)
 
I've solved it.
For people reading this: I created a new vmbr1 (Linux bridge) using the bond0 LACP and flagged it VLAN aware, roughly as sketched below.
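For reference, what that looks like in /etc/network/interfaces; a minimal sketch (interface names from this thread, the remaining values assumed):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094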

@Falk_R. Is the output of multipath -l correct? Is the host using all 4 paths?
 
Yes, the multipath output looks good. But I see you use a VLAN on your bond for iSCSI. Combining multipathing and bonding is not supported by most vendors, including Fujitsu. For iSCSI multipath, please use multiple single NICs.
 