iSCSI storage creating two devices for a single target

frankw

New Member
Jan 21, 2021
Using Proxmox 6.3-3, I am attempting to add iSCSI storage from a TrueNAS Core 12.0-U1.1 box.

Adding iSCSI through the UI or manually through iscsiadm works, but it ends up creating TWO devices for the same target, which then causes LVM creation to fail.

The TrueNAS box provides iSCSI targets to other machines without problems.

Scanning shows the different targets served by the box:
root@proxmox:~# pvesm scan iscsi 192.168.1.30
iqn.2005-10.org.freenas.ctl:pbs-backup 192.168.1.30:3260
iqn.2005-10.org.freenas.ctl:r620proxmox-vmdisks 192.168.1.30:3260
iqn.2005-10.org.freenas.ctl:windows-data 192.168.1.30:3260

However, when adding a target, it creates one device using the DNS name as the portal and one device using the IP address as the portal. It doesn't matter whether I specify the DNS name or the IP address as the portal in iscsiadm (or the UI):

root@proxmox:~# iscsiadm --mode node -d 3 --targetname iqn.2005-10.org.freenas.ctl:r620proxmox-vmdisks --portal 192.168.1.30 --login
iscsiadm: ip 192.168.1.30, port -1, tgpt -1
iscsiadm: Max file limits 1024 1048576
iscsiadm: exec_node_op: : node [iqn.2005-10.org.freenas.ctl:r620proxmox-vmdisks,192.168.1.30,3260] sid 0
iscsiadm: default: Creating session 1/1
Logging in to [iface: default, target: iqn.2005-10.org.freenas.ctl:r620proxmox-vmdisks, portal: truenas.theweigels.org,3260] (multiple)
iscsiadm: default: Creating session 1/1
Logging in to [iface: default, target: iqn.2005-10.org.freenas.ctl:r620proxmox-vmdisks, portal: 192.168.1.30,3260] (multiple)
Login to [iface: default, target: iqn.2005-10.org.freenas.ctl:r620proxmox-vmdisks, portal: truenas.theweigels.org,3260] successful.
Login to [iface: default, target: iqn.2005-10.org.freenas.ctl:r620proxmox-vmdisks, portal: 192.168.1.30,3260] successful.

And trying to create the LVM storage fails. Doing it manually results in the reasonable error that the device cannot be used because duplicates exist:
root@proxmox:~# pvcreate /dev/sde
/dev/sdd: open failed: No medium found
WARNING: Not using device /dev/sdf for PV aqqceH-TDt5-ojEn-Evtl-bjan-sEUd-JL1I41.
WARNING: PV aqqceH-TDt5-ojEn-Evtl-bjan-sEUd-JL1I41 prefers device /dev/sde because device was seen first.
Cannot use device /dev/sde with duplicates.

When using the UI I get (basically) the same error:
create storage failed: error during cfs-locked 'file-storage_cfg' operation: pvcreate '/dev/disk/by-id/scsi-STrueNAS_iSCSI_Disk_0cc47a6a5abe003' error: Cannot use device /dev/disk/by-id/scsi-STrueNAS_iSCSI_Disk_0cc47a6a5abe003 with duplicates. (500)

Any ideas? Anyone else seeing this?

Thanks!

Frank
 
You need to use multipath to avoid this.

See https://pve.proxmox.com/wiki/ISCSI_Multipath
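
The short version of that page (the device name is just an example here, yours will differ, and the service name can vary by version):

Code:
apt-get install multipath-tools
/lib/udev/scsi_id -g -u -d /dev/sde        # get the WWID of the iSCSI disk
multipath -a <wwid>                        # whitelist that WWID
systemctl restart multipath-tools.service
multipath -ll                              # verify the mpath device shows up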
 
Thanks @Richard, that was my first thought as well, but as far as I understand multipath, this is not a multipath situation:
1) There is only a single NIC in both the Proxmox server and the TrueNAS server.
2) scsi_id returns the same WWID for both devices that get created (see below).
3) multipath -ll seems to show nothing about multipath either (see below).

Am I missing something or misunderstanding multipath?

root@proxmox:~# /lib/udev/scsi_id -g -u -d /dev/sde
36589cfc000000c0b4c7b5d47c5f59bf4
root@proxmox:~# /lib/udev/scsi_id -g -u -d /dev/sdf
36589cfc000000c0b4c7b5d47c5f59bf4

root@proxmox:~# multipath -v 3 -ll
Jan 21 10:24:00 | set open fds limit to 1048576/1048576
Jan 21 10:24:00 | loading //lib/multipath/libchecktur.so checker
Jan 21 10:24:00 | checker tur: message table size = 3
Jan 21 10:24:00 | loading //lib/multipath/libprioconst.so prioritizer
Jan 21 10:24:00 | foreign library "nvme" loaded successfully
Jan 21 10:24:00 | sda: udev property ID_WWN whitelisted
Jan 21 10:24:00 | sda: mask = 0x7
Jan 21 10:24:00 | sda: dev_t = 8:0
Jan 21 10:24:00 | sda: size = 976773168
Jan 21 10:24:00 | sda: vendor = ATA
Jan 21 10:24:00 | sda: product = Samsung SSD 860
Jan 21 10:24:00 | sda: rev = 3B6Q
Jan 21 10:24:00 | sda: h:b:t:l = 0:0:4:0
Jan 21 10:24:00 | sda: tgt_node_name =
Jan 21 10:24:00 | sda: path state = running
Jan 21 10:24:00 | sda: 60801 cyl, 255 heads, 63 sectors/track, start at 0
Jan 21 10:24:00 | sda: serial = S4CMNE0M703486B
Jan 21 10:24:00 | sda: get_state
Jan 21 10:24:00 | sda: detect_checker = yes (setting: multipath internal)
Jan 21 10:24:00 | failed to issue vpd inquiry for pgc9
Jan 21 10:24:00 | sda: path_checker = tur (setting: multipath internal)
Jan 21 10:24:00 | sda: checker timeout = 90 s (setting: kernel sysfs)
Jan 21 10:24:00 | sda: tur state = up
Jan 21 10:24:00 | sdb: udev property ID_WWN whitelisted
Jan 21 10:24:00 | sdb: mask = 0x7
Jan 21 10:24:00 | sdb: dev_t = 8:16
Jan 21 10:24:00 | sdb: size = 976773168
Jan 21 10:24:00 | sdb: vendor = ATA
Jan 21 10:24:00 | sdb: product = Samsung SSD 860
Jan 21 10:24:00 | sdb: rev = 3B6Q
Jan 21 10:24:00 | sdb: h:b:t:l = 0:0:5:0
Jan 21 10:24:00 | sdb: tgt_node_name =
Jan 21 10:24:00 | sdb: path state = running
Jan 21 10:24:00 | sdb: 60801 cyl, 255 heads, 63 sectors/track, start at 0
Jan 21 10:24:00 | sdb: serial = S4CMNE0M703477N
Jan 21 10:24:00 | sdb: get_state
Jan 21 10:24:00 | sdb: detect_checker = yes (setting: multipath internal)
Jan 21 10:24:00 | failed to issue vpd inquiry for pgc9
Jan 21 10:24:00 | sdb: path_checker = tur (setting: multipath internal)
Jan 21 10:24:00 | sdb: checker timeout = 90 s (setting: kernel sysfs)
Jan 21 10:24:00 | sdb: tur state = up
Jan 21 10:24:00 | sdc: udev property ID_WWN whitelisted
Jan 21 10:24:00 | sdc: mask = 0x7
Jan 21 10:24:00 | sdc: dev_t = 8:32
Jan 21 10:24:00 | sdc: size = 570949632
Jan 21 10:24:00 | sdc: vendor = DELL
Jan 21 10:24:00 | sdc: product = PERC H310
Jan 21 10:24:00 | sdc: rev = 2.12
Jan 21 10:24:00 | sdc: h:b:t:l = 0:2:0:0
Jan 21 10:24:00 | sdc: tgt_node_name =
Jan 21 10:24:00 | sdc: path state = running
Jan 21 10:24:00 | sdc: 35539 cyl, 255 heads, 63 sectors/track, start at 0
Jan 21 10:24:00 | sdc: serial = 0048b250098d003c27000f0f17c0110b
Jan 21 10:24:00 | sdc: get_state
Jan 21 10:24:00 | sdc: detect_checker = yes (setting: multipath internal)
Jan 21 10:24:00 | failed to issue vpd inquiry for pgc9
Jan 21 10:24:00 | sdc: path_checker = tur (setting: multipath internal)
Jan 21 10:24:00 | sdc: checker timeout = 90 s (setting: kernel sysfs)
Jan 21 10:24:00 | sdc: tur state = up
Jan 21 10:24:00 | sdd: blacklisted, udev property missing
Jan 21 10:24:00 | sr0: udev property ID_WWN whitelisted
Jan 21 10:24:00 | sr0: device node name blacklisted
Jan 21 10:24:00 | sde: udev property ID_WWN whitelisted
Jan 21 10:24:00 | sde: mask = 0x7
Jan 21 10:24:00 | sde: dev_t = 8:64
Jan 21 10:24:00 | sde: size = 21474836544
Jan 21 10:24:00 | sde: vendor = TrueNAS
Jan 21 10:24:00 | sde: product = iSCSI Disk
Jan 21 10:24:00 | sde: rev = 0123
Jan 21 10:24:00 | sde: h:b:t:l = 8:0:0:21
Jan 21 10:24:00 | sde: tgt_node_name = iqn.2005-10.org.freenas.ctl:r620proxmox-vmdisks
Jan 21 10:24:00 | sde: path state = running
Jan 21 10:24:00 | sde: 65535 cyl, 255 heads, 63 sectors/track, start at 0
Jan 21 10:24:00 | sde: serial = 0cc47a6a5abe003
Jan 21 10:24:00 | sde: get_state
Jan 21 10:24:00 | sde: detect_checker = yes (setting: multipath internal)
Jan 21 10:24:00 | failed to issue vpd inquiry for pgc9
Jan 21 10:24:00 | sde: path_checker = tur (setting: storage device autodetected)
Jan 21 10:24:00 | sde: checker timeout = 30 s (setting: kernel sysfs)
Jan 21 10:24:00 | sde: tur state = up
Jan 21 10:24:00 | sdf: udev property ID_WWN whitelisted
Jan 21 10:24:00 | sdf: mask = 0x7
Jan 21 10:24:00 | sdf: dev_t = 8:80
Jan 21 10:24:00 | sdf: size = 21474836544
Jan 21 10:24:00 | sdf: vendor = TrueNAS
Jan 21 10:24:00 | sdf: product = iSCSI Disk
Jan 21 10:24:00 | sdf: rev = 0123
Jan 21 10:24:00 | sdf: h:b:t:l = 9:0:0:21
Jan 21 10:24:00 | sdf: tgt_node_name = iqn.2005-10.org.freenas.ctl:r620proxmox-vmdisks
Jan 21 10:24:00 | sdf: path state = running
Jan 21 10:24:00 | sdf: 65535 cyl, 255 heads, 63 sectors/track, start at 0
Jan 21 10:24:00 | sdf: serial = 0cc47a6a5abe003
Jan 21 10:24:00 | sdf: get_state
Jan 21 10:24:00 | sdf: detect_checker = yes (setting: multipath internal)
Jan 21 10:24:00 | failed to issue vpd inquiry for pgc9
Jan 21 10:24:00 | sdf: path_checker = tur (setting: storage device autodetected)
Jan 21 10:24:00 | sdf: checker timeout = 30 s (setting: kernel sysfs)
Jan 21 10:24:00 | sdf: tur state = up
Jan 21 10:24:00 | loop0: blacklisted, udev property missing
Jan 21 10:24:00 | loop1: blacklisted, udev property missing
Jan 21 10:24:00 | loop2: blacklisted, udev property missing
Jan 21 10:24:00 | loop3: blacklisted, udev property missing
Jan 21 10:24:00 | loop4: blacklisted, udev property missing
Jan 21 10:24:00 | loop5: blacklisted, udev property missing
Jan 21 10:24:00 | loop6: blacklisted, udev property missing
Jan 21 10:24:00 | loop7: blacklisted, udev property missing
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/prod/rev dev_st
0:0:4:0 sda 8:0 -1 undef undef ATA,Samsung SSD 860 unknown
0:0:5:0 sdb 8:16 -1 undef undef ATA,Samsung SSD 860 unknown
0:2:0:0 sdc 8:32 -1 undef undef DELL,PERC H310 unknown
8:0:0:21 sde 8:64 -1 undef undef TrueNAS,iSCSI Disk unknown
9:0:0:21 sdf 8:80 -1 undef undef TrueNAS,iSCSI Disk unknown
Jan 21 10:24:00 | libdevmapper version 1.02.155 (2018-12-18)
Jan 21 10:24:00 | DM multipath kernel driver v1.13.0
Jan 21 10:24:00 | unloading const prioritizer
Jan 21 10:24:00 | unloading tur checker
 
Thanks @Richard, that was my first thought as well, but as far as I understand multipath, this is not a multipath situation:
1) There is only a single NIC in both the Proxmox server and the TrueNAS server.

"Multi"path means more than one logical connection to a target. Yes, usually that is because there are multiple NICs, but not necessarily. To figure out which connections currently exist, run:

Code:
iscsiadm --mode session
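
With two portals for the same target, you would expect two sessions to the same IQN, something like (illustrative output, session IDs will differ):

Code:
tcp: [1] truenas.theweigels.org:3260,1 iqn.2005-10.org.freenas.ctl:r620proxmox-vmdisks (non-flash)
tcp: [2] 192.168.1.30:3260,1 iqn.2005-10.org.freenas.ctl:r620proxmox-vmdisks (non-flash)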

2) scsi_id returns the same WWID for both devices that get created (see below).

Yes, this is normal.

3) multipath -ll seems to show nothing about multipath either (see below).

Follow exactly the advice at https://pve.proxmox.com/wiki/ISCSI_Multipath#Configuration

As a result, running

Code:
lsblk

should show something like:

Code:
sdd                                                                                                     8:48   0   4.9G  0 disk
└─14945540000000000bb88fa967f9725acabff7f6c16f4189c                                                   253:11   0   4.9G  0 mpath
sde                                                                                                     8:64   0   4.9G  0 disk
└─14945540000000000bb88fa967f9725acabff7f6c16f4189c                                                   253:11   0   4.9G  0 mpath

i.e., you see the disk twice, as /dev/sdd and /dev/sde, but the OS uses the mpath device:

Code:
/dev/mapper/14945540000000000bb88fa967f9725acabff7f6c16f4189c
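
pvcreate then works against that single mapper device instead of the duplicate sd* nodes, e.g. (illustrative, substitute your own WWID):

Code:
pvcreate /dev/mapper/14945540000000000bb88fa967f9725acabff7f6c16f4189c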
 
@Richard thanks so much for clearing up my assumption that multipath only occurs through multiple physical routes to the SAN!

I followed the multipath docs, and even though there is only a single WWID (as both devices report the same WWID), it now round-robins and treats them as a multipath device.

So thanks a bunch for helping me get this figured out!

I am still very confused, though, about why the TrueNAS reports multiple portals to iscsiadm. I had also raised it on the TrueNAS forum, where I was advised to file a bug, as it didn't seem right to them either.

The issue is tracked there as https://jira.ixsystems.com/browse/NAS-109083 (mainly adding it so that if others run into the same issue they can check the status over there, should it turn out to be unexpected behavior of TrueNAS).

And for anyone trying to do the same fix/workaround, here is the simple multipath.conf file that did the magic for me:

root@proxmox:~# cat /etc/multipath.conf
blacklist {
        wwid .*
}

blacklist_exceptions {
        wwid "36589cfc000000c0b4c7b5d47c5f59bf4"
}

multipaths {
        multipath {
                wwid "36589cfc000000c0b4c7b5d47c5f59bf4"
                alias mpath0
        }
}

defaults {
        polling_interval 2
        path_selector "round-robin 0"
        path_grouping_policy multibus
        uid_attribute ID_SERIAL
        rr_min_io 100
        failback immediate
        no_path_retry queue
        user_friendly_names yes
}
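
After writing that file, reload multipath and check the result; roughly like this (the service name can vary by version, and the output below is illustrative rather than copied verbatim):

Code:
systemctl restart multipath-tools.service
multipath -ll
mpath0 (36589cfc000000c0b4c7b5d47c5f59bf4) dm-6 TrueNAS,iSCSI Disk
size=10T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 8:0:0:21 sde 8:64 active ready running
  `- 9:0:0:21 sdf 8:80 active ready running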
 
Thank you!
I have been experiencing the same problem.
It started occurring on all of my nodes a week ago, and I haven't been able to access the iSCSI storage since.
Going to try this fix later today and see if it works...

Every example of multipath I was able to find used multiple WWIDs, but my drives are exactly the same as you describe.
 
