HPE Nimble defaults to GST (Group Scoped Targets) as of NimbleOS 5.x+; we ran into this recently on reinit installs migrating from VMware. GST uses a single target with many LUN mappings. (https://support.hpe.com/hpesc/publi...C-4A77-A096-C3004AD1DB7B.html&docLocale=en_US)
In short...
-PVE 8.2.7 connected to HPE Nimble arrays won't see any LUN ID above 1 as a shared LVM. If the LVM is not shared, the LUN will work up to ID 2.
-Was able to replicate the same behavior on a Synology iSCSI setup using a single target with multiple LUNs (exported as LUN0, LUN1, LUN2, LUN3, etc.): any LUN numbered above 1 fails to bring up the VG if shared, but the VG for LUN2 does come up if it is not shared.
-In all cases iscsiadm connects to the target, and pvesm can see the LUN IDs all the way up the testing group (ID0-ID20), but can only grab LUN0 or LUN1 for shared LVM.
-In cases where Host1 grabs LUN2 and it's shared, it shows up as ? on all other hosts in the cluster; the LVM is visible under vgscan but not vgs.
-This shows up in the syslog as soon as the LVM goes ? on any additional host for LUN ID2+: "pvestatd[1600]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5"
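For reference, this is roughly how we compared what each host sees (the VG name is whatever your shared LVM uses; lsscsi needs the lsscsi package installed):

```shell
# The fourth field of [H:C:T:L] is the LUN ID, so this confirms the
# kernel actually sees LUN2 on the session.
lsscsi

# vgscan finds the VG metadata on LUN2's PV on every host...
vgscan

# ...but on the hosts where the storage shows as ?, the shared VG
# does not appear in vgs output.
vgs
```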
After iscsiadm maps LUN1-LUN2, we get this in the syslog every time we replicate it:
kernel: sd 1:0:0:1: LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
kernel: scsi 1:0:0:2: Direct-Access SYNOLOGY Storage 4.0 PQ: 0 ANSI: 5
kernel: sd 1:0:0:2: Attached scsi generic sg2 type 0
kernel: sd 1:0:0:2: [sdc] 1073741824 512-byte logical blocks: (550 GB/512 GiB)
kernel: sd 1:0:0:2: [sdc] Write Protect is off
kernel: sd 1:0:0:2: [sdc] Mode Sense: 43 00 10 08
kernel: sd 1:0:0:2: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
kernel: sd 1:0:0:2: [sdc] Preferred minimum I/O size 512 bytes
kernel: sd 1:0:0:2: [sdc] Optimal transfer size 16384 logical blocks > dev_max (8192 logical blocks)
kernel: sd 1:0:0:2: [sdc] Attached SCSI disk
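Since the kernel explicitly warns that it will not remap LUN assignments on its own, we force a rescan after any LUN change. These are standard open-iscsi commands; the target IQN and portal IP below are placeholders:

```shell
# Rescan all logged-in iSCSI sessions so new/changed LUNs are picked up
iscsiadm -m session --rescan

# Or log the session out and back in for a clean LUN map
# (IQN and portal are examples, substitute your own)
iscsiadm -m node -T iqn.2000-01.com.synology:example-target -p 192.168.1.10 --logout
iscsiadm -m node -T iqn.2000-01.com.synology:example-target -p 192.168.1.10 --login
```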
So, does the iSCSI stack used by PVE not support GST?
Why do LUN0 and LUN1 work together in this model but not LUN2+?
Sorry for the really dumb questions; most iSCSI deployments we have done always use target:LUN0 mappings. This GST thing does not seem to affect VMware the way it does PVE/iscsiadm, so maybe it is something not used much outside of VMware shops? All of our Nimble arrays were older OSes upgraded through the years, with volumes then migrated to new shelves. This is the first time we have reinitialized these units wholesale like this in a very, very long time, and all our migrated volumes were always VST.
FYI for anyone with this issue: on Nimble you can put the config back to the VST model (from SSH: "group --edit --default_iscsi_target_scope volume").
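A sketch of the full sequence we used, assuming the array-side scope change applies to the volumes in question (the portal IP is a placeholder for your discovery address):

```shell
# On the Nimble array (SSH), switch the default target scope back to VST
group --edit --default_iscsi_target_scope volume

# On each PVE host, drop the old group-scoped session and rediscover
# the per-volume targets (portal IP is an example)
iscsiadm -m node --logoutall=all
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node --loginall=all
```

After this, each volume shows up as its own target with LUN0, which PVE handles fine as shared LVM.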