Installation FC Storage (QLogic QLE2692)

milo1987

Member
Oct 21, 2020
Hi all,

I've been working for five days now on attaching a Fujitsu ETERNUS DX100 S5 storage array to Proxmox.
I use a QLogic QLE2692 card for a direct Fibre Channel connection.

If I pass the card through directly to a Windows VM, I can see and interact with the storage without any problems.
So I suspect a driver or connection problem on the Proxmox side.

The driver is the stock qla2xxx driver built into the default Debian kernel.
(https://driverdownloads.qlogic.com/...uct.aspx?ProductCategory=39&Product=1259&Os=5)
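A quick sanity check that the in-kernel module is actually loaded and bound (generic commands, nothing QLogic-specific; the version string depends on your kernel build):

```shell
# Check that the qla2xxx module is loaded and which version the kernel ships
lsmod | grep qla2xxx || echo "qla2xxx not loaded"
modinfo -F version qla2xxx 2>/dev/null || echo "no version info"
# Recent driver messages often explain why a port never came up
dmesg 2>/dev/null | grep -i qla2xxx | tail -n 20
```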

lspci
Code:
b3:00.0 Fibre Channel: QLogic Corp. ISP2722-based 16/32Gb Fibre Channel to PCIe Adapter (rev 01)
b3:00.1 Fibre Channel: QLogic Corp. ISP2722-based 16/32Gb Fibre Channel to PCIe Adapter (rev 01)

lspci -s b3:00.0 -vvv
Code:
b3:00.0 Fibre Channel: QLogic Corp. ISP2722-based 16/32Gb Fibre Channel to PCIe Adapter (rev 01)
        Subsystem: QLogic Corp. QLE2692 Dual Port 16Gb Fibre Channel to PCIe Adapter
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 159
        NUMA node: 0
        Region 0: Memory at fba05000 (64-bit, prefetchable) [size=4K]
        Region 2: Memory at fba02000 (64-bit, prefetchable) [size=8K]
        Region 4: Memory at fb900000 (64-bit, prefetchable) [size=1M]
        Expansion ROM at fbe40000 [disabled] [size=256K]
        Capabilities: [44] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [4c] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 2048 bytes, PhantFunc 0, Latency L0s <4us, L1 <1us
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 50.000W
                DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                        MaxPayload 256 bytes, MaxReadReq 4096 bytes
                DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM L0s L1, Exit Latency L0s <512ns, L1 <2us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range B, TimeoutDis+, LTR-, OBFF Not Supported
                DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis-, LTR-, OBFF Disabled
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete+, EqualizationPhase1+
                         EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
        Capabilities: [88] Vital Product Data
pcilib: sysfs_read_vpd: read failed: Input/output error
                Not readable
        Capabilities: [90] MSI-X: Enable+ Count=16 Masked-
                Vector table: BAR=2 offset=00000000
                PBA: BAR=2 offset=00001000
        Capabilities: [9c] Vendor Specific Information: Len=0c <?>
        Capabilities: [100 v1] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
                CEMsk:  RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ NonFatalErr+
                AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
        Capabilities: [154 v1] Alternative Routing-ID Interpretation (ARI)
                ARICap: MFVC- ACS-, Next Function: 1
                ARICtl: MFVC- ACS-, Function Group: 0
        Capabilities: [1c0 v1] #19
        Capabilities: [1f4 v1] Vendor Specific Information: ID=0001 Rev=1 Len=014 <?>
        Kernel driver in use: qla2xxx
        Kernel modules: qla2xxx

qaucli
I can also run the command-line tool qaucli, and it shows the storage as well (see the "device list" screenshot).
The one message that puzzles me is "no fabric attached device". What does this mean? When I try to set it with the tool, I can't enable a fabric attached device.
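For what it's worth, my understanding is that "fabric" refers to a login into an FC switch; with a cable straight to the storage controller, the link negotiates as loop or point-to-point instead, so "no fabric attached device" should be expected for a direct connection. The topology the kernel actually negotiated can be read from the standard fc_host sysfs class (the exact wording of the values varies by kernel version):

```shell
# Show the negotiated topology and state of each FC host port.
# Direct attach typically reports "Point-To-Point" or a loop type,
# not "NPort (fabric ...)".
for h in /sys/class/fc_host/host*; do
    if [ ! -d "$h" ]; then
        echo "no fc_host entries found"
        break
    fi
    echo "$(basename "$h"): $(cat "$h/port_type"), $(cat "$h/port_state"), speed $(cat "$h/speed")"
done
```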


You are my last hope; I have read all of Google and the debianforum.
I think I've simply overlooked something and am only one command away from mounting the storage.
I even tried attaching the storage to an Ubuntu VM, but it shows the same trouble.
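In case it helps: once the link is up, the usual way to make the LUNs appear without a reboot is a SCSI bus rescan through the standard sysfs interface (needs root; device names below are whatever your system assigns):

```shell
# Trigger a rescan of every SCSI host so newly presented LUNs are detected
for scan in /sys/class/scsi_host/host*/scan; do
    [ -e "$scan" ] && echo "- - -" > "$scan" 2>/dev/null
done
# Newly found LUNs show up as additional sd* block devices
lsblk -o NAME,SIZE,TYPE,VENDOR,MODEL 2>/dev/null || cat /proc/partitions
```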

If you need more information, just write me a message.

I hope someone can give me the decisive hint.

Thanks for your answers


(This is the English version of my post from the German part of the Proxmox forum.)
 

Attachments

  • adapterinfo.PNG (114.8 KB)
  • adpteruebersicht.PNG (55.5 KB)
  • device list.PNG (33.8 KB)
  • no fabric.PNG (17 KB)
After another day of work, the problem seems to be identified.

It's the qla2xxx driver, which does not work on Proxmox 6.2 (kernel 5.4).
When I install Proxmox 6.1, everything works fine.

On 6.2, dmesg shows the following:

Code:
[    1.354193] qla2xxx [0000:00:00.0]-0005: : QLogic Fibre Channel HBA Driver: 10.01.00.19-k.
[    1.500888] qla2xxx [0000:00:10.0]-011c: : MSI-X vector count: 16.
[    1.500890] qla2xxx [0000:00:10.0]-001d: : Found an ISP2261 irq 11 iobase 0x(____ptrval____).
[    1.526330] qla2xxx [0000:00:10.0]-00c6:3: MSI-X: Using 3 vectors
[    1.603857] qla2xxx [0000:00:10.0]-0075:3: ZIO mode 6 enabled; timer delay (200 us).
[    3.443945] qla2xxx [0000:00:10.0]-d302:3: qla2x00_get_fw_version: FC-NVMe is Enabled (0x785a)
[    3.724040] scsi host3: qla2xxx
[    3.725032] qla2xxx [0000:00:10.0]-00fb:3: QLogic QLE2692 - QLogic 16Gb 2-port FC to PCIe Gen3 x8 Adapter.
[    3.725067] qla2xxx [0000:00:10.0]-00fc:3: ISP2261: PCIe (8.0GT/s x8) @ 0000:00:10.0 hdma- host#=3 fw=8.08.231 (d0d5).
[    3.743940] qla2xxx [0000:00:10.1]-011c: : MSI-X vector count: 16.
[    3.743942] qla2xxx [0000:00:10.1]-001d: : Found an ISP2261 irq 10 iobase 0x(____ptrval____).
[    3.758020] qla2xxx [0000:00:10.1]-00c6:4: MSI-X: Using 3 vectors
[    3.835923] qla2xxx [0000:00:10.1]-0075:4: ZIO mode 6 enabled; timer delay (200 us).
[    4.201133] qla2xxx [0000:00:10.0]-500a:3: LOOP UP detected (16 Gbps).
[    4.340765] WARNING: CPU: 0 PID: 7 at drivers/scsi/qla2xxx/qla_nvme.c:689 qla_nvme_register_hba+0x132/0x150 [qla2xxx]
[    4.340765] Modules linked in: psmouse(E) qla2xxx(E+) virtio_net(E) net_failover(E) failover(E) nvme_fc(E) nvme_fabrics(E) scsi_transport_fc(E) virtio_scsi(E) i2c_piix4(E) pata_acpi(E) floppy(E)
[    4.340785] Workqueue: events_unbound qla_register_fcport_fn [qla2xxx]
[    4.340796] RIP: 0010:qla_nvme_register_hba+0x132/0x150 [qla2xxx]
[    4.340829]  qla_nvme_register_remote+0x10a/0x179 [qla2xxx]
[    4.340841]  qla2x00_update_fcport+0x1e9/0x270 [qla2xxx]
[    4.340849]  qla_register_fcport_fn+0x58/0xc0 [qla2xxx]
[    4.340861] qla2xxx [0000:00:10.0]-ffff:3: register_localport: host-traddr=nn-0x200034800d72b742:pn-0x210034800d72b742 on portID:e8
[    4.340863] qla2xxx [0000:00:10.0]-ffff:3: register_localport failed: ret=ffffffea
[    5.687899] qla2xxx [0000:00:10.1]-d302:4: qla2x00_get_fw_version: FC-NVMe is Enabled (0x785a)
[    5.947906] scsi host4: qla2xxx
[    5.948878] qla2xxx [0000:00:10.1]-00fb:4: QLogic QLE2692 - QLogic 16Gb 2-port FC to PCIe Gen3 x8 Adapter.
[    5.948912] qla2xxx [0000:00:10.1]-00fc:4: ISP2261: PCIe (8.0GT/s x8) @ 0000:00:10.1 hdma- host#=4 fw=8.08.231 (d0d5).
[    6.426524] qla2xxx [0000:00:10.1]-500a:4: LOOP UP detected (16 Gbps).
[    6.567032] WARNING: CPU: 0 PID: 7 at drivers/scsi/qla2xxx/qla_nvme.c:689 qla_nvme_register_hba+0x132/0x150 [qla2xxx]
[    6.567033] Modules linked in: psmouse(E) qla2xxx(E) virtio_net(E) net_failover(E) failover(E) nvme_fc(E) nvme_fabrics(E) scsi_transport_fc(E) virtio_scsi(E) i2c_piix4(E) pata_acpi(E) floppy(E)
[    6.567052] Workqueue: events_unbound qla_register_fcport_fn [qla2xxx]
[    6.567064] RIP: 0010:qla_nvme_register_hba+0x132/0x150 [qla2xxx]
[    6.567095]  qla_nvme_register_remote+0x10a/0x179 [qla2xxx]
[    6.567107]  qla2x00_update_fcport+0x1e9/0x270 [qla2xxx]
[    6.567116]  qla_register_fcport_fn+0x58/0xc0 [qla2xxx]
[    6.567129] qla2xxx [0000:00:10.1]-ffff:4: register_localport: host-traddr=nn-0x200034800d72b743:pn-0x210034800d72b743 on portID:e8
[    6.567131] qla2xxx [0000:00:10.1]-ffff:4: register_localport failed: ret=ffffffea


The last line in particular shows a register_localport failure.

Other users on this forum seem to have the same problem after updating. Is there a known workaround?
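For anyone hitting the same trace: the WARNING originates in qla_nvme.c, i.e. the driver's FC-NVMe registration, not the classic FCP/SCSI path. A workaround reported for this class of failure is disabling FC-NVMe in the driver via its ql2xnvmeenable module parameter; treat the parameter name as an assumption for your kernel version and verify with modinfo first:

```shell
# Check whether this qla2xxx build exposes the FC-NVMe switch
modinfo qla2xxx 2>/dev/null | grep -i nvme || echo "no nvme parameter found"

# If it does: disable FC-NVMe so targets are registered via plain FCP/SCSI
echo "options qla2xxx ql2xnvmeenable=0" > /etc/modprobe.d/qla2xxx-no-nvme.conf
update-initramfs -u -k all   # Debian/Proxmox: rebuild the initrd, then reboot
```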
 
Running kernel 5.3 works.

On the new Ubuntu (kernel 5.8, I think) it works again, so there is hope it will work on Proxmox 6.3.
I think this can be closed.
 
Sorry to revive this thread.

I have the same problem as described here with my QLogic QLE2692, but running Proxmox 7.1 with kernel 5.13.
Is the problem already known, and is there a solution or workaround for it?

Thanks in advance!
 
