[SOLVED] Emulex FC Setup on Proxmox

rusquad

Feb 3, 2023
Emulex LightPulse LPe32000
Hello. Does anyone know how to configure these controllers on Proxmox? Do they need any configuration at all? I have a clean installation on a Fujitsu RX2530 M4 and have tried different kernel versions, but it doesn't help: the disk only shows up on its own about every other boot. Does anyone have a similar adapter? Please advise.
 
After powering on, nothing: no block devices are present. The adapter only starts working after the commands:
Code:
modprobe -r lpfc
modprobe lpfc
Sometimes it doesn't work on the first try.
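If the manual reload is the only thing that brings the HBA up, one stopgap (not a fix for the underlying problem) would be to wrap it in a retry loop and run it once at boot, e.g. from a systemd oneshot unit. A minimal sketch, assuming the storage presents itself with the FUJITSU vendor string seen later in this thread; names and timeouts are illustrative:
Code:
#!/bin/bash
# Reload the lpfc driver and wait until an FC-attached disk shows up.
# Illustrative helper script -- run manually or from a systemd oneshot unit.
set -e

for attempt in 1 2 3; do
    echo "Attempt ${attempt}: reloading lpfc"
    modprobe -r lpfc || true
    sleep 2
    modprobe lpfc

    # Give the link and the SCSI scan up to 60 seconds to settle.
    for i in $(seq 1 60); do
        if lsblk -S -o NAME,VENDOR,TRAN 2>/dev/null | grep -q "FUJITSU"; then
            echo "FC disk detected"
            exit 0
        fi
        sleep 1
    done
done

echo "FC disk still missing after 3 reload attempts" >&2
exit 1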
Code:
root@fuji-px1:~# modinfo lpfc
filename: /lib/modules/6.1.15-1-pve/kernel/drivers/scsi/lpfc/lpfc.ko
version: 0:14.2.0.7
author: Broadcom
description: Emulex LightPulse Fibre Channel SCSI driver 14.2.0.7
license: GPL
srcversion: 108FDA82111075920404B79
alias: pci:v0000117Cd00000094sv0000117Csd000040A7bc*sc*i*
alias: pci:v0000117Cd00000094sv0000117Csd000040A6bc*sc*i*
alias: pci:v0000117Cd00000064sv0000117Csd00004064bc*sc*i*
alias: pci:v0000117Cd000000BBsv0000117Csd000000BEbc*sc*i*
alias: pci:v0000117Cd000000BBsv0000117Csd000000BDbc*sc*i*
alias: pci:v0000117Cd000000BBsv0000117Csd000000BCbc*sc*i*
alias: pci:v0000117Cd00000094sv0000117Csd000000ACbc*sc*i*
alias: pci:v0000117Cd00000094sv0000117Csd000000A3bc*sc*i*
alias: pci:v0000117Cd00000094sv0000117Csd000000A2bc*sc*i*
alias: pci:v0000117Cd00000094sv0000117Csd000000A1bc*sc*i*
alias: pci:v0000117Cd00000094sv0000117Csd00000094bc*sc*i*
alias: pci:v0000117Cd00000094sv0000117Csd000000A0bc*sc*i*
alias: pci:v0000117Cd00000064sv0000117Csd00000065bc*sc*i*
alias: pci:v0000117Cd00000064sv0000117Csd00000064bc*sc*i*
alias: pci:v0000117Cd00000064sv0000117Csd00000063bc*sc*i*
alias: pci:v000010DFd0000072Csv*sd*bc*sc*i*
alias: pci:v000010DFd00000724sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F500sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F400sv*sd*bc*sc*i*
alias: pci:v000010DFd0000E300sv*sd*bc*sc*i*
alias: pci:v000010DFd0000E268sv*sd*bc*sc*i*
alias: pci:v000010DFd0000E208sv*sd*bc*sc*i*
alias: pci:v000010DFd0000E260sv*sd*bc*sc*i*
alias: pci:v000010DFd0000E200sv*sd*bc*sc*i*
alias: pci:v000010DFd0000E131sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F180sv*sd*bc*sc*i*
alias: pci:v000019A2d00000714sv*sd*bc*sc*i*
alias: pci:v000019A2d00000704sv*sd*bc*sc*i*
alias: pci:v000010DFd0000FC50sv*sd*bc*sc*i*
alias: pci:v000010DFd0000E180sv*sd*bc*sc*i*
alias: pci:v000010DFd0000E100sv*sd*bc*sc*i*
alias: pci:v000010DFd0000FC40sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F111sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F112sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F011sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F015sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F100sv*sd*bc*sc*i*
alias: pci:v000010DFd0000FC20sv*sd*bc*sc*i*
alias: pci:v000010DFd0000FC10sv*sd*bc*sc*i*
alias: pci:v000010DFd0000FC00sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F0A1sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F0A5sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F0E1sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F0E5sv*sd*bc*sc*i*
alias: pci:v000010DFd0000FE12sv*sd*bc*sc*i*
alias: pci:v000010DFd0000FE11sv*sd*bc*sc*i*
alias: pci:v000010DFd0000FE00sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F0D1sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F0D5sv*sd*bc*sc*i*
alias: pci:v000010DFd0000FD12sv*sd*bc*sc*i*
alias: pci:v000010DFd0000FD11sv*sd*bc*sc*i*
alias: pci:v000010DFd0000FD00sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F0F7sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F0F6sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F0F5sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F098sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F095sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F700sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F800sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F900sv*sd*bc*sc*i*
alias: pci:v000010DFd0000F980sv*sd*bc*sc*i*
alias: pci:v000010DFd0000FA00sv*sd*bc*sc*i*
alias: pci:v000010DFd00001AE5sv*sd*bc*sc*i*
alias: pci:v000010DFd0000FB00sv*sd*bc*sc*i*
depends: scsi_transport_fc,nvme-fc,nvmet-fc
retpoline: Y
intree: Y
name: lpfc
vermagic: 6.1.15-1-pve SMP preempt mod_unload modversions
parm: lpfc_debugfs_enable:Enable debugfs services (int)
parm: lpfc_debugfs_max_disc_trc:Set debugfs discovery trace depth (int)
parm: lpfc_debugfs_max_slow_ring_trc:Set debugfs slow ring trace depth (int)
parm: lpfc_debugfs_max_nvmeio_trc:Set debugfs NVME IO trace depth (int)
parm: lpfc_debugfs_mask_disc_trc:Set debugfs discovery trace mask (int)
parm: lpfc_enable_nvmet:Enable HBA port(s) WWPN as a NVME Target (array of ullong)
parm: lpfc_poll:FCP ring polling mode control: 0 - none, 1 - poll with interrupts enabled 3 - poll and disable FCP ring interrupts (int)
parm: lpfc_no_hba_reset:WWPN of HBAs that should not be reset (array of ulong)
parm: lpfc_sli_mode:SLI mode selector: 3 - select SLI-3 (uint)
parm: lpfc_enable_npiv:Enable NPIV functionality (uint)
parm: lpfc_fcf_failover_policy:FCF Fast failover=1 Priority failover=2 (uint)
parm: lpfc_fcp_wait_abts_rsp:Wait for FCP ABTS completion (uint)
parm: lpfc_enable_rrq:Enable RRQ functionality (uint)
parm: lpfc_suppress_link_up:Suppress Link Up at initialization (uint)
parm: lpfc_nodev_tmo:Seconds driver will hold I/O waiting for a device to come back (int)
parm: lpfc_devloss_tmo:Seconds driver will hold I/O waiting for a device to come back (int)
parm: lpfc_suppress_rsp:Enable suppress rsp feature is firmware supports it (uint)
parm: lpfc_nvmet_mrq:Specify number of RQ pairs for processing NVMET cmds (uint)
parm: lpfc_nvmet_mrq_post:Specify number of RQ buffers to initially post (uint)
parm: lpfc_enable_fc4_type:Enable FC4 Protocol support - FCP / NVME (uint)
parm: lpfc_log_verbose:Verbose logging bit-mask (uint)
parm: lpfc_enable_da_id:Deregister nameserver objects before LOGO (uint)
parm: lpfc_lun_queue_depth:Max number of FCP commands we can queue to a specific LUN (uint)
parm: lpfc_tgt_queue_depth:Set max Target queue depth (uint)
parm: lpfc_hba_queue_depth:Max number of FCP commands we can queue to a lpfc HBA (uint)
parm: lpfc_peer_port_login:Allow peer ports on the same physical port to login to each other. (uint)
parm: lpfc_restrict_login:Restrict virtual ports login to remote initiators. (int)
parm: lpfc_scan_down:Start scanning for devices from highest ALPA to lowest (uint)
parm: lpfc_topology:Select Fibre Channel topology (uint)
parm: lpfc_link_speed:Select link speed (int)
parm: lpfc_aer_support:Enable PCIe device AER support (uint)
parm: lpfc_sriov_nr_virtfn:Enable PCIe device SR-IOV virtual fn (uint)
parm: lpfc_req_fw_upgrade:Enable Linux generic firmware upgrade (int)
parm: lpfc_force_rscn:Force an RSCN to be sent to all remote NPorts (int)
parm: lpfc_fcp_imax:Set the maximum number of FCP interrupts per second per HBA (int)
parm: lpfc_cq_max_proc_limit:Set the maximum number CQEs processed in an iteration of CQ processing (int)
parm: lpfc_cq_poll_threshold:CQE Processing Threshold to enable Polling (uint)
parm: lpfc_fcp_cpu_map:Defines how to map CPUs to IRQ vectors per HBA (int)
parm: lpfc_fcp_class:Select Fibre Channel class of service for FCP sequences (uint)
parm: lpfc_use_adisc:Use ADISC on rediscovery to authenticate FCP devices (uint)
parm: lpfc_first_burst_size:First burst size for Targets that support first burst (uint)
parm: lpfc_nvmet_fb_size:NVME Target mode first burst size in 512B increments. (uint)
parm: lpfc_nvme_enable_fb:Enable First Burst feature for NVME Initiator. (uint)
parm: lpfc_max_scsicmpl_time:Use command completion time to control queue depth (uint)
parm: lpfc_ack0:Enable ACK0 support (uint)
parm: lpfc_xri_rebalancing:Enable/Disable XRI rebalancing (uint)
parm: lpfc_fcp_io_sched:Determine scheduling algorithm for issuing commands [0] - Hardware Queue, [1] - Current CPU (uint)
parm: lpfc_ns_query:Determine algorithm NameServer queries after RSCN [0] - GID_FT, [1] - GID_PT (uint)
parm: lpfc_fcp2_no_tgt_reset:Determine bus reset behavior for FCP2 devices [0] - issue tgt reset, [1] - no tgt reset (uint)
parm: lpfc_cr_delay:A count of milliseconds after which an interrupt response is generated (uint)
parm: lpfc_cr_count:A count of I/O completions after which an interrupt response is generated (uint)
parm: lpfc_multi_ring_support:Determines number of primary SLI rings to spread IOCB entries across (uint)
parm: lpfc_multi_ring_rctl:Identifies RCTL for additional ring configuration (uint)
parm: lpfc_multi_ring_type:Identifies TYPE for additional ring configuration (uint)
parm: lpfc_enable_SmartSAN:Enable SmartSAN functionality (uint)
parm: lpfc_fdmi_on:Enable FDMI support (uint)
parm: lpfc_discovery_threads:Maximum number of ELS commands during discovery (uint)
parm: lpfc_max_luns:Maximum allowed LUN ID (ullong)
parm: lpfc_poll_tmo:Milliseconds driver will wait between polling FCP ring (uint)
parm: lpfc_task_mgmt_tmo:Maximum time to wait for task management commands to complete (uint)
parm: lpfc_use_msi:Use Message Signaled Interrupts (1) or MSI-X (2), if possible (uint)
parm: lpfc_nvme_oas:Use OAS bit on NVME IOs (uint)
parm: lpfc_nvme_embed_cmd:Embed NVME Command in WQE (uint)
parm: lpfc_fcp_mq_threshold:Set the number of SCSI Queues advertised (uint)
parm: lpfc_hdw_queue:Set the number of I/O Hardware Queues (uint)
parm: lpfc_irq_chann:Set number of interrupt vectors to allocate (uint)
parm: lpfc_enable_hba_reset:Enable HBA resets from the driver. (uint)
parm: lpfc_enable_hba_heartbeat:Enable HBA Heartbeat. (uint)
parm: lpfc_EnableXLane:Enable Express Lane Feature. (uint)
parm: lpfc_XLanePriority:CS_CTL for Express Lane Feature. (uint)
parm: lpfc_enable_bg:Enable BlockGuard Support (uint)
parm: lpfc_prot_mask:T10-DIF host protection capabilities mask (uint)
parm: lpfc_prot_guard:T10-DIF host protection guard type (uint)
parm: lpfc_delay_discovery:Delay NPort discovery when Clean Address bit is cleared. (uint)
parm: lpfc_sg_seg_cnt:Max Scatter Gather Segment Count (uint)
parm: lpfc_enable_mds_diags:Enable MDS Diagnostics (uint)
parm: lpfc_ras_fwlog_buffsize:Host memory for FW logging (uint)
parm: lpfc_ras_fwlog_level:Firmware Logging Level (uint)
parm: lpfc_ras_fwlog_func:Firmware Logging Enabled on Function (uint)
parm: lpfc_enable_bbcr:Enable BBC Recovery (uint)
parm: lpfc_fabric_cgn_frequency:Congestion signaling fabric freq (int)
parm: lpfc_acqe_cgn_frequency:Congestion signaling ACQE freq (int)
parm: lpfc_use_cgn_signal:Use Congestion signaling if available (int)
parm: lpfc_enable_dpp:Enable Direct Packet Push (uint)
parm: lpfc_enable_mi:Enable MI (uint)
parm: lpfc_max_vmid:Maximum number of VMs supported (uint)
parm: lpfc_vmid_inactivity_timeout:Inactivity timeout in hours (uint)
parm: lpfc_vmid_app_header:Enable App Header VMID support (uint)
parm: lpfc_vmid_priority_tagging:Enable Priority Tagging VMID support (uint)

Any thoughts and suggestions are welcome.
 

Attachments

  • log after power on.txt (36.5 KB)
  • log after modprobe.txt (39.2 KB)
Hmm ... that is strange, yet not unheard of. I encountered a similar problem way back with QLogic cards; newer firmware fixed the issue. Have you tried that?
 
Hmm ... that is strange, yet not unheard of. I encountered a similar problem way back with QLogic cards; newer firmware fixed the issue. Have you tried that?
I can't figure out how they're supposed to be configured. Or do they simply either work or not work? If they do need configuration, how is it done?
 
I can't figure out how they're supposed to be configured. Or do they simply either work or not work? If they do need configuration, how is it done?
Yes, they normally just work.

For QLogic there is a Linux CLI tool to manage the controllers; maybe there is something similar for Emulex? Another idea is to just go to eBay and buy some QLogic ones, they are not that expensive. For many years now we have used QLogic exclusively, running everything from 4 to 32 Gbit without any problems.
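For what it's worth, Broadcom/Emulex does ship a management CLI of its own, the OneCommand Manager core kit with an hbacmd utility. Availability and package names for Debian-based systems vary, so treat the following as a sketch rather than a recipe:
Code:
# Assuming the Emulex OneCommand Manager core kit (hbacmd) is installed:
hbacmd listhbas                 # list detected Emulex adapters and their WWPNs
hbacmd hbaattributes <WWPN>     # firmware/driver details for one port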
 
I updated the firmware and it seemed to get better, but the connection takes a very long time, is that normal? The first link comes up after about ten minutes, and the second about five minutes after that. I haven't dealt with Fibre Channel before. Previously this server ran ESXi, and it worked.
 
You may know this already, but it's worth highlighting that Proxmox/PVE is a virtualization suite based on Debian with a mostly standard Linux kernel. You want to search for resources that describe your specific card installed/used with Linux; there is nothing PVE-specific involved here.
Under normal circumstances connections shouldn't take more than a few seconds.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
kernel 5.15.102-1-pve

Here is the difference between the two connections from dmesg; there is a link up/down event between them:
16:47:28 fuji-px1 kernel: [   96.897134] sd 13:0:0:0: [sdb] Attached SCSI disk
16:55:07 fuji-px1 kernel: [  556.138917] sd 12:0:0:0: [sdc] Attached SCSI disk

Update: it won't connect at all after a reboot.
 
In general, I'd also issue a LIP and rescan the scsi bus in order to see them, but after a normal boot of the server, the disks are immediately present in my experience.
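For reference, both the LIP and the SCSI bus rescan can be triggered through sysfs; host numbers differ per system, so the loops below simply hit every FC/SCSI host:
Code:
# Issue a loop initialization primitive (LIP) on every FC host
for h in /sys/class/fc_host/host*; do
    echo 1 > "$h/issue_lip"
done

# Ask every SCSI host to rescan for new targets and LUNs
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done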
 
It's not an exact match, since you seem to get a connection eventually and the kernel version you have should already contain the fix:
https://forum.proxmox.com/threads/e...er-not-working-in-5-15-64-and-5-15-74.118555/

I would give 5.19 and/or 6.x a try


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Yes, I have tried different kernels; they ship different versions of the lpfc driver, but the adapter still doesn't come up right away. The adapter manufacturer has a newer driver, but only as source for Ubuntu, and I have little experience with compiling.
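Building an out-of-tree driver on Proxmox is essentially the same as on Debian/Ubuntu. A rough sketch, assuming the vendor source archive ships a standard kbuild Makefile (the actual archive name, layout and any install script depend on the Broadcom release):
Code:
# Install the kernel headers for the running PVE kernel plus build tools
apt install pve-headers-$(uname -r) build-essential

# Unpack the vendor source (archive name is only an example) and build against the running kernel
tar xf elx-lpfc-dd-source.tar.gz    # hypothetical file name
cd elx-lpfc-dd-*/
make -C /lib/modules/$(uname -r)/build M=$PWD modules

# Load the freshly built module for testing (the vendor's install script, if provided, is preferable)
rmmod lpfc
insmod ./lpfc.ko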
 
In general, I'd also issue a LIP and rescan the scsi bus in order to see them, but after a normal boot of the server, the disks are immediately present in my experience.
rescan-scsi-bus.sh did not help, it does not see any LUNs. It seems to me that the storage is configured wrong somehow, but the catch is that it used to work with ESXi: two servers and one storage array.
 
Are the servers (specifically, the Emulex cards) the EXACT same ones? If not, you probably need to add the WWNs to your storage LUN mapping before you can see anything.

Never mind, I just saw that it comes up intermittently.
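If the WWNs do need to be added on the storage side, they can be read from sysfs on the host, for example:
Code:
# Print the WWPN and WWNN of every FC host port
for h in /sys/class/fc_host/host*; do
    echo "$h:"
    echo "  WWPN: $(cat $h/port_name)"
    echo "  WWNN: $(cat $h/node_name)"
done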
 
The storage is configured with one shared disk mapped to both ports, so both show the same disk. It's probably running in point-to-point mode, is that normal?
Code:
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 223.6G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0   512M  0 part /boot/efi
└─sda3               8:3    0 223.1G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0  65.8G  0 lvm  /
  ├─pve-data_tmeta 253:2    0   1.3G  0 lvm
  │ └─pve-data     253:4    0 130.6G  0 lvm
  └─pve-data_tdata 253:3    0 130.6G  0 lvm
    └─pve-data     253:4    0 130.6G  0 lvm
sdb                  8:16   0    22T  0 disk
sdc                  8:32   0    22T  0 disk
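Whether the ports actually negotiated point-to-point (typical for a direct connection to the storage without a switch) and whether the links are up can be checked through sysfs, for example:
Code:
# Show topology, link state and negotiated speed for each FC port
for h in /sys/class/fc_host/host*; do
    echo "$h: $(cat $h/port_type), $(cat $h/port_state), $(cat $h/speed)"
done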
 
The optics are connected directly to the storage.
I already tried to set up multipath, but it makes no sense while the storage connection is not stable.
Here is what I found in the logs when the first link came up:
Code:
[  264.299640] lpfc 0000:18:00.0: 0:1305 Link Down Event x206 received Data: x206 x20 x800011 x0
[ 264.377755] lpfc 0000:18:00.1: 1:1305 Link Down Event x200 received Data: x200 x20 x801011 x0
[ 264.400383] ================================================================================
[ 264.401380] UBSAN: shift-out-of-bounds in ./include/scsi/scsi_cmnd.h:227:42
[ 264.402153] shift exponent 4294967286 is too large for 64-bit type 'long long unsigned int'
[ 264.402808] CPU: 11 PID: 0 Comm: swapper/11 Tainted: P O 6.2.6-1-pve #1
[ 264.402813] Hardware name: FUJITSU PRIMERGY RX2530 M4/D3383-A1, BIOS V5.0.0.12 R1.28.0 for D3383-A1x 11/12/2018
[ 264.402816] Call Trace:
[ 264.402819] <IRQ>
[ 264.402822] dump_stack_lvl+0x48/0x70
[ 264.402833] dump_stack+0x10/0x20
[ 264.402836] __ubsan_handle_shift_out_of_bounds+0x156/0x2f0
[ 264.402846] ? lpfc_fcp_io_cmd_wqe_cmpl+0xd88/0xfe0 [lpfc]
[ 264.402891] lpfc_fcp_io_cmd_wqe_cmpl.cold+0x1c/0x62 [lpfc]
[ 264.402922] lpfc_sli4_fp_handle_cqe+0x19e/0x870 [lpfc]
[ 264.402952] ? ttwu_do_wakeup+0x1c/0x190
[ 264.402959] __lpfc_sli4_process_cq+0x107/0x270 [lpfc]
[ 264.402982] ? __lpfc_sli4_process_cq+0x107/0x270 [lpfc]
[ 264.403005] ? __pfx_lpfc_sli4_fp_handle_cqe+0x10/0x10 [lpfc]
[ 264.403035] __lpfc_sli4_hba_process_cq+0x41/0x160 [lpfc]
[ 264.403059] lpfc_cq_poll_hdler+0x1a/0x30 [lpfc]
[ 264.403082] irq_poll_softirq+0x9c/0x120
[ 264.403085] __do_softirq+0xd8/0x319
[ 264.403093] __irq_exit_rcu+0x8e/0xb0
[ 264.403098] irq_exit_rcu+0xe/0x20
[ 264.403101] common_interrupt+0x8e/0xa0
[ 264.403107] </IRQ>
[ 264.403108] <TASK>
[ 264.403109] asm_common_interrupt+0x27/0x40
[ 264.403115] RIP: 0010:cpuidle_enter_state+0xd8/0x6e0
[ 264.403121] Code: 8b 3d 08 49 61 57 e8 e7 49 53 ff 49 89 c7 0f 1f 44 00 00 31 ff e8 88 67 52 ff 80 7d d0 00 0f 85 d8 00 00 00 fb 0f 1f 44 00 00 <45> 85 f6 0f 88 05 02 00 00 4d 63 ee 49 83 fd 09 0f 87 b4 04 00 00
[ 264.403124] RSP: 0018:ffffb6a488267e38 EFLAGS: 00000246
[ 264.403127] RAX: ffff9699dfaf16c0 RBX: ffffd6a47fac2530 RCX: 000000000000001f
[ 264.403129] RDX: 0000000000000c9d RSI: 000000003d1877c2 RDI: 0000000000000000
[ 264.403131] RBP: ffffb6a488267e88 R08: 0000003d8f7d4351 R09: 0000000000000003
[ 264.403133] R10: 0000000000000001 R11: 0000000000000000 R12: ffffffffa9ac0900
[ 264.403135] R13: 0000000000000003 R14: 0000000000000003 R15: 0000003d8f7d4351
[ 264.403140] ? cpuidle_enter_state+0xc8/0x6e0
[ 264.403144] cpuidle_enter+0x2e/0x50
[ 264.403147] do_idle+0x20a/0x2a0
[ 264.403153] cpu_startup_entry+0x20/0x30
[ 264.403156] start_secondary+0x122/0x160
[ 264.403162] secondary_startup_64_no_verify+0xe5/0xeb
[ 264.403172] </TASK>
[ 264.403176] ================================================================================
[ 265.128877] lpfc 0000:18:00.0: 0:1303 Link Up Event x207 received Data: x207 x0 x80 x0 x0
[ 265.226428] lpfc 0000:18:00.1: 1:1303 Link Up Event x201 received Data: x201 x0 x80 x0 x0
[ 265.234748] lpfc 0000:18:00.0: 0:1305 Link Down Event x208 received Data: x208 x20 x800011 x0
[ 265.237803] scsi 13:0:0:0: Direct-Access FUJITSU ETERNUS_DXL 1084 PQ: 0 ANSI: 6
[ 265.238577] scsi 13:0:0:0: Attached scsi generic sg1 type 0
[ 265.353582] sd 13:0:0:0: Power-on or device reset occurred
[ 265.354251] sd 13:0:0:0: [sdb] 47244640256 512-byte logical blocks: (24.2 TB/22.0 TiB)
[ 265.354549] sd 13:0:0:0: [sdb] Write Protect is off
[ 265.354551] sd 13:0:0:0: [sdb] Mode Sense: 8f 00 00 08
[ 265.354893] sd 13:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 265.386549] sdb: sdb1 sdb9
[ 265.386820] sd 13:0:0:0: [sdb] Attached SCSI disk
[ 266.064077] lpfc 0000:18:00.0: 0:1303 Link Up Event x209 received Data: x209 x0 x80 x0 x0
[ 266.170017] lpfc 0000:18:00.0: 0:1305 Link Down Event x20a received Data: x20a x20 x800011 x0
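A UBSAN shift-out-of-bounds in lpfc_fcp_io_cmd_wqe_cmpl looks like a driver/firmware issue worth reporting upstream. When doing so, it may help to collect the adapter, firmware and driver details, for example:
Code:
# Gather basic lpfc diagnostic info for a bug report
lspci -nnk | grep -A3 -i "Fibre Channel"          # PCI IDs and bound driver
modinfo -F version lpfc                           # driver version
cat /sys/class/fc_host/host*/symbolic_name        # adapter model, firmware (FV) and driver (DV) strings
dmesg | grep -i lpfc | tail -n 50                 # recent driver messages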
 
ESXi connects to the storage right at the installation stage, and when ESXi boots both links come up simultaneously. On Proxmox, only one link blinks.
 
