New installation connecting to an existing FC SAN

tdemetriou

New Member
Oct 15, 2025
Hi All,

I am new to Proxmox.

Just set up a test server on an IBM SR650 running PVE v8.4.

It has 2 internal RAID arrays.

One is about 2TB and the other, about 450GB, is the installation drive.

Also installed a 2-port 16Gb Emulex FC HBA, connected to a pair of FC switches that connect to the FC SAN our VMware environment is housed in.

Proxmox recognizes the HBA:

root@test:~# lspci | grep -i fibre
06:00.0 Fibre Channel: Emulex Corporation LPe15000/LPe16000 Series 8Gb/16Gb Fibre Channel Adapter (rev 30)
06:00.1 Fibre Channel: Emulex Corporation LPe15000/LPe16000 Series 8Gb/16Gb Fibre Channel Adapter (rev 30)


How do I attach the storage?


Also, how do I get the system to recognize the other RAID array?
 
Hi @tdemetriou ,

PVE is based on a Debian userland with an Ubuntu-derived kernel, and block storage is handled by the Linux kernel. The process of connecting a SAN to a PVE host is therefore the same as for any other Debian/Ubuntu host, and the steps are probably listed in your SAN vendor's documentation.
At a high level, you will need to zone the switches, create LUNs on the SAN, and map the LUNs to the HBA WWNs. Once the kernel sees the disks/LUNs (i.e. they appear in lsblk output), you can proceed with the software part (multipath, LVM, PVE).
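For reference, here is a minimal sketch of the host-side checks (assuming the standard Linux sysfs layout for FC HBAs; the WWPNs printed by the first command are what the LUNs get mapped to on the SAN side):

Code:
# Show the WWPNs of the HBA ports (needed for zoning/LUN masking on the SAN)
cat /sys/class/fc_host/host*/port_name

# Once zoning and mapping are done, the LUNs should appear as block devices
lsblk
lsscsi    # from the lsscsi package; lists SCSI devices with vendor/model/LUN info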


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi @bbgeek17, the storage (SAN/switches) is currently supporting our VMware environment, so it is all set up (zoned too).

I can see the new adapters in PVE; I connected each port to one switch and aliased and zoned them on the switches.

I did add multipath, but that was actually before the system saw the adapters.

Do I need to re-add that?
 
The storage (SAN/switches) is currently supporting our VMware environment, so it is all set up (zoned too).
You do not want PVE to be mapped to the same LUNs and zones as your VMware. You need to create new LUNs, new zones, and new mappings.
I can see the new adapters in PVE; I connected each port to one switch and aliased and zoned them on the switches.
If the LUNs are properly mapped, you should see the disks in "lsblk" and "lsscsi" output. If you don't, the configuration is not complete. You cannot move to the next step until this is done.
I did add multipath, but that was actually before the system saw the adapters.
It will not change anything if you do not see disks in the above two outputs.

As I said, none of these steps are Proxmox-specific - this is basic FC SAN + Linux administration. I’m hesitant to provide specific steps without knowing the full picture, especially since it sounds like you’re working on a production SAN. A misstep here could result in data inaccessibility or loss.

If you have a storage administrator available or an active support contract with your SAN vendor, I strongly recommend involving them.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
How do I attach the storage?
Same as you would in VMware. Set up zoning (if you don't use a flat hierarchy) so the new host sees the storage.
Create virtual volumes and present them to your hosts (preferably with an ALUA host personality, if your storage supports it).
Configure multipath to consolidate the paths into one device name. Create a filesystem on the raw device and include it in PVE.
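As a rough sketch of the multipath piece on a Debian/PVE host (the package name is the Debian one; the defaults shown are generic, so follow your array vendor's recommended multipath.conf settings where they differ):

Code:
apt install multipath-tools

# Minimal /etc/multipath.conf -- vendor-specific "device" sections usually go here
cat <<'EOF' > /etc/multipath.conf
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
EOF

systemctl restart multipathd
multipath -ll    # each LUN should show up once (e.g. /dev/mapper/mpatha) with all paths listed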
 
Create a filesystem on the raw device and include it in PVE.
Hi @BD-Nets, you were on point up until this sentence. There are no built-in, or even recommended, cluster-aware filesystems for PVE. Based on all the information the OP has provided so far, they should not attempt to configure a cluster-aware filesystem; using LVM is the appropriate approach.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
OK, so I was able to zone and map to hosts on the storage SAN, which is our prod SAN, and created a separate volume for PVE...
I ran lsblk and am still not seeing the storage.
 
I ran lsblk and am still not seeing the storage.
Thank you for the update, @tdemetriou. There could be many reasons for this behavior:
  • Wrong cable
  • Wrong transceiver
  • Faulty port on the FC switch
  • Incorrect BIOS settings on the FC cards
  • Firmware issues requiring update
  • Missing or incorrect drivers for the FC cards on the client (PVE)
  • Misconfigured Zone/LUN on the SAN
Have you rebooted the PVE host since finishing the zoning? Have you checked dmesg for related messages? What about journalctl -b0?
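For example (assuming the Emulex lpfc driver, per the lspci output above), the link state and kernel messages can be checked with something like:

Code:
cat /sys/class/fc_host/host*/port_state   # should read "Online" once fabric login succeeds
cat /sys/class/fc_host/host*/speed
dmesg | grep -i lpfc                      # HBA driver, firmware and link messages
journalctl -b0 -k | grep -i -e lpfc -e scsi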

In 99% of cases, if the SAN is properly configured and connected to the host, the disks should simply appear on a Linux machine, which PVE essentially is.

Have you already located your storage vendor’s Linux Connectivity Guide?

There is no button in the PVE GUI to make disks visible to the Linux kernel if they aren’t detected. I’d recommend methodically rechecking all your infrastructure settings.

If you prefer not rebooting between your changes, you can try:
Code:
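# Trigger a SCSI rescan on each FC host adapter without rebooting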
for host in /sys/class/fc_host/host*; do
    echo "- - -" > /sys/class/scsi_host/${host##*/}/scan
done

or, if the sg3-utils package is installed:

rescan-scsi-bus.sh -a

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
OK, I rebooted the PVE host and now I see the storage.
Excellent, thank you for the update.
Continue your configuration with multipath, using your storage vendor's recommendations as a guide. Once that is done, you can configure LVM, or find a guide on configuring a 3rd-party clustered filesystem.

For LVM, you can use our guide starting at this point https://kb.blockbridge.com/technote...multipath-device-as-an-lvm-physical-volume-pv
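As a condensed sketch of those steps (the mpatha device name, the vg_san volume group and the san-lvm storage ID are placeholders; the linked guide covers the details and caveats):

Code:
# Put LVM on the multipath device
pvcreate /dev/mapper/mpatha
vgcreate vg_san /dev/mapper/mpatha

# Register the VG as PVE storage; mark it shared if multiple cluster nodes see the same LUN
pvesm add lvm san-lvm --vgname vg_san --shared 1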


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 