New installation connecting to an existing FC SAN

tdemetriou

New Member
Oct 15, 2025
Hi All,

I am new to Proxmox.

Just set up a test server on an IBM SR650 running VE v8.4.

It has 2 internal RAID arrays.

One is about 2TB; the other, about 450GB, is the installation drive.

Also installed an Emulex 2-port 16Gb FC HBA, connected to a pair of FC switches that connect to the FC SAN our VMware environment is housed in.

Proxmox recognizes the HBA

root@test:~# lspci | grep -i fibre
06:00.0 Fibre Channel: Emulex Corporation LPe15000/LPe16000 Series 8Gb/16Gb Fibre Channel Adapter (rev 30)
06:00.1 Fibre Channel: Emulex Corporation LPe15000/LPe16000 Series 8Gb/16Gb Fibre Channel Adapter (rev 30)


How do I attach the storage?


Also, how do I get the system to recognize the other RAID array?
 
Hi @tdemetriou ,

PVE is based on a Debian userland with an Ubuntu-derived kernel, and block storage is handled by the Linux kernel, so the process of connecting a SAN to a PVE host is the same as for any other Debian/Ubuntu host. The steps are probably listed in your SAN vendor's documentation.
At a high level, you will need to zone the switches, create LUNs on the SAN, and map the LUNs to the HBA WWNs. Once the kernel sees the disks/LUNs (i.e. they appear in lsblk output), you can proceed with the software part (multipath, LVM, PVE).
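
For orientation, here is a minimal sketch of the host-side checks and the eventual PVE step. The host numbers, device names, and the "san_vg"/"san-lvm" names below are placeholders, not values from your system:

root@test:~# cat /sys/class/fc_host/host*/port_name    # WWPNs to zone/map on the SAN side
root@test:~# cat /sys/class/fc_host/host*/port_state   # should report "Online" once cabled and zoned
root@test:~# lsblk                                     # mapped LUNs eventually show up here as sd* devices

# much later, once multipath is working (placeholder names):
root@test:~# pvcreate /dev/mapper/mpatha
root@test:~# vgcreate san_vg /dev/mapper/mpatha
root@test:~# pvesm add lvm san-lvm --vgname san_vg --shared 1

Shared thick LVM on top of multipath is the usual pattern for FC storage in PVE; it is cluster-safe, though it does not support snapshots.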


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
@bbgeek17 hi, the storage (SAN/switches) is currently supporting our VMware environment, so it is all set up (zoned too).

I can see the new adapters in PVE, as I connected each port to one switch and aliased and zoned them on the switches.

I did add multipath, but that was actually before the system saw the adapters.

Do I need to re-add it?
 
> the storage (SAN/switches) is currently supporting our VMware environment, so it is all set up (zoned too)
You do not want PVE mapped to the same LUNs and zones as your VMware hosts. You need to create new LUNs, new zones, and new mappings.
> I can see the new adapters in PVE as I connected each port to one switch and aliased and zoned them on the switches
If the LUNs are properly mapped, you should see the disks in "lsblk" and "lsscsi" output. If you don't, the configuration is not complete. You cannot move to the next step until this is done.
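
If nothing shows up after the mapping is in place, a rescan of the SCSI hosts usually helps (a sketch; host numbers vary, and lsscsi may need to be installed first):

root@test:~# apt install lsscsi
root@test:~# for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done
root@test:~# lsscsi    # new LUNs should list your SAN's vendor/model string
root@test:~# lsblk     # and appear as sd* block devices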
> I did add multipath but that was actually before the system saw the adapters
It will not change anything if you do not see disks in the above two outputs.
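
Once the disks are visible, you can verify that multipath picked them up (a sketch; multipath-tools is already installed in your case):

root@test:~# multipath -ll    # each LUN should appear once, with a path group per HBA port
root@test:~# multipath -r     # force a reload of the maps if the output looks stale

Installing multipath-tools before the LUNs existed is not a problem; multipathd generally picks up new devices as they appear, so there is nothing to re-add.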

As I said, none of these steps are Proxmox-specific - this is basic FC SAN + Linux administration. I’m hesitant to provide specific steps without knowing the full picture, especially since it sounds like you’re working on a production SAN. A misstep here could result in data inaccessibility or loss.

If you have a storage administrator available or an active support contract with your SAN vendor, I strongly recommend involving them.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 