[TUTORIAL] How to Configure Fibre Channel SAN Storage with Multipath and High Availability on Proxmox VE 9

If you have any advice or tips, they are welcome.
Don't use images to display text from the console. If you use text, format it properly (indentation).

You should also add text and a link to the storage vendor's multipath configuration, which is in their documentation about the storage. No one knows from your blog post where you got the specific configuration you mention. If the vendor cannot provide it, use a vendor that can. All big players do, and you will only have support if you use the vendor-provided settings.
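For context, such vendor settings typically go into a devices section of /etc/multipath.conf. The entry below is only a structural sketch; every vendor/product string and option value is a placeholder and must be replaced with what your array vendor documents:

Code:
devices {
        device {
                # PLACEHOLDER values - replace each line with the
                # settings from your storage vendor's multipath docs
                vendor                  "VENDOR"
                product                 "PRODUCT"
                path_grouping_policy    "group_by_prio"
                path_checker            "tur"
                no_path_retry           12
        }
}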

Don't blacklist block devices like sda or sdb; blacklist either by wwid or by storage device, for example like this if you use an HP-branded RAID controller (or LSI, or whatever):

Code:
blacklist {
        device {
                # match all local LUNs from the HP RAID controller
                vendor  "HP"
                product ".*"
        }
}

Another approach often used is to blacklist everything and create blacklist exceptions for the WWIDs you use.
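A minimal sketch of that blacklist-all approach in /etc/multipath.conf (the WWID shown is a made-up placeholder; substitute the WWIDs of your own LUNs, e.g. as reported by multipath -ll):

Code:
blacklist {
        # blacklist everything by default
        wwid ".*"
}

blacklist_exceptions {
        # allow only the SAN LUNs you actually use
        # (placeholder WWID - substitute your own)
        wwid "3600000000000000000000000000000ab"
}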

For some reason, the second node's shared storage "dorado-40TB" was "unknown" and in the storage tab was "Active – NO", but a reboot of pve2 solved the issue. I read some posts about this, and it is a known issue.
This is not a known issue; it is the result of not reading the documentation and not understanding what you did and need to do. You just added the disks and did not re-read the LVM configuration with pvscan and probably vgscan. This should be reflected in your blog post itself, because the target behind the link could vanish, and then no one would know how you fixed it.
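In practice, after presenting a new LUN, a rescan sequence along these lines avoids the reboot (a sketch: rescan-scsi-bus.sh from the sg3-utils package is my assumption here, while the pvscan/vgscan part is what is meant above):

Code:
rescan-scsi-bus.sh        # detect the new LUNs on the SCSI bus
multipath -r              # reload the multipath maps
pvscan                    # re-read LVM physical volumes
vgscan                    # re-read LVM volume groups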
 
Hi, will this setup let me use snapshots while the VMs are running, and let me migrate them from one node to another?
Snapshot support sits above the FC/multipath layers. It requires LVM to be placed on top of the multipath device, as well as the snapshot-as-volume-chain attribute set on the storage pool. This attribute is covered on the referenced page, although snapshot support is not the primary focus of the article.
Migration is a basic primitive of the system and is a given in a properly configured cluster.
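For illustration, a shared LVM entry in /etc/pve/storage.cfg with that attribute could look like this (storage name and volume group are placeholders):

Code:
lvm: san-lvm
        vgname vg_san
        content images
        shared 1
        snapshot-as-volume-chain 1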


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi,
if I want to use my SAN that way, do I need a cluster-aware filesystem like GFS2 / OCFS2 (not supported by Proxmox, as far as I know) or LVM?
Currently I am using ZFS and would like to keep that.
 
Hi @bbgeek17
I meant using a shared device across all Proxmox servers; currently I am using separate LUNs for my Proxmox servers (not shared).

Thanks for your answer.
 
LVM is a PVE-integrated way to use FC as shared storage. You can read this article to get a high-level understanding of the components involved:
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/

Although it references iSCSI as the underlying storage, the concepts are the same.
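As a rough sketch under those same concepts, layering shared LVM on top of a multipath FC device comes down to something like this (device path and names are placeholders; create the PV and VG on one node only):

Code:
# on ONE node: initialize the multipath device for LVM
pvcreate /dev/mapper/3600000000000000000000000000000ab
vgcreate vg_san /dev/mapper/3600000000000000000000000000000ab

# register the VG as shared storage for the whole cluster
pvesm add lvm san-lvm --vgname vg_san --shared 1 --content images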

You can also use one of the available clustered file systems; however, they are not integrated into PVE. Installing, configuring, and maintaining the CFS will be on you. There are multiple guides available from third parties on how to do it.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox