Fibre Channel, shared storage, how?

Znuff

Active Member
Nov 9, 2017
Hello,

I'm looking into a new setup using Proxmox.

I will have 4 x Proxmox Nodes, each with a FC HBA.

These will be connected to another server with an FC HBA in target mode, running FreeBSD or Linux (not sure yet), with 2 zpools (we will have separate storage for SSDs and HDDs).

What is not clear to me is which shared storage type I should use so that I can enable High Availability.

I don't have much experience with SANs, so input on both sides (including the ZFS server side) is appreciated.

Any tips?
 
Unfortunately I don't have a budget for a specialized SAN, so I will have to use LIO or FreeBSD's tools to do that.

I have read the wiki regarding storage, but it's still not clear to me.

It seems like the best candidate would be ZFS over iSCSI, but Fibre Channel carries SCSI, not iSCSI, so I'm still unclear about that :-/
 
Hi,
normally, disks connected via FC are visible as ordinary local disks on all systems that are allowed to see them (the WWNs must be enabled/zoned on the switches and on the devices/LUNs).

With two FC connections they show up twice (in this case, use multipath).

Simply create a volume group (LVM) on your shared FC LUN and use it on all systems.
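
Roughly, and assuming the shared LUN ends up as /dev/mapper/mpatha on every node (device, VG and storage names below are only examples), it boils down to:

# on ONE node only: put LVM on the shared, multipathed LUN
pvcreate /dev/mapper/mpatha
vgcreate vg_san /dev/mapper/mpatha

# in /etc/pve/storage.cfg (replicated to the whole cluster), mark the VG as shared
lvm: san-lvm
        vgname vg_san
        content images
        shared 1

As far as I know, Proxmox serialises LV creation/removal on a shared LVM storage through its own cluster-wide lock, so the nodes do not need any extra locking of their own.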

Udo
 
Thank you.

So there is no need to worry about simultaneous access on the same PVs/VGs when using LVM?

I am leaning more toward actually doing ZFS (because we're much deeper into it than LVM), but the docs/wiki only mention ZFS over iSCSI (not plain SCSI, which is basically what FC is). Could this be adapted somehow?

i.e. using, on all initiators, the same zpool that is created on and resides on the target?
 
Thank you very much. This will at least be my fallback plan if I don't figure out how to get ZFS working in a similar way.
 
This is probably a stupid idea, but:

Could I use ZFS on the target, and then add LVM on top of a zvol, and export the zvol as a LUN for the initiator(s)?
 
It's not a stupid idea.

Since you will want to aggregate your disks on the storage head anyway, you most certainly can create zpool(s) with zvols exported as SCSI targets exposed via FC. Since the SCSI targets will appear as normal block devices, MPIO will detect the LUNs normally and you can use the multipath devices with LVM for SAN functionality.
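
A minimal sketch of the storage-head side, assuming one pool named tank and sparse zvols sized for illustration only:

# aggregate the disks into a pool on the storage head (device names are placeholders)
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# one zvol per tier, each to be exported later as a single LUN
zfs create -s -V 2T tank/ssd-lun
zfs create -s -V 8T tank/hdd-lun

The export of those zvols as FC/FCoE LUNs would then be done with LIO's targetcli (or FreeBSD's ctld), and the Proxmox nodes would put LVM on top of the resulting multipath devices as described above.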

I have not done this in the real world and haven't touched FC in years, so this is all theoretical. Good luck :)
 
ZFS and HA are not easy to maintain, and there is no out-of-the-box HA solution (there are commercial ones, but you said everything should be free software). Use LVM with multipath, which is the simplest setup, is supported, and works out of the box.
 
ZFS and HA are not easy to maintain, and there is no out-of-the-box HA solution (there are commercial ones, but you said everything should be free software). Use LVM with multipath, which is the simplest setup, is supported, and works out of the box.

Could you be a bit more specific on Multipath?

I'm still planning on doing it on top of ZFS (because I want the daily snapshots), but it's not clear to me how multipath works and the documentation from Red Hat is sort of lacking.
 
Could you elaborate on how you want to squeeze in ZFS and then do multipathing? Are you talking about being the FC counterpart of an iSCSI target? If there is a way to be a storage provider for FC, this could work, but I do not know how to achieve it. I only "consume" FC-based LUNs that are exported from a storage box as shared block storage, so there is only the HA way of LVM or GFS (not GlusterFS).

The general multipath documentation in RHEL is pretty good, as is the rest of their documentation. You only need multipathing if your storage shows up multiple times and the paths need to be combined so it is accessible under one name and can therefore tolerate path failures (normally achieved by multiple FC adapters going to multiple switches going to multiple ports on your FC-based storage). An explicit multipath configuration is normally only needed if you have special parameters for things like ALUA, timeouts etc., or if you want to name the multipath device anything other than mpathXX (XX being any number).
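
For illustration, a trimmed /etc/multipath.conf along those lines; the WWID and alias are made up, you would take the WWID that `multipath -ll` reports for your LUN:

defaults {
        user_friendly_names yes
        find_multipaths yes
}

multipaths {
        multipath {
                wwid  3600a0b800012345600001234abcd0000
                alias san-ssd
        }
}

After that the LUN is addressed as /dev/mapper/san-ssd on every node, no matter which path it arrived over.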
 
Meanwhile my plan has been revised a bit. I will have to do FCoE instead of FC (not a big difference), due to not being able to score cheap FC cards. Yes, FCoE was cheaper, for some reason.

Anyway,

I'm planning on creating two different ZVOLs on the zpool, one for SSD storage and one for HDD storage.

I want to export each ZVOL as a LUN to all the nodes accessing the storage, then create the PV for LVM on top of that (and then VG, LV, etc.).

And yes, you can configure a Linux (or FreeBSD) box in target mode using LIO: http://linux-iscsi.org/wiki/LIO (or the FreeBSD equivalent, setting ISP_TARGET_MODE).
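
For the LIO side, a rough targetcli sketch, assuming a QLogic HBA in target mode (for FCoE the tcm_fc fabric module takes the place of qla2xxx); all WWNs and names below are placeholders, use the ones your ports actually report:

# expose the zvol as a block backstore
targetcli /backstores/block create name=ssd-lun dev=/dev/zvol/tank/ssd-lun

# create the target on the local HBA port and attach the LUN
targetcli /qla2xxx create 21:00:00:24:ff:12:34:56
targetcli /qla2xxx/21:00:00:24:ff:12:34:56/luns create /backstores/block/ssd-lun

# allow the initiator WWPNs of the Proxmox nodes
targetcli /qla2xxx/21:00:00:24:ff:12:34:56/acls create 21:00:00:24:ff:aa:bb:01

# persist the configuration
targetcli saveconfig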

I'm not sure how Multipath fits in this.
 
Meanwhile my plan has been revised a bit. I will have to do FCoE instead of FC (not a big difference), due to not being able to score cheap FC cards. Yes, FCoE was cheaper, for some reason.

You scored 10 GbE for less money than 8 Gb FC cards? WHERE?? I also need to buy some of them.

I'm not sure how Multipath fits in this.

To be able to use this, you need multiple paths; otherwise it does not work. It depends heavily on the target UUIDs presented to the other hosts. I cannot help with FCoE, because we tried it once and it was a total whack job on the driver side of the card we used. We switched back to FC with a dedicated storage network and never looked back.

And yes, you can configure a Linux (or FreeBSD) box in target mode using LIO: http://linux-iscsi.org/wiki/LIO (or the FreeBSD equivalent, setting ISP_TARGET_MODE).

Oh yes, that one. Have you tried it yet? I tried a few years back and was not able to get it to work.

Do you plan to have some HA in that setup? The only way it could work is to use HA software like Pacemaker, Heartbeat or similar, and attach the disks to two hosts; otherwise you'll have a single point of failure. Your target UUID via FCoE should be the same on both nodes so that you can switch over directly, yet I do not know if there is a safe failover method available in such a setup. ZFS is not made for this kind of setup.
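
To give an idea of what "HA software like Pacemaker" would mean here, a very rough sketch with pcs, assuming two storage heads seeing the same disks, the ZFS resource agent from the resource-agents package, and the systemd "target" unit restoring the LIO configuration (all of that is distro-dependent, and the names are made up):

# the pool may only ever be imported on one head at a time;
# the ZFS agent exports/imports it on failover
pcs resource create tank-pool ocf:heartbeat:ZFS pool=tank

# bring the LIO target config up on whichever head owns the pool
pcs resource create lio-config systemd:target
pcs constraint colocation add lio-config with tank-pool INFINITY
pcs constraint order tank-pool then lio-config

Whether the initiators survive such a failover cleanly is exactly the open question; this only sketches the mechanics.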
 
You scored 10 GbE for less money than 8 Gb FC cards? WHERE?? I also need to buy some of them.
You can find X540-T2 cards on eBay for $135/ea from Chinese sellers.

To be able to use this, you need multiple paths; otherwise it does not work. It depends heavily on the target UUIDs presented to the other hosts. I cannot help with FCoE, because we tried it once and it was a total whack job on the driver side of the card we used. We switched back to FC with a dedicated storage network and never looked back.



Oh yes, that one. Have you tried it yet? I tried a few years back and was not able to get it to work.

Do you plan to have some HA in that setup? The only way it could work is to use HA software like Pacemaker, Heartbeat or similar, and attach the disks to two hosts; otherwise you'll have a single point of failure. Your target UUID via FCoE should be the same on both nodes so that you can switch over directly, yet I do not know if there is a safe failover method available in such a setup. ZFS is not made for this kind of setup.

I haven't tried it, yet.

This will be my first time doing this, so I hope I'm not gonna run into any issues.

Yes, I am planning to have HA, though for the moment the single point of failure will indeed be the storage, depending on how the client wants to proceed.

But why would I need Multipath for this to work? Wouldn't LVM do the LV-locking?
 
Meanwhile my plan has been revised a bit. I will have to do FCoE instead of FC (not a big difference), due to not being able to score cheap FC cards. Yes, FCoE was cheaper, for some reason.

Why bother with FC at all, then? The only reason to even know that FCoE exists is if you're trying to bridge your old FC stuff into the 21st century. If you're building a new cluster and you're using Ethernet, avoid FC altogether and use iSCSI.

Better yet, if your servers can be used this way, redistribute your disks between your 5 nodes and set up a Ceph cluster.
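
For reference, bootstrapping Ceph on the Proxmox nodes is only a handful of commands; this is a rough sketch, and the network and device names are placeholders:

# on every node
pveceph install

# once, to initialise the cluster-wide Ceph config
pveceph init --network 10.10.10.0/24

# a monitor on (at least) three nodes, then one OSD per disk you hand to Ceph
pveceph createmon
pveceph createosd /dev/sdb

After that you create a pool, add it as RBD storage, and you get shared, replicated storage without any SAN at all.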

You scored 10 GbE for less money than 8 Gb FC cards? WHERE?? I also need to buy some of them.

HBAs are cheap. Mellanox ConnectX-2 based cards can be had for $10-15 (as for where: eBay is full of them). It really comes down to what switches you're planning to use; if you have existing FC switches then it's a no-brainer, but if you need to buy new switches I wouldn't spend a cent on FC.
 
