Shared Storage with FC-SAN

valentin_ops

Feb 21, 2024
Hello,

I'm trying to get the best storage possible using a Fibre Channel SAN.

With multipath, I'm able to get the SAN LUN to appear as a disk on my nodes, and I'm able to create a PV/VG on it.

So I now have shared LVM storage on my SAN, available on all my nodes.
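For reference, the setup is roughly the following (the multipath alias, VG name, and storage name are placeholders, not my exact configuration):

  # the SAN LUN shows up as a single multipath device, e.g. /dev/mapper/mpatha
  multipath -ll

  # create the PV/VG on one node
  pvcreate /dev/mapper/mpatha
  vgcreate vg_san /dev/mapper/mpatha

  # register it cluster-wide as shared (thick) LVM
  pvesm add lvm san-lvm --vgname vg_san --shared 1 --content images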

The only issue I have is with snapshots: shared (thick) LVM doesn't support them.

We tried a ZFS pool on top of the SAN, but because it appears as non-shared storage on the Proxmox side, it causes migration issues (migrations take as long as with local storage).

Any idea how to get shared, snapshot-capable storage on Proxmox, based on an FC SAN?

Thank you
 
Any idea how to get shared, snapshot-capable storage on Proxmox, based on an FC SAN?
It's unfortunately impossible with LVM and there is no other supported solution available.

Other threads about this:
https://forum.proxmox.com/threads/fibre-channel-san-with-live-snapshot.41205/
https://forum.proxmox.com/threads/proxmox-cluster-with-san-storage.40756/
https://forum.proxmox.com/threads/fibre-channel-shared-storage-how.37986/

I presented a solution that we use in this thread. It's not pretty, but it works:

Create an HA VM on top of LVM with ZFS inside, export its datasets back to the same cluster as ZFS-over-iSCSI storage, and you'll have snapshot-capable shared storage in your system (a rough sketch follows below). We use this for tests if we need to:
  • host VMs on LVM
  • live-migrate a VM to the other storage when we need snapshot capability
  • snapshot the VM, do what we need to do with it
  • move it back to LVM
This works for us, and we also use PBS, so we have very fast backup and restore when we need it.
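
A rough sketch of the target-VM side, assuming a Debian-based VM with LIO as the iSCSI target and the passed-through LUN showing up as /dev/sdb (IQN, pool name, and the portal IP 192.168.10.50 are examples):

  # inside the storage VM
  apt install zfsutils-linux targetcli-fb
  zpool create tank /dev/sdb

  # create an empty iSCSI target; the PVE plugin later creates and exports
  # the per-VM zvols itself over SSH
  targetcli /iscsi create iqn.2024-02.local.storage:tank

  # on one PVE node: the plugin needs key-based root SSH to the VM
  mkdir -p /etc/pve/priv/zfs
  ssh-keygen -f /etc/pve/priv/zfs/192.168.10.50_id_rsa
  ssh-copy-id -i /etc/pve/priv/zfs/192.168.10.50_id_rsa.pub root@192.168.10.50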
 
Hello.
I'm facing the same question. We have a five-node Proxmox cluster and are considering adopting central storage. We have two options from two vendors: the first uses a Zadara storage system with iSCSI, and the second requires installing HBA hardware in each of my hosts and then creating FC-based storage.
@LnxBil, regarding the use of FC, you presented a solution that consists of using ZFS-over-iSCSI and a local VM. Could you share how to configure this VM to share its contents with Proxmox using ZFS-over-iSCSI, and the final configuration on the Proxmox side? I will ask the storage vendors whether any of the equipment supports this ZFS-over-iSCSI protocol.
 
the first uses a Zadara storage system with iSCSI, and the second requires installing HBA hardware in each of my hosts and then creating FC-based storage.
Neither of these particular options will give you snapshot support. Thin provisioning might be somewhat possible with Zadara, but not in a way that can be controlled by Proxmox.
regarding the use of FC, you presented a solution that consists of using ZFS-over-iSCSI and a local VM. Could you share how to configure this VM to share its contents with Proxmox using ZFS-over-iSCSI, and the final configuration on the Proxmox side?
You simply need to present the VM with raw LUNs from either FC or iSCSI. This could be a direct pass-through or via the hypervisor.
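For example, assuming the storage VM has VMID 100 and the LUN appears as a multipath device (the VMID and device paths are placeholders):

  qm set 100 -scsi1 /dev/mapper/mpathb
  # or, via a stable path:
  qm set 100 -scsi1 /dev/disk/by-id/dm-uuid-mpath-<wwid>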

I will ask the storage vendors whether any of the equipment supports this ZFS-over-iSCSI protocol.
I don't believe Zadara uses ZFS, so you won't be able to use ZFS-over-iSCSI directly. This is why you would be deploying that intermediate VM: it will provide that missing ZFS layer, in addition to SSH-driven iSCSI management.
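
The matching PVE-side entry in /etc/pve/storage.cfg would look roughly like this (storage name, portal, target IQN, and pool are example values; LIO assumed as the target provider inside the VM):

  zfs: vm-zfs
          blocksize 4k
          portal 192.168.10.50
          target iqn.2024-02.local.storage:tank
          pool tank
          iscsiprovider LIO
          lio_tpg tpg1
          sparse 1
          content images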


 
In addition to @bbgeek17's good answer, I do not recommend running the ZFS-in-between storage in a production environment with FC HA storage. You will introduce a single point of failure (SPOF) there. If you know about it and have a failsafe in place, it is a possible way to go.
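
One possible failsafe, if you accept the trade-off, is to register the intermediate VM as an HA resource so it restarts on another node automatically (VMID 100 is an example; the storage is still unavailable while the VM comes back up):

  ha-manager add vm:100 --state started --max_restart 2 --max_relocate 1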
 
Thanks a lot for your replies! I will use iSCSI, which is stable and well documented.
Sure, if it works better for you, go for it. At the end of the day, they both provide block-level storage and have identical limitations as far as PVE is concerned.


 
No snapshots available though...
Hi @troublestarter, you are correct. iSCSI is a block transfer protocol; it has no notion of what a snapshot might be. Snapshots are implemented by each vendor behind the iSCSI target, and each implementation depends on the block layer or filesystem that particular vendor is using. Essentially, each implementation requires its own integration. That integration has to be driven by the storage vendor, so if you want snapshots, you need to pick a vendor that provides such an integration.


 
Thanks @bbgeek17 for the reply.

Yes, so at the enterprise level:

- Can't use iSCSI
- Can't use NFS because of the lack of performance
- Can't use anything else for shared storage...
Only Ceph, with very expensive hardware because of NVMe SSDs and many nodes...
 
What I am missing from these discussions is another option: ask the storage vendor if they have a Proxmox VE storage plugin for their system, as it is possible to have 3rd-party storage plugins, not just the ones we provide. If enough customers ask for it, they might be inclined to provide one for seamless integration.
 
What I am missing from these discussions is another option: ask the storage vendor if they have a Proxmox VE storage plugin for their system, as it is possible to have 3rd-party storage plugins, not just the ones we provide. If enough customers ask for it, they might be inclined to provide one for seamless integration.
There are not a lot of vendors, to my knowledge, that support Proxmox. If you have any advice on this...
But I see that some other hypervisors don't have those limitations.
So it's kind of a pain to get something enterprise-level. I like Proxmox very much, but it is hard to get a good shared-storage setup.
 
I like Proxmox very much, but it is hard to get a good shared-storage setup.
Yes, but that is the limitation of the hardware you have. It's not PVE's fault that your storage vendor does not support PVE. If you go with a supported vendor, you will not have those restrictions.

Can't use iSCSI
Depends on the implementation. We use ZFS-over-iSCSI and it is superior to anything else I've ever seen on other hypervisors.

Only Ceph, with very expensive hardware because of NVMe SSDs and many nodes...
Three nodes are sufficient (that is also the smallest supported PVE HA cluster), and the TCO for Ceph is less than anything you would buy with FC (including the SAN). Hyperconverged storage is the future because it is in fact cheap and scales horizontally, but yes, it may be hard to implement with your currently available on-premises hardware.
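
For scale, a minimal hyperconverged sketch on a 3-node cluster looks roughly like this (the network and device names are examples):

  pveceph install                              # on every node
  pveceph init --network 10.10.10.0/24         # once, on the first node
  pveceph mon create                           # on (at least) three nodes
  pveceph osd create /dev/nvme0n1              # per NVMe disk, per node
  pveceph pool create vmpool --add_storages    # once; registers the RBD storage in PVE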
 
Hi,

but implementing qcow2 on shared LV(M), with snapshot (and possibly thin provisioning) support, would be easier and less error-prone than legacy cluster filesystems like OCFS2/GFS2, and of course storage-vendor neutral! Only one caveat: it's (currently) not implemented/supported by Proxmox.
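
The raw mechanics already exist in qemu-img; a quick, hand-driven and unsupported sketch (VG/LV names and sizes are examples, and the image must not be in use while snapshotting):

  lvcreate -L 110G -n vm-101-disk-0 vg_san           # some headroom above the virtual size
  qemu-img create -f qcow2 /dev/vg_san/vm-101-disk-0 100G
  qemu-img snapshot -c before-change /dev/vg_san/vm-101-disk-0
  qemu-img snapshot -l /dev/vg_san/vm-101-disk-0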
 
but implementing qcow2 on LV(M) with snapshot (and possibly thin provisioning) support would be easier
I am curious about this; can you provide some reference documentation on how to place qcow2 on a raw disk while maintaining the qcow2 format and snapshot functionality? I didn't find anything after a quick glance.

Thank you!


 
I am curious about this; can you provide some reference documentation on how to place qcow2 on a raw disk,
This keeps coming up all the time, and I'm usually just ready to kneejerk-answer "no you can't, blah blah blah," but recently I got to thinking about this more outside the box.

Since most (many) iSCSI-backed storage solutions are thin provisioned at their own back-end level, this should allow LVM-thick deployments without too much wasted space on the backing store. What if, through careful administration, you create one LV PER VIRTUAL DISK, partition it, and write ONE qcow2 file to it? Performance will likely be poor (CoW on CoW), but it should remain coherent, as only one host can have the disk open by design. That would facilitate thin provisioning on the actual backing store, and snapshots via qcow2... it may even be made performant through careful experimentation with block size, but I don't know that I'd want to expend the effort...

--edit-- You'd need a way to force a filesystem reload on the target node on migration. Would need some work.
 
What if, through careful administration, you create one LV PER VIRTUAL DISK
This is exactly what we did for our first Proxmox customer: VM disk <> BB disk, although that customer used backend (Blockbridge) snapshots. Essentially, each VM was supplied with a raw iSCSI disk and there was no LVM/qcow2 intermediary. The disk was multi-host attached, with multipath on top of it.

As you can imagine, this hardly scales to hundreds of disks, let alone thousands.
Will these disks be hung off the same iSCSI target? What happens on rescan or add/remove of a LUN?
Can the storage system handle multiple iSCSI targets to isolate the LUNs? How many? Does it really scale and is it fully dynamic?
In short, completely dependent on the storage vendor.

you create one LV PER VIRTUAL DISK, partition it, and write ONE qcow2 file to it?
But that is the question, which admittedly I have spent very little time on. How do you write the qcow2 format directly to a raw disk and keep the qcow2 addressing? LVM is essentially a raw disk for this purpose. @alma21 said it was easy; how? Do you really retain snapshot capability when there is no underlying filesystem?
as only one host can have the disk open by design?
This has to be carefully fenced. In standard iSCSI/FC setups, the underlying raw disk is attached to all servers in the cluster simultaneously. Keeping LVM fenced off is hard enough; now you also have open file locks above it that need to be released.
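
For reference, the per-node activation that PVE already does for shared thick LVM is roughly the following under the hood (VG/LV names are examples):

  lvchange -an vg_san/vm-101-disk-0    # source node: deactivate after the VM releases it
  lvchange -ay vg_san/vm-101-disk-0    # target node: activate before the VM starts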

It's software; anything is possible. But nothing comes for free with no effort.


 
