Fibre Channel and Shared Storage - Snapshot supported (HA enabled)

berkaybulut

Member
Feb 8, 2023
I will soon be purchasing an IBM FlashSystem storage array for my 5-node Proxmox cluster. I have a few questions.

1 - ) I will provide 2 x 16G FC connections from the storage to each node. How do I describe this to Proxmox?
2 - ) I have to have HA mode active. Are there any problems with live streaming over the FC link?
3 - ) According to my research I should use LVM to use the FC connections as shared storage. But this does not support snapshots. How can I work around this?

Thanks in advance for your responses.
 
1 - ) I will provide 2 x 16G FC connections from the storage to each node. How do I describe this to Proxmox?
You will use multipath. Although this article is geared towards iSCSI multipath, the concepts are the same: https://pve.proxmox.com/wiki/ISCSI_Multipath. The vendor (IBM) also has instructions on the appropriate setup of their storage system with Debian.
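For reference, the usual shape of such a setup looks roughly like this - the WWID, device names and alias below are placeholders, and IBM's host-attachment documentation should be consulted for the recommended multipath settings:

```
# On each PVE node: install the multipath tools
apt install multipath-tools

# Find the WWID of the FC LUN (replace sdb with one of its path devices)
/lib/udev/scsi_id -g -u -d /dev/sdb

# /etc/multipath.conf -- minimal sketch; the WWID here is a placeholder
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        wwid  3600507680c800000aaaabbbbccccdddd
        alias flash_lun0
    }
}

# Reload and verify: both FC paths should appear under one device
systemctl restart multipathd
multipath -ll
# The LUN is then available as /dev/mapper/flash_lun0 on every node
```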
I have to have HA mode active. Are there any problems with live streaming over the FC link?
Do you mean PVE HA? If so - then yes, live failover will work fine. I am not sure what "live streaming" means in the context of PVE/HA/storage.
According to my research I should use LVM to use the FC connections as shared storage. But this does not support snapshots.
You are correct; the PVE-supported option is non-thin (thick) LVM, and there is no snapshot support in such a configuration.
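As a sketch of that supported (snapshot-less) configuration - the VG and storage names here are made up - you create a regular volume group on the multipath device once, and every node sees it via a shared LVM entry:

```
# Run once, on a single node (the VG becomes visible cluster-wide)
pvcreate /dev/mapper/flash_lun0
vgcreate vg_flash /dev/mapper/flash_lun0

# /etc/pve/storage.cfg (cluster-wide file) -- note the shared flag
lvm: flash-lvm
        vgname vg_flash
        content images,rootdir
        shared 1
```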
How can I work around this?
Best option - buy storage that is Proxmox-aware/compatible/supported.
A DIY option is to self-configure a cluster-aware filesystem (e.g. OCFS2) and use it as directory storage with QCOW2 images.
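A rough, unsupported sketch of that DIY route - package names and the mount point are illustrative, and OCFS2 needs its own cluster definition in /etc/ocfs2/cluster.conf on every node:

```
# On every node: install tools and bring up the O2CB cluster stack
apt install ocfs2-tools
# ...populate /etc/ocfs2/cluster.conf with all nodes, then:
systemctl enable --now o2cb

# Format once, from a single node
mkfs.ocfs2 -L pve-shared /dev/mapper/flash_lun0

# Mount on every node (and add the entry to /etc/fstab)
mkdir -p /mnt/ocfs2
mount -t ocfs2 /dev/mapper/flash_lun0 /mnt/ocfs2

# /etc/pve/storage.cfg -- directory storage holding qcow2 images,
# which is what brings back snapshot support
dir: shared-ocfs2
        path /mnt/ocfs2
        content images
        shared 1
```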

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Is there a way I could add snapshot support? I'm considering using LVM with iSCSI multipath, because I have to use HA.
 
Is there a way I could add snapshot support? I'm considering using LVM with iSCSI multipath, because I have to use HA.
If the equation is: FlashSystem FC storage + Multipath + multi-host access (shared) + HA + XXX = snapshots, then the only reasonable solution for XXX, at this point in time, is OCFS2 or similar.

Or you can replace FlashSystem with Blockbridge, where Blockbridge + Multipath + Shared + HA + Snapshots + Clones + Thin = Profit.

Good luck.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
As storage, we gave up on IBM for certain reasons. We now have a Dell Unity 600F. Do you have an equation for this?
 
AFAIK iSCSI with ZFS-over-iSCSI also has snapshot support, but it lacks multipathing.
Isn't ZFS-over-iSCSI basically ZFS on the storage host, where PVE manages the ZFS volumes and snapshots via SSH and the volumes are exported as iSCSI LUNs?

Not ZFS on top of iSCSI LUNs?
 
As storage, we gave up on IBM for certain reasons. We now have a Dell Unity 600F. Do you have an equation for this?

IBM=DELL

Isn't ZFS over iSCSI basically ZFS on the storage host where PVE manages the ZFS volumes and snapshots via SSH and exports them as iSCSI LUNs?
This is exactly correct. One would have to attach the IBM/Dell/etc. to a supported Linux host via iSCSI or FC, then partition/format the LUNs into ZFS pools; the ZFS/iSCSI PVE storage plugin will then facilitate export of the ZFS volumes carved out from the pool to PVE via iSCSI.
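For illustration, a ZFS-over-iSCSI entry in /etc/pve/storage.cfg looks roughly like this - the portal address, target IQN and pool name are placeholders, and the iscsiprovider must match whatever iSCSI target software runs on the storage host:

```
zfs: tank-iscsi
        portal 192.0.2.10
        target iqn.2003-01.org.example:tank
        pool tank
        iscsiprovider LIO
        sparse 1
        content images
```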

The HA portion of the ZFS and iSCSI setup is left to the implementer.

One may also pass the LUNs through to a VM running in the PVE cluster, so the HA will be somewhat offloaded to PVE. It's up to the admin to weigh benefits vs. complexity.

IMHO, in any semblance of production, ZFS/iSCSI makes sense only with something like HA TrueNAS, where there is no extra intermediary. There is an unofficial fork of the ZFS/iSCSI plugin on GitHub that is adapted for this.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
One may also pass the LUNs through to a VM running in the PVE cluster, so the HA will be somewhat offloaded to PVE. It's up to the admin to weigh benefits vs. complexity.
Yes, this is what we do.

IMHO, in any semblance of production, ZFS/iSCSI makes sense only with something like HA TrueNAS, where there is no extra intermediary. There is an unofficial fork of the ZFS/iSCSI plugin on GitHub that is adapted for this.
Unfortunately, snapshot support only exists in dedicated HA shared-storage solutions like the one @bbgeek17's company offers, and this pains me a lot. The "old enterprise way" with centralized HA storage unfortunately does not work well with PVE (besides NFS, of course).
 
Unfortunately, snapshot support only exists in dedicated HA shared-storage solutions like the one @bbgeek17's company offers, and this pains me a lot. The "old enterprise way" with centralized HA storage unfortunately does not work well with PVE (besides NFS, of course).
Stable, production-ready support for any hypervisor takes time, effort and money. I can say that we made, and continue to make, a significant investment in supporting Proxmox via a dedicated plugin to ensure stable functionality.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
HA mode with FC usually works well, but performance depends on your network and storage speed, so keep an eye on latency. As for LVM and snapshots: it's tricky, since LVM on FC doesn't do snapshots natively; some people layer a backup or snapshot tool on top.
 
I’ve been running snapshots on shared storage over Fibre Channel without issues, but you do need to make sure your storage supports safe concurrent access and proper locking from all nodes.
Could you kindly explain your setup - what exactly are you using? That would be a great help, thanks.
 
I would love to see a shared-storage solution over FC that supports thin provisioning and snapshots in Proxmox.
 
ZFS-over-iSCSI is independent of multipathing. Both work in parallel if configured.
ZFS-over-iSCSI is not configured in the OS, so there is no multipath support via multipathd available. I don't know if the QEMU counterpart has multipath support nowadays; it did not in the past.
 
Multipath here refers to the iSCSI part of ZFS-over-iSCSI. If you configure it right, the iSCSI portal automatically ships a second or third IP.
This has nothing to do with a multipath daemon or QEMU. Did you fully understand ZFS-over-iSCSI?
A 2-node ZFS-over-iSCSI setup is possible with RSF-1, if anyone is interested.
 
Did you fully understand ZFS-over-iSCSI?
I've been using it for years, so I would assume yes, but there are always ways to improve, so please share your view on this.

If you configure it right, the iSCSI portal automatically ships a second or third IP.
This has nothing to do with a multipath daemon or QEMU.
The portal part is clear, but that does not mean you get bandwidth aggregation or even path failover on the initiator. According to the documentation, QEMU is not able to log in to multiple portals, only a single host, so ZFS-over-iSCSI cannot use multiple portals the way a multipathed host would. I don't see how that would work multipathed, or even fail over, unless you have a highly available IP as the portal.
 