Fibre Channel Shared Storage Support

megabyte927

New Member
May 14, 2025
Hi All,

I was wondering if there has been any progress/change over the last 12 months that would allow getting Proxmox to work with Fibre Channel shared storage and snapshots. There are a few articles around, but the newest one I could find was 12 months old.

Thank you all.

M.
 
Hi @bbgeek17, thanks for your response.

I find that the lack of snapshots is a concern. I have read that backups are still possible - can someone explain how backups are possible without snapshots? I was playing with the Veeam Proxmox capability and it was able to back up a VM that I had on an LVM volume, so clearly backups work without snapshots.

Do you know if you can live migrate machines between LVM shared storage and a local LVM-thin disk? If so, this might allow you to snapshot (you move the VM from shared storage to local, snapshot, and then, when you have finished and cleared the snapshot, move it back?).

I have seen various articles around where people use other filesystems for shared storage which are not specifically supported by Proxmox. OCFS was an example I saw.

Is anyone using an unsupported cluster file system with Proxmox, and if so, what are you using, please?

Thank you.

M.
 
Hi @bbgeek17, thanks for your response.

I find that the lack of snapshots is a concern. I have read that backups are still possible - can someone explain how backups are possible without snapshots?
For VM backups with the Proxmox native backup functionality or Proxmox Backup Server, QEMU snapshots are used (so on the hypervisor level), which don't need support in the storage backend. I guess Veeam does something similar.
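As an illustration with made-up names (VM 100, a backup storage called "pbs"): a snapshot-mode backup works even on plain LVM, because the snapshotting happens inside QEMU rather than on the storage layer:

# Hypothetical example: VM 100, backup target storage named "pbs".
# "--mode snapshot" here means a QEMU live backup, not a storage
# snapshot, so it also works on backends without snapshot support,
# e.g. plain (thick) LVM.
vzdump 100 --mode snapshot --storage pbs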


I was playing with the Veeam Proxmox capability and it was able to back up a VM that I had on an LVM volume, so clearly backups work without snapshots.

Do you know if you can live migrate machines between LVM shared storage and a local LVM-thin disk? If so, this might allow you to snapshot (you move the VM from shared storage to local, snapshot, and then, when you have finished and cleared the snapshot, move it back?).

This should be possible; obviously you would need enough local storage space for such an operation.
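A rough CLI sketch of that workaround (VM ID, disk slot, and storage names are placeholders; on older PVE versions the subcommand is "qm move-disk"):

# Move the disk from the shared LVM storage to local LVM-thin (works online).
qm disk move 100 scsi0 local-lvm --delete 1
# LVM-thin supports snapshots, so this now works:
qm snapshot 100 before-change
# ... do the risky work, then remove the snapshot ...
qm delsnapshot 100 before-change
# Move the disk back to the shared storage.
qm disk move 100 scsi0 san-lvm --delete 1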

Personally, I would go with ZFS (if you don't have HW RAID), since then you could also replicate the locally saved image to the other nodes in your cluster, so in case of a failure of your local node you still have a working (although some minutes older) copy of your VM:
https://pve.proxmox.com/wiki/Storage_Replication

But ZFS and HW RAID don't play nice together and storage replication only works with ZFS.
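For reference, creating such a replication job on the CLI could look like this (job ID, target node name, and schedule are placeholders):

# Replicate VM 100 to node "pve2" every 15 minutes (job ID "100-0").
pvesr create-local-job 100-0 pve2 --schedule "*/15"
# Show configured jobs and their current state.
pvesr list
pvesr status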

I have seen various articles around where people use other filesystems for shared storage which are not specifically supported by Proxmox. OCFS was an example I saw.

Is anyone using an unsupported cluster file system with Proxmox, and if so, what are you using, please?

I know that the service provider company Heinlein Support is using this approach for customers who want such functionality:
https://www.heinlein-support.de/sites/default/files/media/documents/2022-03/Proxmox und Storage vom SAN.pdf

There was a discussion in the German forum with their staff member @gurubert here: https://forum.proxmox.com/threads/datacenter-und-oder-cluster-mit-local-storage-only.145189/ In it he also mentioned that his company is happy to offer their services if one wants support for it ;)

One issue I see is that OCFS is not really in active development and (due to not being officially supported) seldom used in the Proxmox ecosystem.
But since Proxmox VE is basically Debian Linux, nobody can or will stop people from using anything that also runs on a normal Debian, including OCFS.

Here is one more thread on it, although @LnxBil wasn't very happy with OCFS in it: https://forum.proxmox.com/threads/ocfs2-support.142407/

So my take from that (although I'm using Proxmox VE just in my homelab, not at work) is that I would only go with OCFS if I had a service provider (be it Heinlein Support or some other company) who will help if any problems arise, and only as a temporary workaround until the next time your storage hardware needs to be renewed. At the next hardware renewal I would then switch to a storage which is natively supported on Proxmox VE (be it Ceph, ZFS with storage replication, ZFS over iSCSI, or storage hardware with PVE support like that from Blockbridge). The exact choice would depend on the actual needs and environment, obviously.
 
@Johannes S, thanks for the reply.

The servers I am looking at using have hardware RAID. Performance is very good. Any reason why you would use ZFS if you have hardware RAID available? The cards are capable of just passing through the disks, but I trust and have had good experience with hardware RAID.

Thanks.

M.
 
The servers I am looking at using have hardware RAID. Performance is very good. Any reason why you would use ZFS if you have hardware RAID available? The cards are capable of just passing through the disks, but I trust and have had good experience with hardware RAID.
I see this constantly.

"speed" is NOT the only metric by which a storage solution is measured. In truth, you RARELY use whatever "speed" a subsystem is capable of, but you depend on features regularly. If you can get by without integrated filesystem-level checksums, PVE-integrated snapshots, compression, deduplication, thin provisioning, etc. etc. etc., then sure: use your hardware RAID. I would counsel that unless a ZFS subsystem doesn't meet your performance REQUIREMENTS, who cares if RAID is "faster" (in quotes because this is actually not true: RAID is faster for single I/O requests, but as the queue depth and requestor count increase it reverses, because the RAID controller presents a single LUN per volume).
 
The conversation appears to have moved from the topic of the thread "FC shared storage" to local ZFS vs. RAID. While the latter is always a spirited discussion, I'd like to remind everyone that ZFS is not a shared storage solution.
That's true, and thanks for the hint not to derail too much. I just wanted to give an option for how to reduce the risks of the "using local storage for snapshots" workaround proposed here. I think we can all agree that it's not the solution if you want your data on permanent shared storage in production (although often enough ZFS replication is still "good enough" for various use cases, but of course not a solution for everything!). And the OP asked for the benefits of ZFS besides the RAID feature, so I don't think the detour was too bad.

Just for the sake of completeness: in theory, shared storage with ZFS is possible via ZFS over iSCSI, which would then also support snapshots: https://pve.proxmox.com/wiki/Storage:_ZFS_over_ISCSI

It needs to be supported on the storage side though, so this won't help in the OP's case either.
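For completeness, adding such a storage might look roughly like this (all names, the address, and the IQN are placeholders; the provider and its options depend on the target implementation on your SAN box):

# Add a ZFS-over-iSCSI storage backed by a Linux LIO target.
pvesm add zfs san-zfs \
    --portal 192.168.10.1 \
    --target iqn.2003-01.org.example.san:target1 \
    --pool tank \
    --iscsiprovider LIO \
    --lio_tpg tpg1 \
    --sparse 1 \
    --content images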
 
For VM backups with the Proxmox native backup functionality or Proxmox Backup Server, QEMU snapshots are used (so on the hypervisor level), which don't need support in the storage backend. I guess Veeam does something similar.
So, question then: can you use hypervisor snapshots for purposes other than backup when using LVM on shared storage? If the backup system can do it, why can't you do it as a user? This would solve my problem.
 
So, question then: can you use hypervisor snapshots for purposes other than backup when using LVM on shared storage? If the backup system can do it, why can't you do it as a user? This would solve my problem.
No, it's only for backup. But: you can do a live restore, where the VM boots and launches its services while the restore continues, so you don't have downtime for the complete duration of the restore.
Of course this isn't a solution for all needs, but still a lot better than no workaround at all.

See the migration guide for more info:
https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Alternatives_to_Snapshots
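As a hedged sketch (the PBS storage name, archive timestamp, and VM ID are placeholders), a live restore from the CLI could look like this:

# Restore VM 100 from a Proxmox Backup Server archive and start it
# immediately; it keeps running while the remaining data is fetched.
qmrestore pbs:backup/vm/100/2025-05-14T10:00:00Z 100 \
    --storage local-lvm --live-restore 1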
 
If the backup system can do it, why can't you do it as a user?
As already said by @Johannes S, the backup system uses an internal QEMU snapshot mechanism that does the block-level snapshot in memory (changes or changed blocks are queued), whereas there is no consistent snapshot method for clustered LVM. That's not a recent problem but a design problem which will probably never be fixed in this incarnation of LVM, maybe in LVM 3. Just use a supported dedicated shared storage and you will not have any problems.
 
As already said by @Johannes S, the backup system uses an internal QEMU snapshot mechanism that does the block-level snapshot in memory (changes or changed blocks are queued), whereas there is no consistent snapshot method for clustered LVM. That's not a recent problem but a design problem which will probably never be fixed in this incarnation of LVM, maybe in LVM 3. Just use a supported dedicated shared storage and you will not have any problems.
To be fair, the migration guide wiki article ( https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Storage_boxes_(SAN/NAS) ) also references the bug report https://bugzilla.proxmox.com/show_bug.cgi?id=6096 regarding recent developments. It seems that one developer at a Proxmox partner is in the process of implementing qcow2-on-LVM. As soon as his work is finished, snapshots would be possible with qcow2 images on LVM. But this doesn't help people who need that functionality right now.