Fibre Channel SAN with Live Snapshot

Mitterhuemer
Jan 29, 2018
Hello,

we would like to integrate our Fibre Channel SAN into Proxmox.

We already did this successfully with LVM, but with LVM we cannot take live snapshots.
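
Roughly what we did for the LVM part, in case it helps (device and storage names are just examples from our environment):

    # volume group on the multipathed FC LUN
    pvcreate /dev/mapper/mpatha
    vgcreate san_vg /dev/mapper/mpatha

    # register it in Proxmox as shared LVM storage
    pvesm add lvm san-lvm --vgname san_vg --shared 1 --content images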

Is there maybe a way to make it happen with a GlusterFS volume, connecting 4 nodes with a shared filesystem to the Fibre Channel SAN?

The SAN also supports iSCSI, but that does not help us, because it has no ZFS support.
 
Is there an officially supported way from Proxmox to create a GFS2 filesystem on a Fibre Channel SAN?
We do not support GFS2.
But you do not lose support for the Proxmox VE supported components.
 
I never tried GFS2.
The only thing I know about GFS2 with Proxmox VE is that some users use it.
 

But this could be a valid solution for enabling Fibre Channel SAN systems with live snapshots.
Why is Proxmox not trying to find an official way to support that?
We have to use VMFS from VMware at the moment because of this.
 
If you don't need live migration between nodes and HA, you could create a local ZFS pool on LUNs presented to the node from the SAN. An experimental live migration method that includes local disks exists, but it is only available from the command line.
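
A rough sketch of what I mean (pool, device, VM ID, and node names are just examples):

    # local ZFS pool on a LUN presented to this node only
    zpool create tank /dev/mapper/mpathb
    pvesm add zfspool san-zfs --pool tank --content images

    # experimental: online migration including local disks, CLI only
    qm migrate 100 node2 --online --with-local-disks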
 

ZFS is no option for most SAN storages like an HP MSA. Support should be added for a shared filesystem that works with all kinds of SAN and SCSI hardware.
 
Is it known as a stable solution with Proxmox?

I also struggled with the same topic; I tried GFS2 and was very disappointed. This was in 2015 (Wheezy-based PVE) and it may have changed since, yet I never tried it again. The problem back then was that GFS2 itself was so unstable that a simple fio file benchmark failed with I/O errors, and when there was no error, it was terribly slow.
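
The benchmark was nothing special, just something like this (mount point is an example):

    # simple sequential write test on the mounted GFS2 volume
    fio --name=seqwrite --directory=/mnt/gfs2 --rw=write \
        --bs=1M --size=1G --ioengine=libaio --direct=1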
 

I tried CLVM with GFS2 today and it worked, including backup, snapshots, etc.
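
Roughly what I set up (cluster, VG, and storage names are examples from my test; it assumes the cluster stack for CLVM/DLM is already running):

    # clustered LV that will carry the GFS2 filesystem
    lvcreate -L 500G -n gfs2lv san_vg

    # one journal per node (-j 4), DLM locking, clustername:fsname
    mkfs.gfs2 -p lock_dlm -t pvecluster:gfs2vol -j 4 /dev/san_vg/gfs2lv
    mount -t gfs2 /dev/san_vg/gfs2lv /mnt/gfs2

    # add it to Proxmox as a shared directory storage
    pvesm add dir gfs2-store --path /mnt/gfs2 --shared 1 --content images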

But whenever I copy a lot of data, the system gets stuck for a few seconds.

Copying a 5 GB file from a share to a local disk: at about 80% it gets stuck for a few seconds and Explorer stops working; afterwards everything works again as before.

I also noticed that syncing between the nodes takes a long time.
While I was restoring 3 backed-up machines to Proxmox, the GFS2 disk on server 1, where I restored the machines, already had 150 GB in use, while server 2 in the cluster told me only 140 GB was in use.
After all restores were done, it took about 3 minutes until both servers showed me the same disk usage again.
During the sync time all machines got stuck and became very slow.
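
One way to watch the divergence is to compare the numbers on both nodes (mount point is an example):

    # run on each node while the restore is going on
    watch -n 5 'df -h /mnt/gfs2'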
 
So at least similar results with respect to performance.

Are you planning to try OCFS2?

I don't know if it is worth the time.
I heard OCFS is already very old and not really under development anymore.
But I don't know if that is true.
It is possible that what I read was about the old OCFS filesystem in the 2.6 kernel.

What do you mean? Is it worth trying with Proxmox on a SAN storage?

Here you can read the same about OCFS2 (German article).

They say it already becomes a bit unstable under a moderately heavy sustained write load.
https://www.heise.de/newsticker/meldung/Die-Technik-hinter-heise-online-3262514.html
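
If we do try it anyway, my understanding is that the minimal steps on a shared LUN would be roughly this (assumes the o2cb cluster stack is already configured; device and label are placeholders):

    # 4 node slots (-N), filesystem label (-L)
    mkfs.ocfs2 -N 4 -L vmstore /dev/mapper/mpathc
    mount -t ocfs2 /dev/mapper/mpathc /mnt/ocfs2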
 

Yeah, that's also my impression, yet there is still development on their mailing list, though not so many Oracle devs anymore. They shifted years ago to developing ACFS (Advanced Cluster File System), which is only available in an Oracle Grid Infrastructure. I haven't tried OCFS for several years, so I have no recent experience. As long as you used it in a switched network, not a cross-over one, you were fine back in the day.

What do you mean? Is it worth trying with Proxmox VE on a SAN storage?
I have used it for years and I only miss the snapshot feature from time to time. We run approx. 50 production machines on clustered LVM (all KVM), plus multiple containers on local SSD storage on one machine, with all the test machines on ZFS and with QCOW2 on ZFS for tree-like snapshot structures. Every time I really need something like a snapshot, I just create a backup and hope that I will not need to restore it. All tests of production machine upgrades run in a clone first, yet this is general practice and not related to the snapshot feature (as long as it is not ZFS with a clone from a snapshot).
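
By "create a backup" I mean nothing more than something like this (VM ID and storage name are examples):

    # live backup via qemu, the poor man's snapshot
    vzdump 100 --mode snapshot --compress lzo --storage backup-store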
 
I tried all the filesystems compatible with Proxmox.
Nothing works perfectly.
Maybe we will switch to Proxmox when we get a Ceph environment later.

But I found a free hypervisor cluster solution with shared storage, snapshots, and a replication feature: "Microsoft Hyper-V Server 2016".
It was quite tricky to get it working, because I needed to install all drivers, networking, etc. from the core shell.

But now I have a cluster that can be managed from any Windows 10 PC in my network.

We will use it until there is a good Proxmox solution with shared storage.
 
