HBA Fibre Channel - Shared storage - Snapshots on Proxmox

acapoprox

Sep 9, 2024
Hello everyone,
I'll try to be as concise as possible. In our company we have around 80 virtual machines in production:
10 nodes (HPE BladeSystem c7000) with Fibre Channel HBAs (Dell EMC VNX5200), all on VMware, with Veeam as the backup system.
Given recent developments, we are considering moving to a new hypervisor.
We tested XCP-ng + Xen Orchestra and Proxmox VE/Proxmox Backup Server.
We are happy with both for the most part, and each has its pros and cons (for instance, Xen Orchestra lets you import a virtual machine directly from VMware vCenter), but given the choice we would opt for Proxmox, even though there could be more problems during VM migration.
The real problem is that with the hardware configuration we have (again, Fibre Channel HBAs with shared storage) there is no snapshot support: the "Take Snapshot" button is disabled/greyed out.
We followed this guide:

blog.mohsen.co/proxmox-shared-storage-with-fc-san-multipath-and-17a10e4edd8d
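For context, the setup from that guide ends up roughly like the sketch below (a hedged example with placeholder device, VG and storage names; adjust WWIDs/aliases for your VNX5200 LUNs). The key point is that the FC LUN ends up as shared *thick* LVM, which is exactly the storage type PVE cannot snapshot:

```bash
# /etc/multipath.conf -- minimal example, tune for your array:
defaults {
    user_friendly_names yes
    find_multipaths     yes
}

# Thick LVM on top of the resulting multipath device:
pvcreate /dev/mapper/mpatha
vgcreate san_vg /dev/mapper/mpatha

# /etc/pve/storage.cfg entry -- type "lvm" (thick), shared across nodes;
# this is why the "Take Snapshot" button is greyed out:
lvm: san
        vgname san_vg
        content images
        shared 1
```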

I have to admit this is a big limitation for us. With XCP-ng/Xen, and of course VMware, this problem does not exist, but I told my colleagues: I still have hope of being able to use Proxmox.
It's true that Proxmox Backup Server can be a workaround: you make a complete backup of the single VM where you want to make changes, and then in case of problems you perform a complete restore. But having the possibility of taking a snapshot "on the fly" is priceless, and recovery times are much tighter when problems occur.
The backup/restore approach takes a lot of time.
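For reference, that workaround on the CLI would look something like this (a hedged sketch; VM ID 105, the PBS storage name "pbs" and the timestamp are placeholders):

```bash
# Full backup of VM 105 to PBS before making changes:
vzdump 105 --storage pbs --mode snapshot

# ...and, if something goes wrong, a full restore over the old VM:
qmrestore pbs:backup/vm/105/2024-09-09T10:00:00Z 105 --force 1
```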
Is there a workaround to get snapshots working in this scenario?

Thanks in advance
 
Hi @acapoprox ,

You are not the first person to ask for this functionality; it's been a recurring topic on the forum.

In summary, if you’re forced to make a decision based on your current infrastructure limitations, the best approach is to choose a product that meets your needs out of the box.

For the long term, there’s some promising work underway by external developers to enable snapshots directly on iSCSI/FC storage. However, the timeline for this to become available and reach production readiness is still uncertain. You can find updates and details on the PVE developer mailing list if you're interested.

Alternatively, you might consider sticking with VMware for the remaining useful life of your infrastructure, especially if it’s nearing the end of its lifecycle (10+ years, based on your description). This could allow you to reevaluate your entire setup (compute, storage, network, and virtualization) in a more holistic way when it’s time to refresh.



 
You don't understand me. I can't work out how you do that. XCP-ng also doesn't use a clustered filesystem, except thick LVM, but that has no snapshot support. So my question is: how do you get snapshots with XCP-ng?
 
You could use a cluster filesystem like OCFS2. It's not officially supported and needs some tweaks, but there are people in this forum who use it in production. A forum search should give some insight into whether that might be suitable for your use case; a rough sketch of the idea follows.
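Very roughly, and with all the usual caveats (unsupported, placeholder device/label/path names, and you must get the O2CB cluster configuration right on every node first), the idea looks like this:

```bash
# Install the tooling on every node (Debian package name):
apt install ocfs2-tools

# Format the shared multipath LUN once, from a single node:
mkfs.ocfs2 -L pve-shared /dev/mapper/mpatha

# After setting up /etc/ocfs2/cluster.conf and the o2cb service on all
# nodes, mount the filesystem everywhere (e.g. via /etc/fstab), then
# add it to PVE as a *directory* storage; qcow2 images on it give you
# snapshots again:
pvesm add dir san-ocfs2 --path /mnt/ocfs2 --content images --shared 1 --is_mountpoint yes
```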

Regarding the PBS workaround: while it's true that it has its own limitations, like the one you mentioned, it also has the benefit that you can do single-file restores, e.g. if an update overwrites the configuration of one application but the rest of the system is fine.
And with live restore the downtime can be kept minimal enough, depending on the use case.
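For illustration, a live restore can be started from the CLI like this (hedged example; VM ID, storage name and timestamp are placeholders). The VM boots immediately while the rest of its data is fetched from PBS in the background; single-file restore is available from the GUI on the backup itself:

```bash
# Restore VM 105 from PBS and start it right away:
qmrestore pbs:backup/vm/105/2024-09-09T10:00:00Z 105 --live-restore 1
```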
 
You don't understand me. I can't work out how you do that. XCP-ng also doesn't use a clustered filesystem, except thick LVM, but that has no snapshot support. So my question is: how do you get snapshots with XCP-ng?
When I said "this problem is not present", I meant that I did not need to do any special configuration.
I configured 2 nodes, activated multipath and HA, and added the HBA storage, everything literally from Xen Orchestra.
And it worked immediately, without problems.
I do not know XCP-ng and Xen well enough to tell you how they manage things in the background.
One thing is for sure: snapshots worked right away.
 
XCP-ng also doesn't use a clustered filesystem, except thick LVM, but that has no snapshot support.
Snapshots are not exclusive to LVM-thin. XCP-ng has provisions to support snapshots on thick LVM; PVE doesn't.

Then you never access that LUN from both nodes in parallel.
That is how shared storage works in a virtualization environment, yes. True for both.
 
You don't understand me. I can't work out how you do that. XCP-ng also doesn't use a clustered filesystem, except thick LVM, but that has no snapshot support.
LVM itself of course has snapshot support for thick and even for thin volumes, but that functionality isn't wired up in PVE, which is the difference.
Look up the examples online: for thick LVs you have to set up extra snapshot volumes, but snapshots are available for both; see the sketch below.
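To make that concrete, here is a minimal sketch with placeholder VG/LV names (plain LVM commands, nothing PVE-specific; note that on a shared LUN this is only safe with cluster-wide coordination, which is exactly what PVE does not provide for snapshots here):

```bash
# Thick LV: the snapshot needs its own pre-sized COW volume:
lvcreate --snapshot --size 10G --name vm-100-snap /dev/san_vg/vm-100-disk-0

# Thin LV: the snapshot allocates from the thin pool, no size needed;
# thin snapshots skip auto-activation, so activate with -K before use:
lvcreate --snapshot --name vm-100-snap san_vg/vm-100-disk-0
lvchange -ay -K san_vg/vm-100-snap
```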
 
Then you never access that LUN from both nodes in parallel.
That's why people coming from vSphere find Proxmox (Linux) very limited regarding storage, and hope for OCFS2 to replace their VMFS.
Many consultants need to explain over and over that Proxmox does not like shared storage with concurrent access.
 
What do people use for HA-NFS?
The future looks easier: there may be no need to purchase dedicated HA-NFS systems, given what is arriving upstream (a client mount example follows the list):

The latest capabilities, all available in the 6.12 Linux kernel, include:
  • Parallel reads and writes across multiple servers.
    Parallel NFSv4.2 with FlexFiles splits file data across multiple servers, allowing clients to access data in parallel. When coupled with N-Connect, multiple data paths are unified into a single, efficient stream for workloads requiring massive read or write operations.
  • Accelerate metadata-heavy workloads by reducing latency.
    The new Attribute Delegations feature allows clients to cache metadata locally, cutting down on repeated server queries.
  • Ensure uninterrupted access during server failures.
    With Fast Failover, workloads from unavailable servers are seamlessly redirected to healthy servers, minimizing downtime.
  • Optimize data transfer speeds.
    For data residing on NFS data servers that are co-located with the application, LOCALIO eliminates data path bottlenecks by delivering data directly without going across the network.
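As promised above, a hedged example of what a client mount using these pieces could look like (server name and export path are placeholders; vers=4.2 and nconnect both need reasonably recent kernels on client and server):

```bash
# NFSv4.2 mount with 8 parallel TCP connections to the server:
mount -t nfs -o vers=4.2,nconnect=8 nfs-server:/export/vmstore /mnt/vmstore
```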
 
I’ve encountered proprietary pNFS implementations in the past, specifically with EMC, and while they had limited success at the time, it’s encouraging to see the technology now making its way into the mainstream.

It’s worth noting, however, that the adoption of pNFS seems to be closely associated with a particular storage vendor, which means some degree of "proprietary" elements may continue to be present in its development.

At a high level, pNFS appears to be particularly well-suited for high-performance computing (HPC) environments, where multiple clients need to read and write the same data in parallel. VM disk storage is not like that unless you're working at the application layer.
Accelerate metadata-heavy workloads by reducing latency.

This is particularly beneficial when dealing with millions of files. A single VM disk, in contrast, is not a typical metadata-heavy workload.

For enterprises, the full benefits of this technology will likely be realized by those who actively invest in and update their infrastructure (network, CPU, and storage) rather than trying to repurpose legacy systems.

http://www.pnfs.com/index.html


 
As far as I know, the only "real" high-performance NFS server available in open source is Ganesha, and they've effectively abandoned pNFS:

https://github.com/nfs-ganesha/nfs-ganesha/wiki said:
NFS-Ganesha has support infrastructure for pNFS but efforts in this area have not been active for many years and pNFS is not currently supported by the active community.

For supercomputing, Lustre long since replaced NFS anyway (and GPFS if you have IBM money), but a multi-headed, multi-threaded open-source NFS server would be a fantastic option for virtualization and as a general-purpose HA filer.
 
