MSA 2060 SAN FC with single server (no shared access)

neoleg

New Member
Jul 11, 2025
Hi everyone,

We currently have the following setup running under VMware, and I’m looking into migrating it to Proxmox VE:
  • One physical server: HPE ProLiant Gen11
  • One external storage system: HPE MSA 2060 FC, configured with a RAID MSA-DP+ array built from 24 HDDs
  • The server is connected to the MSA via two Fibre Channel links: one to Controller A, and the second to Controller B (for redundancy, dual-controller setup).
  • Only this single server is connected to the storage — shared access is not in use at this time.
I would like to know if this setup can be used with Proxmox VE, and if so — what is the recommended configuration, especially in terms of snapshot support and potential future Proxmox cluster expansion (using Replications of VMs).

Specifically:
  1. Can I use LVM Thin on top of a Fibre Channel LUN in this setup to enable VM snapshots?
    • Are there any limitations I should be aware of?
    • If I only have one server accessing the LUN, is this safe and stable?
    • Will I lose the ability to use VM replication later if I decide to add another Proxmox node to the cluster?
    • Does multipath (due to dual FC connections to MSA) cause any issues with LVM Thin? Should I configure anything special for this scenario?
  2. Can I use ZFS on top of an FC LUN instead?
    • I read that ZFS is the only option that supports native VM replication in Proxmox.
    • However, some sources recommend not using ZFS on top of hardware RAID LUNs (like those from an MSA). Is this a real concern in production?
I would really appreciate any advice or suggestions on the best approach for this kind of configuration — especially from those who may be using a similar setup with Proxmox.

Thanks in advance!
 
Can I use LVM Thin on top of a Fibre Channel LUN in this setup to enable VM snapshots?
Yes, you can, as long as you have only one host.
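As a rough sketch of what that could look like (the device, VG, pool, and storage names below are placeholders, assuming the multipathed FC LUN shows up as /dev/mapper/mpatha):

  # create an LVM physical volume and volume group on the multipathed LUN
  pvcreate /dev/mapper/mpatha
  vgcreate vg_msa /dev/mapper/mpatha
  # carve out a thin pool (size is only an example; leave headroom for metadata)
  lvcreate -L 10T --thinpool data vg_msa
  # register it as an LVM-Thin storage in Proxmox VE
  pvesm add lvmthin msa-thin --vgname vg_msa --thinpool data --content images,rootdir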

Are there any limitations I should be aware of?
None that I'm aware of. It's essentially a LUN connected via FC from that host to the storage.

If I only have one server accessing the LUN, is this safe and stable?
Yes it is, as long as only one server accesses that LUN.

Will I lose the ability to use VM replication later if I decide to add another Proxmox node to the cluster?
Yes, you will: storage replication requires ZFS. ZFS does not support any kind of SAN/RAID and requires direct access to the disks. To be fully clear, you can technically set up ZFS on a SAN disk, but it is unsupported, issues will arise due to cache management, and no one will help you because you are running an unsupported configuration.
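Just for reference, if you later end up with two nodes that each have local ZFS-backed storage (same storage ID and pool name on both), replication is configured per guest with pvesr; the VMID, target node, schedule, and rate below are only placeholders:

  # replicate VM 100 to node pve2 every 15 minutes, limited to 50 MB/s
  pvesr create-local-job 100-0 pve2 --schedule "*/15" --rate 50
  # list the configured replication jobs
  pvesr list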

Does multipath (due to dual FC connections to MSA) cause any issues with LVM Thin? Should I configure anything special for this scenario?
AFAIK, no. Multipath will present one device per LUN on the host, and you set up LVM on top of that.
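A minimal multipath setup on the PVE host could look like this (Debian package and service names; the default configuration usually groups both FC paths to the MSA automatically, so treat this as a sketch):

  # install and enable the multipath daemon
  apt install multipath-tools
  systemctl enable --now multipathd
  # check that both FC paths to the LUN are grouped under a single mapper device
  multipath -ll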

Can I use ZFS on top of an FC LUN instead?
  • I read that ZFS is the only option that supports native VM replication in Proxmox.
  • However, some sources recommend not using ZFS on top of hardware RAID LUNs (like those from an MSA). Is this a real concern in production?
As replied above: if you value your data, don't use unsupported configurations. You also can't connect a ZFS-formatted LUN to more than one host, so you can't use it as shared storage in a cluster.

Also, beware of this setup if you will expand to more nodes in the near future, as you will have to move from LVM Thin to plain LVM to allow more than one host to access the LUN. NEVER EVER CONNECT MORE THAN ONE HOST TO AN LVM THIN OR ZFS LUN! You've been warned. For your peace of mind (and mine!), there are plans to support snapshots on plain LVM over shared LUNs for PVE 9, although I don't have many details on the feature yet.
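For comparison, a plain (thick) LVM storage shared across nodes is just a flag in /etc/pve/storage.cfg; the storage ID and VG name here are placeholders:

  lvm: msa-shared
          vgname vg_msa
          content images,rootdir
          shared 1

Note that plain LVM offers no snapshots today, which is exactly the trade-off mentioned above.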
 
I would like to know if this setup can be used with Proxmox VE, and if so — what is the recommended configuration, especially in terms of snapshot support and potential future Proxmox cluster expansion (using Replications of VMs).
At this point in time these two goals are incompatible when only native built-in PVE technologies are used. You need to pick either future cluster expandability or snapshots.

@VictorSTS covered pretty much all sides of your question. You may also find this a useful read: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you for the reply!

Regarding cluster creation in the future — I didn’t mean adding more servers to the cluster in order to share access to the HPE MSA storage (for HA, shared storage, etc.).
What I meant was simply creating a cluster from separate servers, each with its own local storage.

And that’s exactly why I was planning to use replication, but it seems that I won’t be able to do that without ZFS.
So, for example, even if I have another HPE server with internal disks configured as a RAID 6 array, I won’t be able to choose ZFS as the filesystem during Proxmox installation in that case?

How should I properly configure the server internal storage if I want to use ZFS for replication?
 
PVE will let you choose ZFS even if the disks sit behind hardware RAID, but you will be running a fully unsupported configuration, and when issues arise no one will be able or willing to provide support.

Check your hardware: many Smart Array controllers either allow changing their personality to JBOD / IT / passthrough mode, or they are dual-personality, where a disk not configured in an array is automatically passed straight through to the host. In any case, an HBA is cheap enough to replace those Smart Array controllers if the need arises. Then use those disks for ZFS, and avoid RAIDZ due to its lower performance (which may not matter if you use NVMe SSDs) and its space inefficiency with VM (zvol) workloads due to padding.
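A minimal sketch of that, once the disks are passed through (disk IDs, pool name, and storage ID are placeholders; mirrors instead of RAIDZ for the reasons above):

  # pool of striped mirrors built from four passthrough disks
  zpool create -o ashift=12 tank \
      mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
      mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
  # register it as a zfspool storage in Proxmox VE
  pvesm add zfspool local-zfs --pool tank --content images,rootdir

For replication to work later, the other node needs a pool of the same name so the cluster-wide storage entry is valid on both nodes.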
 