Hi everyone,
I’m trying to understand how Proxmox handles shared block storage compared to VMware VMFS, and I’d like to ask for some clarification or best practices.
In VMware, VMFS allows multiple ESXi hosts to mount the same LUN concurrently with read/write access, since it is a cluster-aware filesystem.
In Proxmox, when using iSCSI:
- If I format the LUN with ext4 or XFS, it can only be safely mounted by a single host at a time.
- Mounting the same ext4/XFS filesystem on multiple Proxmox nodes concurrently risks data corruption, because those filesystems are not cluster-aware.
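For context, my current setup looks roughly like this in `/etc/pve/storage.cfg` (portal, target, and paths are placeholders, not my real values):

```
# /etc/pve/storage.cfg (example values)
iscsi: san1
        portal 192.0.2.10
        target iqn.2001-05.com.example:lun1
        content none

# ext4 created on top of the LUN and exposed as a directory storage.
# Restricted to a single node, since the filesystem cannot be
# mounted on more than one host safely.
dir: san1-ext4
        path /mnt/san1
        content images,rootdir
        nodes pve1
```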
I understand this is expected behavior, but it raises a few questions for me:
- Is there any VMFS-like equivalent in Proxmox (i.e. a cluster-aware filesystem for shared block storage)?
- Are there any ongoing developments or plans in Proxmox for such a filesystem?
- Besides Ceph (RBD / CephFS), are there any recommended approaches to achieve shared iSCSI storage across multiple nodes?
- Would cluster filesystems like OCFS2 or GFS2 be considered viable or supported in production with Proxmox?
- NFS works for shared storage, but performance becomes an issue in my case. Are there recommended optimizations, or alternatives better suited to this scenario?
I’m mainly looking for a shared storage solution that:
- Can be accessed by multiple Proxmox nodes simultaneously
- Supports live migration
- Offers better performance than traditional NFS
Any insights, recommendations, or references would be greatly appreciated.
Thanks in advance!