Shared Storage and Concurrent Access Handling (NFS, iSCSI, ZFS over SAN)

Specimen

New Member
Jul 9, 2025
Hi everyone,

I’m trying to understand how Proxmox handles concurrent access to shared storage when using:
  • NFS shares
  • iSCSI targets (used with LVM or ZFS over SAN/Fibre Channel)
In VMware, VMFS takes care of coordinating access between multiple hosts to avoid corruption. How does Proxmox deal with this?
For example:
  • If two Proxmox nodes access the same iSCSI LUN (used with ZFS or LVM), what mechanisms are in place to manage concurrent access?
  • Same question for NFS: does Proxmox rely entirely on the underlying NFS server for locking and access coordination?
  • Is there a risk of corruption if the storage is improperly shared between nodes?

I’m looking for technical clarity on whether Proxmox has any built-in features (like VMware’s VMFS) to manage this, or if the responsibility lies fully with the storage backend.


Thanks.
 
Hi @Specimen , welcome to the forum.

The primary type of file that multiple PVE nodes access on shared storage is the disk image (similar to a VMDK in ESXi). When using file-based shared storage like NFS, these are typically in QCOW2 format. You can also use RAW, which only changes the internal structure of the image; it does not affect how PVE accesses it.
PVE uses its own cluster coordination mechanisms to prevent simultaneous access: it ensures that only one host runs a VM at a time, and therefore only one host opens that VM's disk image.
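For reference, a file-based storage like that is defined cluster-wide in /etc/pve/storage.cfg. A minimal sketch (the storage ID, server and export below are placeholders for your environment):

Code:
nfs: vm-store
        path /mnt/pve/vm-store
        server 10.0.0.10
        export /export/proxmox
        content images

The disk images then live on the export as regular files (typically images/<vmid>/vm-<vmid>-disk-0.qcow2), and the cluster coordination described above makes sure only the node that owns the VM opens them.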


When you use iSCSI or NVMe over TCP, PVE receives access to raw block devices. These large LUNs are not file systems themselves. Instead, PVE typically uses LVM (Logical Volume Manager) to carve them into smaller volumes, which are then mapped to individual VMs.

Unlike VMFS in ESXi, LVM is not inherently cluster-aware. However, PVE’s management layer coordinates volume access to ensure safe, single-host usage of each logical volume within the cluster.
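As a rough sketch of how that usually looks in /etc/pve/storage.cfg (IDs, portal and target are placeholders; the volume group is created once on the LUN with pvcreate/vgcreate from one node):

Code:
iscsi: san
        portal 10.0.0.20
        target iqn.2003-01.org.linux-iscsi.san:target1
        content none

lvm: san-lvm
        vgname vg_san
        content images
        shared 1

Each VM disk then becomes a logical volume in vg_san (e.g. vm-101-disk-0), and the cluster logic keeps each LV in use on only one node at a time.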


As for ZFS, it is not a cluster-aware file system. It cannot be shared safely between hosts, with or without PVE. This makes it unsuitable for shared storage in a clustered environment.
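A quick illustration of why (pool name made up, error output paraphrased): ZFS itself warns when a pool appears to be imported elsewhere, and forcing the import from a second host is exactly how pools on shared block storage get destroyed:

Code:
# on node2, while node1 still has the pool imported
zpool import tank
#   cannot import 'tank': pool may be in use from other system (paraphrased)
zpool import -f tank    # forcing the import here risks corrupting the pool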

This is a frequently discussed topic in the community. We’ve published a detailed knowledge base article that might be especially helpful as you explore shared storage options in PVE:
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/

Hope this helps.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hello,

Yes it helps a lot, thank you.

I've already managed to get ZFS over iSCSI working fine (I'm talking about the setup where SSH is used alongside the iSCSI protocol). Is it well suited for cluster usage, unlike traditional ZFS?
 
I've already managed to get ZFS over iSCSI working fine (I'm talking about the setup where SSH is used alongside the iSCSI protocol). Is it well suited for cluster usage, unlike traditional ZFS?
Whether you manage your backend storage via SSH, API, or GUI does not change the end result: the LUNs presented to PVE are iSCSI (raw block).

Since you tasked PVE with managing your backend storage via the appropriate plugin, it will use its (PVE's) intelligence to ensure that the steps it takes are compatible with PVE cluster technology.

The ZFS-over-iSCSI scheme is different from local ZFS: it shifts the ZFS layer to the backend storage device. How safely that device implements ZFS is a question for the device manufacturer. If this is a home lab with non-HA storage, you are likely fine.
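For context, the ZFS over iSCSI plugin entry in /etc/pve/storage.cfg looks roughly like this (portal, target, pool and provider are placeholders; LIO is just one of the supported target implementations):

Code:
zfs: zfs-san
        portal 10.0.0.30
        target iqn.2003-01.org.linux-iscsi.storage:target1
        pool tank
        iscsiprovider LIO
        lio_tpg tpg1
        blocksize 4k
        sparse 1
        content images

With this plugin, PVE logs in to the storage box over SSH to create one zvol per VM disk and exports each as its own LUN, so there is no shared file system on the LUN that would need cluster-wide locking.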



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
If I understand correctly, there's no real limitation to using shared storage, regardless of the technology used. Proxmox is able to handle multi-host access. What matters is that I'm able to maintain high availability for everything that isn't managed directly by Proxmox itself.
 
If I understand correctly, there's no real limitation to using shared storage, regardless of the technology used.
That's a very open-ended statement that can't be answered with "yes" or "no". There are many "it depends" here.
Proxmox is able to handle multi-host access.
PVE can arbitrate access by multiple hosts so that each volume is used exclusively by one host at a time.
What matters is that I'm able to maintain high availability for everything that isn't managed directly by Proxmox itself.
It may matter for your business needs, but it does not matter to PVE whether you have HA on external devices.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
That's a very open-ended statement that can't be answered with "yes" or "no". There are many "it depends" here.
Let me be more specific. I'm talking about using only the expected methods. I mean, if you manually create access to an iSCSI target via the CLI, without using the Datacenter menu, you can obviously expect some concurrent access issues.

Just to make sure I got it right:
  • Proxmox itself doesn’t use a cluster-aware file system like VMFS.
  • Instead, it relies on its own cluster orchestration (via corosync and pmxcfs) to ensure that only one host at a time accesses a given VM disk, even if the storage is shared (NFS, iSCSI, etc.), thanks to the little checkbox "shared"
  • When using ZFS over iSCSI, each VM gets its own dedicated LUN (a ZVOL exported via iSCSI), so there's no risk of concurrent access, as long as the backend exports each volume independently.
  • Proxmox prevents conflicts by managing disk assignments at the cluster level, but it doesn't provide low-level locking like VMFS.
  • So if someone manually mounts or accesses the same LUN on two nodes outside of Proxmox, there’s a risk of corruption, right?

Let me know if I’ve misunderstood any part. Again, thank you for your patience and all the valuable information you're sharing.
 
I mean, if you manually create access to an iSCSI target via the CLI, without using the Datacenter menu, you can obviously expect some concurrent access issues.
This is not a correct expectation. Whether you create the iSCSI session via the Datacenter menu or manually, the higher layers will still take care of exclusivity, as long as you don't go out of your way to trip them up.
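For example, a manual login with open-iscsi (portal and IQN are placeholders) only establishes the block-level session; which node actually uses a given volume is still decided by the PVE layers above it:

Code:
iscsiadm -m discovery -t sendtargets -p 10.0.0.20
iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.san:target1 -p 10.0.0.20:3260 --login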
Proxmox itself doesn’t use a cluster-aware file system like VMFS.
Correct. There are very few open-source options for a cluster-aware file system, and none is directly supported or endorsed by PVE. You can, however, use one if you choose to.
Instead, it relies on its own cluster orchestration (via corosync and pmxcfs) to ensure that only one host at a time accesses a given VM disk,
Corosync and pmxcfs are building blocks; the intelligence for handling disks sits above them.
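One way to see that: pmxcfs publishes the cluster configuration under /etc/pve, and a VM's config file exists under exactly one node's directory at any given time, which is how ownership is expressed (node names and VMIDs are just examples):

Code:
ls /etc/pve/nodes/*/qemu-server/
# /etc/pve/nodes/pve1/qemu-server:  100.conf  101.conf
# /etc/pve/nodes/pve2/qemu-server:  102.conf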
even if the storage is shared (NFS, iSCSI, etc.),
You can force simultaneous disk use across VMs, as long as your VMs are equipped with proper guard rails. This could be useful if you are running a virtualized cluster.
thanks to the little checkbox "shared"
The checkbox just tells the PVE nodes that this storage pool is expected to be available on all nodes. It does not make the storage shared. The intelligence is really at the VM level: one does not want to run duplicate identical VMs, as that would be bad for the network, applications, and storage.
When using ZFS over iSCSI, each VM gets its own dedicated LUN (a ZVOL exported via iSCSI), so there's no risk of concurrent access, as long as the backend exports each volume independently.
Let's just say "yes" here.
Proxmox prevents conflicts by managing disk assignments at the cluster level, but it doesn't provide low-level locking like VMFS.
It doesn't need to.
So if someone manually mounts or accesses the same LUN on two nodes outside of Proxmox, there’s a risk of corruption, right?
Yes, but the same applies to VMFS: if someone goes out of their way to mismanage it, there is also a risk of corruption.

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I mean, if you manually create access to an iSCSI target via the CLI, without using the Datacenter menu, you can obviously expect some concurrent access issues.
Isn't that the same for VMFS? You can always mess things up if you don't use them properly, e.g. by mounting the datastore manually on another host and breaking it from there. VMware is also just a Linux-binary-compatible system (some people argue it's also running a Linux-based kernel). If you want to mess with things, you're going to mess with things. Nothing will prevent you from doing so.

PVE, however, will access everything properly if you access it properly (via the GUI, API, or PVE CLI). It's so good at this that they even dropped dedicated locking layers like clustered LVM (a cluster-aware volume manager) and handle everything themselves.
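You can see that cluster-level locking in the VM config itself: during operations such as backup or migration, PVE writes a transient lock into the config (VMID below is just an example) instead of relying on an on-disk lock manager:

Code:
qm config 100 | grep lock
# lock: migrate    (only present while the operation is running)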
 