Shared LVM + iSCSI LUN + lvmlockd + sanlock

AxelTwin

Hi guys,

In the setup below, aiming for an active/active HA cluster, we are wondering whether we need to set up lvmlockd + sanlock to avoid possible data corruption.

We are building a high-availability (HA) infrastructure in Proxmox using shared LVM storage accessible by two Proxmox nodes, on an iSCSI LUN served by mirrored SAN vDisks.

Objective

We want to:
  • Create a shared LVM volume group on a mirrored DataCore vDisk accessible by both Proxmox nodes (active/active).
  • Run VMs on both nodes with the ability to automatically fail over in case one node goes down (Proxmox HA; see the sketch after this list).
  • Avoid data corruption by ensuring proper cluster-wide locking on shared LVM.
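
For the failover part, a minimal sketch of how a VM would be placed under HA management once the cluster exists (VM ID 100 is a placeholder):

    # register the VM as an HA resource; the HA manager restarts it
    # on another node if its current node fails
    ha-manager add vm:100 --state started

    # check the resource and manager state
    ha-manager status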

Infrastructure Setup

  • 2 Proxmox nodes (gri-it-pve01, gri-it-pve02)
  • 1 DataCore mirrored vDisk (visible as /dev/mapper/mpath-vd-lvm-01 on both nodes)
  • Multipath enabled to handle redundant paths to DataCore
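
For reference, the mpath-vd-lvm-01 alias comes from a multipath config along these lines (a sketch; the WWID must be the one actually reported for the DataCore vDisk):

    # /etc/multipath.conf (sketch)
    defaults {
        user_friendly_names yes
        find_multipaths     yes
    }

    multipaths {
        multipath {
            wwid  <WWID-of-the-DataCore-vDisk>
            alias mpath-vd-lvm-01
        }
    }

    # verify both paths are up:
    #   multipath -ll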

Technologies Used

Component  | Technology                        | Purpose
-----------|-----------------------------------|-----------------------------------------------------------------------
Storage    | DataCore Virtual Disks (mirrored) | Provides shared block storage replicated for high availability
Multipath  | Linux DM-Multipath (mpathX)       | Ensures resilience and redundancy to block storage
LVM        | Shared LVM                        | Allows concurrent access from both nodes using cluster-wide locking
Proxmox HA | Proxmox Cluster + HA Manager      | Detects node failures and automatically migrates/starts VMs elsewhere
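
To make the layering concrete, this is roughly how we plan to set it up; the VG name vg-datacore and the storage ID shared-lvm are placeholders chosen for illustration:

    # on ONE node only: initialize the multipath device and create the VG
    pvcreate /dev/mapper/mpath-vd-lvm-01
    vgcreate vg-datacore /dev/mapper/mpath-vd-lvm-01

    # /etc/pve/storage.cfg entry (pmxcfs replicates it cluster-wide);
    # "shared 1" tells PVE all nodes see the same backing device
    lvm: shared-lvm
            vgname vg-datacore
            content images,rootdir
            shared 1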
 
Hi bbgeek, thanks for answering.

I went through this documentation and found it very useful. Unfortunately, my colleague and I read a few things differently, and we started a debate about whether or not lvmlockd + sanlock are necessary.
 
You have to keep in mind that PVE is not just an application running on a standard Linux server. If you're using PVE for serious production, you need to treat it as an appliance since it manages your disks and access to them.

If you’ve reviewed our article, you know that the underlying LUN is available to all hosts, the Volume Group is shared across hosts, but the Logical Volume (LV) is only active on the host where the parent VM is running.
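
You can verify that yourself: run the same command on both nodes (VG and LV names below are just examples). The fifth character of the lv_attr string is "a" only where the LV is active:

    # list LVs and their attribute strings
    lvs -o lv_name,lv_attr vg-datacore

    # typical output:
    #   vm-100-disk-0  -wi-ao----   (on the node running VM 100)
    #   vm-100-disk-0  -wi-------   (on the other node)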

Using a "sanlock" type lock, which according to the man page "places locks on disk within LVM storage," could potentially disrupt access for PVE hosts trying to interact with the LUN, VG, or LV.

PVE already manages its own cluster-wide lock on LVM, and adding an additional one would put you in unsupported territory.

Good luck.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi AxelTwin, a 2-node cluster is not best practice for Proxmox VE: you should build at least a 3-node cluster, or use a QDevice to act as a quorum vote in the 2-node cluster; see Corosync External Vote Support in Cluster Manager (https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support).
As for shared LVM, I have experience with a 5-node cluster on FC storage. I'm not using lvmlockd and sanlock in that environment; the Proxmox VE cluster manages the shared LVM very well.
You can also look at the following discussions, where people experimented with lvmlockd and sanlock to see how both interact with a Proxmox VE cluster.
 
So long story short: from what I understand, shared LVM needs a locking mechanism to prevent data corruption.

If used standalone: lvmlockd + sanlock are needed.
If used with Proxmox: Proxmox manages the locking by itself.

Correct me if I am wrong.
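
For the record, the standalone (non-PVE) path per the lvmlockd(8) man page looks roughly like this; device, VG, and LV names are placeholders, and the systemd unit names may differ by distribution:

    # in /etc/lvm/lvm.conf on every host:
    #   use_lvmlockd = 1
    systemctl enable --now sanlock lvmlockd

    # create the VG with a shared lock type, then start its lockspace
    vgcreate --shared vg-shared /dev/mapper/mpath-vd-lvm-01
    vgchange --lockstart

    # activate an LV exclusively (-aey) or shared (-asy)
    lvchange -aey vg-shared/lv-data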
 
Hi david,
Thanks for the hints; we are running some tests before migrating our VMware infrastructure and going live.
The plan is a bit more elaborate than a 2-node cluster.
 

The trouble is that with a two-node cluster you might run into problems due to missing quorum, so your testing will show problems you wouldn't have otherwise.
If you don't have the hardware for a third node, you could use a so-called QDevice just to give your cluster quorum:
https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support

This QDevice can run on a small Linux VM in your existing VMware environment, a small PC/Raspberry Pi, or (my preferred approach) on a Proxmox Backup Server you might want to have anyhow for backups.
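
Setting it up boils down to a few commands from the linked wiki page (the IP is a placeholder):

    # on the external QDevice host (e.g. the Proxmox Backup Server)
    apt install corosync-qnetd

    # on all cluster nodes
    apt install corosync-qdevice

    # from any one cluster node
    pvecm qdevice setup <QDEVICE-IP>

    # confirm the extra vote
    pvecm status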