LVM vs LVM-Thin on Shared Fibre Channel LUN (3-Node Cluster) – Best Practice?

pvpaulo

Member
Jun 15, 2022
Hello everyone,


I have a 3-node Proxmox cluster connected to a SAN via dedicated Fibre Channel (single FC switch).


There is a 2TB LUN presented to all three nodes, configured with multipath on each host, and I intend to use it as shared storage.
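For reference, my multipath setup on each node follows the usual pattern; a minimal sketch of the config looks roughly like this (the WWID and alias below are placeholders, not the real values from my environment):

```
# /etc/multipath/conf.d/san.conf -- illustrative sketch only; WWID is a placeholder
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
multipaths {
    multipath {
        wwid  36001405aabbccdd0000000000000000ff
        alias san_2tb
    }
}
```

All three nodes see the same WWID, and `multipath -ll` shows the expected active paths on each host.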


Previously, I used iSCSI + multipath + LVM (thick) as shared storage, and the environment worked very well, especially for live migration.


Now that I am migrating to Fibre Channel + multipath, I am evaluating whether I should continue using traditional LVM (thick) or switch to LVM-Thin to benefit from more efficient VM snapshots and backup flexibility.


My questions are:


  1. Are there any known limitations or risks when using LVM-Thin on a shared LUN accessed by multiple cluster nodes?
  2. Can a thin pool shared between three nodes present any locking or concurrency issues?
  3. Is there any official recommendation or established best practice for FC + multipath environments regarding the use of LVM-Thin versus traditional LVM?
  4. For production-critical environments, is it generally safer to keep LVM thick and delegate snapshot functionality to the SAN instead?
  5. What specific precautions should be taken to avoid overprovisioning risks when using LVM-Thin on a shared Fibre Channel LUN?

I would like to confirm best practices before standardizing this setup in production.


Documentation link: https://pve.proxmox.com/wiki/Storage
 
Are there any known limitations or risks when using LVM-Thin on a shared LUN accessed by multiple cluster nodes?

Yes. LVM-thin is not cluster-aware, so concurrent access from multiple nodes risks data loss. It's unsupported for a reason; see the table on https://pve.proxmox.com/wiki/Storage, which clearly states that LVM-thin is NOT a shared storage type.
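For comparison, the supported setup is thick LVM on top of the multipath device, marked as shared. A rough sketch of what that looks like in /etc/pve/storage.cfg (storage ID and VG name are placeholders, not official examples):

```
# /etc/pve/storage.cfg -- illustrative sketch; storage ID and vgname are placeholders
lvm: san-lvm
        vgname vg_san
        content images,rootdir
        shared 1
```

With `shared 1`, PVE knows every node can reach the same volume group and coordinates LV activation accordingly, which is what makes live migration work.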

Can a thin pool shared between three nodes present any locking or concurrency issues?
Yes, for exactly the reasons stated above: the mechanisms PVE uses with thick LVM to ensure that no locking or concurrency issues occur do not work with thin pools.

Is there any official recommendation or established best practice for FC + multipath environments regarding the use of LVM-Thin versus traditional LVM?

Yes: Don't use LVM/thin with them.
For production-critical environments, is it generally safer to keep LVM thick and delegate snapshot functionality to the SAN instead?
Yes, or use the snapshot-as-volume-chain feature introduced as a technology preview in PVE 9. It comes with some caveats, though; see the following pieces by @bbgeek17 :
The PVE doc has also a section on this feature: https://pve.proxmox.com/pve-docs/chapter-pvesm.html#pvesm_lvm_config
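If I remember the preview correctly, enabling it is a per-storage flag in storage.cfg along these lines (verify the exact option name against the linked PVE 9 documentation before relying on this; storage ID and VG name are placeholders):

```
# /etc/pve/storage.cfg -- technology-preview sketch; check the PVE 9 docs for the exact option name
lvm: san-lvm
        vgname vg_san
        content images
        shared 1
        snapshot-as-volume-chain 1
```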

An alternative that is often used is to rely on the snapshot-mode backups and live-restore of the Proxmox Backup Server for typical snapshot use cases (such as ensuring you can roll back an update gone wrong):
https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Alternatives_to_Snapshots
Since this native backup feature of Proxmox VE uses an internal mechanism of the QEMU/KVM hypervisor, it does not need snapshot support in the storage layer.

What specific precautions should be taken to avoid overprovisioning risks when using LVM-Thin on a shared Fibre Channel LUN?

Just don't: use thick LVM instead.