Proxmox 9.1.1 FC Storage via LVM

ertanerbek

Hello,

Has anyone successfully implemented a professional Proxmox setup with Fibre Channel (FC) SAN storage? I am not referring to IPSAN, but specifically FC-based SAN.

In a clustered environment, I am experiencing significant issues, particularly during cloning and "wipe disk" operations. The lock mechanism appears problematic, and Proxmox seems unable to handle it reliably. In my test environment, when I attempt to delete or move disks simultaneously from different nodes, the system begins to encounter errors.

My current setup is as follows:

  • Proxmox 9.1.1
  • QCOW2 disk format
  • Huawei 5000v3 SAN Storage → HBA → Linux Multipath → LVM → Proxmox (2-node cluster with qdevice); see the sketch after this list
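
For reference, the shared storage was brought up roughly like this; the multipath device name (mpatha) is illustrative, and the storage.cfg entry marks the volume group as shared so all nodes can use it:

    # Verify multipath sees the Huawei LUN
    multipath -ll

    # On one node only: create the PV and VG on the multipath device
    pvcreate /dev/mapper/mpatha
    vgcreate STR-5TB-HUAWEI-NVME-045 /dev/mapper/mpatha

    # /etc/pve/storage.cfg entry ("shared 1" tells PVE every node sees the VG)
    lvm: STR-5TB-HUAWEI-NVME-045
        vgname STR-5TB-HUAWEI-NVME-045
        content images
        shared 1
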
This raises the question: should Proxmox's LVM support be configured with CLVM? It feels as though standard LVM is not functioning correctly in this scenario. Regardless of whether I use RAW or QCOW2, disk deletion and migration operations consistently cause problems.

If anyone has managed to run this configuration stably, could you share documentation or insights on how you achieved it? The storage lock issues are proving to be a major challenge.

Nov 26 11:59:38 PVE1 pvedaemon[94779]: lvremove 'STR-5TB-HUAWEI-NVME-045/vm-103-disk-0' error: 'storage-STR-5TB-HUAWEI-NVME-045'-locked command timed out - aborting
Nov 26 11:59:38 PVE1 pvedaemon[72268]: <root@pam> end task UPID:PVE1:0001723B:000F6884:6926C13E:imgdel:103@STR-5TB-HUAWEI-NVME-045:root@pam: lvremove 'STR-5TB-HUAWEI-NVME-045/vm-103-disk-0' error: 'storage-STR-5TB-HUAWEI-NVME-045'-locked command timed out - aborting
 
This setup should be stable; here's the documentation for multipath:
https://pve.proxmox.com/wiki/Multipath

Related information can be found here as well:
https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Storage_boxes_(SAN/NAS)


The error you get is due to a hard timeout of 60s for operations that include volume allocation on shared storage. You need to make sure your storage is fast enough:
https://forum.proxmox.com/threads/u...-command-timed-out-aborting.98274/post-424883
https://forum.proxmox.com/threads/e...mage-got-lock-timeout-aborting-command.65786/
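
A quick way to see whether you are close to that limit is to time the raw LVM calls that Proxmox wraps in its storage lock, outside the GUI (sizes and names are illustrative):

    # On one node, against the shared VG from the log above
    time lvcreate -L 100G -n timing-test STR-5TB-HUAWEI-NVME-045
    # Wiping typically rewrites the whole volume, which can easily blow past 60s on a big LV
    time dd if=/dev/zero of=/dev/STR-5TB-HUAWEI-NVME-045/timing-test bs=1M count=10240
    time lvremove -f STR-5TB-HUAWEI-NVME-045/timing-test

If any of these takes anywhere near a minute while other nodes are also touching the VG, the 'locked command timed out' error above is the expected outcome.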
 
This raises the question: Should Proxmox’s LVM support be configured with CLVM?
CLVM, championed by Red Hat at one point, seems to have fallen out of favor, so taking on support for it might be quite a tall task:
https://salsa.debian.org/lvm-team/l...vmoved LVs.-,Remove clvmd,-Remove lvmlib (api
https://askubuntu.com/questions/1241259/clvm-package-in-repo
https://www.sourceware.org/cluster/clvm/

@bkry is correct: simultaneous operations on a shared storage with metadata consistency requirements must be serialized. It is quite easy to overrun the timeout on operations such as "wipe".
https://github.com/proxmox/pve-cluster/blob/master/src/PVE/Cluster.pm#L642
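
Conceptually, cfs_lock behaves like flock with a fixed wait: one lock per storage, cluster-wide, and the caller aborts if it cannot acquire the lock in time. A single-host analogy (the lock file path is only illustrative, not what PVE actually uses):

    # Analogy only: serialize per-storage operations, give up after 60s
    flock -w 60 /run/lock/storage-STR-5TB-HUAWEI-NVME-045.lock \
        -c "lvremove -f STR-5TB-HUAWEI-NVME-045/vm-103-disk-0"
    # A second command arriving while the first still holds the lock waits
    # up to 60s, then fails - the same 'locked command timed out - aborting'
    # pattern as in the pvedaemon log above.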

