Proxmox + shared LVM

Sep 12, 2025
Hi all,

I’m in the process of connecting Pure Storage and IBM storage to Proxmox using multipath.
I created a shared LVM on top of it and things are working, but I noticed a couple of behaviours I’m not 100% sure about.
Maybe someone can confirm if this is expected behaviour.

Setup:
  • 3-node Proxmox cluster (9.0.6)
  • 4x Pure Storage arrays (2 ActiveCluster pairs)
  • 2x IBM arrays (HyperSwap cluster)
  • Example: VM ID 100 on node1, VM ID 101 on node2

Behaviour 1

When I run qm rescan on node1, I get errors for the VM disk that lives on node2:
failed to stat '/dev/Pure-storage-vg01/vm-101-disk-0.qcow2'
failed to stat '/dev/Pure-storage-vg02/vm-101-disk-0.qcow2'

When I do the same on node2, it shows the opposite (errors for the disk on node1):
failed to stat '/dev/Pure-storage-vg02/vm-100-disk-0.qcow2'
failed to stat '/dev/Pure-storage-vg01/vm-100-disk-0.qcow2'

My guess: only the node that “owns” the VM can write metadata, but not 100% sure.

Behaviour 2

  • I created a VM (ID 100).
  • Moved its disks from VG01 → VG02 (same storage, different iSCSI volume).
  • Sometimes I get this error afterwards: can't deactivate LV, volume deactivation failed
When I delete VM 100 and then create a new VM on VG01, the old data is still there.
It looks like only the reference is deleted, not the actual data.
I tried enabling “wipe removed volumes”, but wiping is very slow (only ~10–15 MB/s).
The storage itself should not be the bottleneck.

This is all new to me, so I hope you can help me make some sense of all of this.
Coming from VMFS, it is a learning curve.
 
My guess: only the node that “owns” the VM can write metadata, but not 100% sure.
Normally, when using shared LVM, the LV for a particular VM is only active on the node that owns the VM. You should not see a /dev link for that LV on the other nodes.
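
A quick way to verify this on each node (a sketch, using the VG names from your error output; the fifth character of lv_attr is "a" when the LV is active on that node):
lvs -o lv_name,vg_name,lv_attr Pure-storage-vg01 Pure-storage-vg02
# on a healthy shared-LVM setup, a given vm-XXX-disk LV should show as active
# only on the node that is currently running that VM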

From your output, you are using the new QCOW/LVM technology. Keep in mind that it still has Experimental status. You may have run into something worth properly reporting.

You may want to collect and provide the following from each node (best as text inside CODE or SPOILER tags):
lsblk
multipath -ll
pvs
vgs
lvs
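
For example, something like this on each node bundles everything into a single file (a sketch; the output path is arbitrary):
{ echo "### lsblk";         lsblk
  echo "### multipath -ll"; multipath -ll
  echo "### pvs";           pvs
  echo "### vgs";           vgs
  echo "### lvs";           lvs
} > /tmp/$(hostname)-storage.txt 2>&1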

Are there any interesting messages in the "dmesg" output during the rescan?
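
For example, follow the kernel log in one shell while triggering the rescan from another (a sketch; --vmid simply limits the rescan to one guest):
dmesg -wT                   # follow kernel messages with readable timestamps
# ...and in a second shell on the same node:
qm rescan --vmid 100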

Sometimes I get this error afterwards: can't deactivate LV, volume deactivation failed
We've seen this happen when there is an unexpected holder on the LV. You may need to investigate at a lower level to see why deactivation failed. It's not a good idea to have the LV activated/available on two nodes at the same time.
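
A few low-level checks that usually reveal the holder (a sketch; the LV path is a placeholder based on your error output):
LV=/dev/Pure-storage-vg01/vm-100-disk-0.qcow2            # placeholder: the LV that refuses to deactivate
dmsetup info -c                                          # an "Open" count > 0 means something still holds the device
ls /sys/block/$(basename $(readlink -f $LV))/holders/    # devices stacked on top of it, if any
fuser -v $LV                                             # processes that still have the block device open
lvchange -an $LV                                         # retry the deactivation once the holder is gone
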
When I delete VM 100 and then create a new VM again on VG01, the old data is still there
In LVM, removal only deletes the metadata. If you later create a same-sized LV (and perhaps in other cases as well), the old data “comes back” because it was never erased: you have simply put back a header that happens to match what was there before.
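
If you want to be sure old blocks cannot resurface, you can discard or zero the LV yourself before removing it (a sketch with a placeholder LV name; blkdiscard only helps if the array honours SCSI UNMAP):
LV=Pure-storage-vg01/vm-100-disk-0.qcow2                 # placeholder
blkdiscard /dev/$LV                                      # fast path: ask the array to unmap the blocks
dd if=/dev/zero of=/dev/$LV bs=4M oflag=direct status=progress   # slower fallback if discard is unavailable
lvremove $LV
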
I tried enabling “wipe removed volumes”, but wiping is very slow (only ~10–15 MB/s).
This is something we've seen in our QCOW/LVM analyses. I believe there is ongoing work to improve this area.
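
One thing that may be worth checking in the meantime: the classic LVM plugin throttles its zero-on-delete pass via the saferemove_throughput option in /etc/pve/storage.cfg (the value ends up as the cstream -t parameter, in bytes per second); whether the new QCOW/LVM code path honours it is something I would verify first. A sketch, assuming a plain "lvm" storage definition named after the VG from this thread:
lvm: Pure-storage-vg01
        vgname Pure-storage-vg01
        content images
        shared 1
        saferemove 1
        saferemove_throughput 104857600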


You may find these articles helpful:
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
https://kb.blockbridge.com/technote/proxmox-qcow-snapshots-on-lvm/



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Your link is login protected. Like @bbgeek17 mentioned, you have probably configured something wrong. Use the link from @bbgeek17 to configure your storage.
I am able to browse the link incognito; you can dismiss the login prompt.
The steps are more or less the same, except for some storage-specific settings in multipath.conf.
I will cross-reference it again tomorrow just to make sure.
 
@bbgeek17

Thank you for all the information.
I found your website just after posting.
Tomorrow I will cross-reference everything again and give a better reply to all your feedback.
 
Any news on this topic? I think I ran into the very same trap. I have LVM-thick on a shared iSCSI device. I can move the qcow2 files of existing VMs there and they work.

However, I cannot reassign a disk image located there to another VM. If I move a disk to the iSCSI LVM and detach it there, I cannot attach it to another VM on another host, which is possible with any "normal" storage. While the normal storages can be used to import an existing disk, this is not possible for the LVMs.

I manually modified a VM config file and assigned the disk to the ID on both hosts; there it worked. But it cannot be assigned using the GUI.

On the other hand: if I have a detached image with the VM ID in its name on the LVM storage and delete the respective VM with "Destroy unreferenced disks owned by guest" ticked, the image is deleted, even though I cannot select this storage to assign it.

Is this a bug or might I do anything wrong?
 
Hi @fant ,

While there are some similarities, it seems your report is not directly related to the OP's. It is generally preferred to open a new thread, as it avoids mixing different issues.
In your case it sounds like you have a good repro scenario. I would recommend a new thread that includes your storage configuration file and step-by-step reproduction instructions, preferably via the CLI.
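
For example (a sketch of what to attach; the paths are the standard PVE locations and <vmid> is a placeholder):
cat /etc/pve/storage.cfg      # the storage definition in question
pveversion -v                 # exact package versions
qm config <vmid>              # the VM whose disk you are trying to reassign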

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox