Hi all,
I’m in the process of connecting Pure Storage and IBM storage to Proxmox using multipath.
I created a shared LVM on top of it and things are working, but I noticed a couple of behaviours I’m not 100% sure about.
Maybe someone can confirm if this is expected behaviour.
Setup:
- 3-node Proxmox cluster (9.0.6)
- 4x Pure Storage arrays (2 active clusters)
- 2x IBM arrays (HyperSwap cluster)
- Example: VM ID 100 on node1, VM ID 101 on node2
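For reference, the shared LVM storages are defined more or less like this in /etc/pve/storage.cfg (the storage IDs here are just placeholders, the VG names are the ones that show up in the errors below):

lvm: pure-vg01
        vgname Pure-storage-vg01
        content images,rootdir
        shared 1

lvm: pure-vg02
        vgname Pure-storage-vg02
        content images,rootdir
        shared 1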
Behaviour 1
When I run qm rescan on node1, I get errors for the VM disk that lives on node2:
failed to stat '/dev/Pure-storage-vg01/vm-101-disk-0.qcow2'
failed to stat '/dev/Pure-storage-vg02/vm-101-disk-0.qcow2'
When I do the same on node2, it shows the opposite (errors for the disk on node1):
failed to stat '/dev/Pure-storage-vg02/vm-100-disk-0.qcow2'
failed to stat '/dev/Pure-storage-vg01/vm-100-disk-0.qcow2'
My guess is that the volumes are only activated on the node that “owns” the VM (and only that node writes their metadata), so the device nodes simply aren't there on the other nodes, but I'm not 100% sure.
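If that guess is right, the LVs of VM 101 should be inactive on node1, and an inactive LV has no device node under /dev/<vg>/, which would explain the failed stat. Something like this should show it, assuming I'm reading the lv_attr flags correctly ('a' in the 5th character = active, 'o' in the 6th = open):

lvs -o vg_name,lv_name,lv_attr Pure-storage-vg01
  VG                 LV                   Attr
  Pure-storage-vg01  vm-100-disk-0.qcow2  -wi-ao----
  Pure-storage-vg01  vm-101-disk-0.qcow2  -wi-------

That is roughly what I would expect to see on node1 if only the owning node activates the volumes, but please correct me if that is not how the shared LVM plugin works.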
Behaviour 2
- I created a VM (ID 100).
- Moved its disks from VG01 → VG02 (same storage, different iSCSI volume).
- Sometimes I get this error afterwards: can't deactivate LV, volume deactivation failed
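My plan for the next time this happens is to check on each node whether the source LV is still active or even open somewhere, and deactivate it by hand; roughly like this, if I understand the LVM tooling correctly (LV name taken from my example above):

# run on each node: is the source LV still active ('a') or open ('o') in lv_attr?
lvs -o vg_name,lv_name,lv_attr Pure-storage-vg01

# if it is only active, deactivate it manually
lvchange -an Pure-storage-vg01/vm-100-disk-0.qcow2

# if it is still open, see what sits on top of the device (partition mappings, etc.)
lsblk /dev/Pure-storage-vg01/vm-100-disk-0.qcow2

Is that a sane way to clean it up, or is there a proper Proxmox way to handle it?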
Apart from that, it looks like a removed volume only has its reference deleted; the actual data stays on the array.
I tried enabling “wipe removed volumes”, but wiping is very slow (only ~10–15 MB/s).
The storage itself should not be the bottleneck.
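As far as I can tell, “wipe removed volumes” maps to the saferemove option in storage.cfg, and I assume the zeroing is throttled by saferemove_throughput (bytes per second, if I read the docs right). Would bumping it, roughly like this, be the right knob, or is something else limiting the wipe speed?

lvm: pure-vg01
        vgname Pure-storage-vg01
        shared 1
        saferemove 1
        saferemove_throughput 209715200

(209715200 would be 200 MiB/s, just as an example.)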
This is all new to me, so I hope you can help me make some sense of it all. Coming from VMFS, it's quite a learning curve.