Proxmox 9 with shared LVM on iSCSI problem

fant

Good morning,

I want to set up shared LVM storage on an iSCSI target so that I can shift VMs between Proxmox hosts. It might become a cluster later. Currently, we are running these tests to migrate from VMware ESXi to Proxmox 9.

For these tests, I used two Proxmox 9 servers, which I will call server 1 and server 2.

The iSCSI server runs Ubuntu 24.04.03; the iSCSI target is provided by tgt.

Config:

<target iqn.2026-01.iscsi1.com:lun1>
backing-store /dev/vdb
initiator-address 192.168.1.0/24
</target>
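
For reference, a quick way to verify on the Ubuntu side that the target is actually exported (a sketch, assuming the config above lives under /etc/tgt/conf.d/ and tgt is managed via systemd):

systemctl restart tgt                       # reload tgt so it picks up the target definition
tgtadm --lld iscsi --mode target --op show  # list exported targets and LUNs; /dev/vdb should appear as LUN 1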


On server 1: Added the iSCSI target using the GUI, Datacenter -> Storage -> Add -> iSCSI (the expected storage.cfg entry is sketched after this list):
  • ID: iscsi1-disk
  • Portal: IP of the iSCSI server above
  • Target: displayed with the name above: iqn.2026-01.iscsi1.com:lun1
  • Nodes: All (no restrictions) <- this is default
  • Enable: ticked <- this is default
  • Use LUNs directly: unticked
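For comparison, the expected result of this step in /etc/pve/storage.cfg on server 1 looks roughly like the following sketch (the portal IP is a placeholder; "content none" corresponds to "Use LUNs directly" being unticked):

iscsi: iscsi1-disk
        portal 192.168.1.10
        target iqn.2026-01.iscsi1.com:lun1
        content none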
On server 1 only: creation of the LVM layout on the new target in an SSH terminal (CLI); a quick sanity check is sketched after this list:
  • Check of availability: lsscsi shows the disk as /dev/sdc:
    [7:0:0:0] storage IET Controller 0001 -
    [7:0:0:1] disk IET VIRTUAL-DISK 0001 /dev/sdc
  • Creation of the LVM physical volume:
    pvcreate /dev/sdc
  • Creation of the LVM volume group:
    vgcreate iscsi1-lvm /dev/sdc
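A quick sanity check that the physical volume and volume group exist (sketch; the reported size should match /dev/vdb on the iSCSI server):

pvs /dev/sdc      # /dev/sdc should be listed as a PV belonging to VG iscsi1-lvm
vgs iscsi1-lvm    # the VG should be listed with the full size of the backing disk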
Import of the LVM volume group on server 1 using the GUI, Datacenter -> Storage -> Add -> LVM (the expected storage.cfg entry is sketched after this list):
  • ID: shared-storage-1
  • Base storage: Existing volume groups
  • Volume group: iscsi1-lvm
  • Content: Disk image, Container <- this is default
  • Nodes: All (no restrictions) <- this is default
  • Enable: ticked <- this is default
  • Shared: ticked
  • Wipe Removed Volumes: unticked <- this is default
  • Advanced: Allow Snapshots as Volume-Chain: ticked
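For comparison, the expected entry in /etc/pve/storage.cfg on server 1 should look roughly like the sketch below; the exact key name for the "Allow Snapshots as Volume-Chain" option is an assumption here and may differ, so compare against the actual file:

lvm: shared-storage-1
        vgname iscsi1-lvm
        content images,rootdir
        shared 1
        snapshot-as-volume-chain 1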

On server 2: Added the same iSCSI target using the GUI, Datacenter -> Storage -> Add -> iSCSI (a quick check on server 2 is sketched after this list):
  • ID: iscsi1-disk
  • Portal: IP of iSCSI server above
  • Target: displayed with the name above: iqn.2026-01.iscsi1.com:lun1
  • Nodes: All (no restrictions) <- this is default
  • Enable: ticked <- this is default
  • Use LUNs directly: unticked
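A quick check on server 2 that the LUN and the volume group created on server 1 are visible (sketch):

lsscsi          # the IET VIRTUAL-DISK should show up here as well, e.g. as /dev/sdc
vgscan          # rescan block devices for LVM volume groups
vgs             # the VG created on server 1 (iscsi1-lvm) should appear in the list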
Import of the LVM volume group on server 2 using the GUI, Datacenter -> Storage -> Add -> LVM (a further check is sketched after this list):
  • ID: shared-storage-1
  • Base storage: Existing volume groups
  • Volume group: iscsi-lvm
  • Content: Disk image, Container <- this is default
  • Nodes: All (no restrictions) <- this is default
  • Enable: ticked <- this is default
  • Shared: ticked
  • Wipe Removed Volumes: unticked <- this is default
  • Advanced: Allow Snapshots as Volume-Chain: ticked
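A further check from the server 2 shell that PVE can activate and query the storage (sketch; the storage ID is the one configured above):

pvesm status                    # shared-storage-1 should be listed as active
pvesm list shared-storage-1     # should list the volumes stored on the shared LVM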
Failure description:
  • Added test VM (VM ID: 60000) on server 1.
  • Copied its qcow2 disk to shared-storage-1 using the GUI, keeping the original disk on local storage.
  • Detached the disk copy from the VM and re-attached the disk on local storage.
  • Created a test VM on server 2 in the GUI.
  • Importing the disk from the shared storage is not possible, as the GUI does not show this storage.
  • The CLI command
    qm disk rescan
    shows the error message: failed to stat '/dev/iscsi1-lvm/vm-60000-disk-0.qcow2'
  • The disk cannot be added.
  • Created a new VM with the same VM ID 60000 on server 2.
  • Same failure.
  • Adding the disk manually to the config file via the CLI works: the disk is usable in the VM (see the sketch below).
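For completeness, an equivalent way to attach the existing volume from the CLI instead of editing the config file by hand would be something along these lines (the volume name is taken from the error message above; the SCSI slot is only an example):

qm set 60000 --scsi0 shared-storage-1:vm-60000-disk-0.qcow2    # attach the existing shared volume to VM 60000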
Can anybody comment on this? Have I configured something incorrectly, or is this a bug?

Thank you in advance.
 
I want to set up shared LVM storage on an iSCSI target so that I can shift VMs between Proxmox hosts. It might become a cluster later.
You cannot have Shared storage in a two-node, non-cluster installation. Shared storage, as defined by PVE terminology, is part of the cluster.

Everything you do after that is a road to data corruption. There are coordinated operations, particularly dealing with the LVM cache, that are controlled and executed by the PVE cluster. None of that happens in your "procedure".


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
@bbgeek17,

Thank you for your comment. Currently, we work with VMware ESXi without a cluster and we share all iSCSI targets with all nodes. This allows us to manually stop a VM on one host, transfer it manually to another, and run it there. We do not use the cluster functionality at all. VMware ESXi blocks access to virtual disks if they are attached to another VM.

This is what I am trying to achieve. What is the Proxmox way to get a similar setup?
 
Thank you for your comment. Currently, we work with VMware ESXi without a cluster and we share all iSCSI targets with all nodes. This allows us to manually stop a VM on one host, transfer it manually to another, and run it there. We do not use the cluster functionality at all. VMware ESXi blocks access to virtual disks if they are attached to another VM.

VMware enables this behavior because it has VMFS - a cluster-aware, shared filesystem. VMFS implements proprietary on-disk locking and reservation mechanisms that safely coordinate multi-initiator access to the same disk.

Proxmox does not include an equivalent clustered filesystem. LVM by itself is not cluster-safe. Additional coordination is required to avoid concurrent access and to maintain metadata integrity. In Proxmox, this coordination is enforced at the cluster layer, which "blocks access to virtual disks if they are connected to another VM."

ESXi and Proxmox manage storage and concurrency very differently. You should not try to "lift and shift" operational models from VMware into a Proxmox environment.
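
The usual path is to actually form a PVE cluster; /etc/pve (including storage.cfg) is then replicated to all nodes and access to shared LVM volumes is coordinated by the cluster stack. A minimal sketch, assuming both nodes can reach each other and the joining node does not hold any guests yet (cluster name and IP are placeholders):

# On server 1: create the cluster
pvecm create my-cluster
# On server 2: join the cluster created on server 1 (use server 1's IP)
pvecm add 192.168.1.11
# On either node: verify membership and quorum
pvecm status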


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 