Need help with FC LUN in cluster

angry_beaver

New Member
Apr 23, 2026
Hi, I have a 3-node Proxmox cluster and two network storage arrays, an HP 2050 and an HP 1040. I created a zone on the FC switch and presented the LUNs to all nodes, so each FC LUN shows up as a local disk on all 3 nodes. I added the LUNs as PVs, then created a VM (ID 105) on node_1 (with its disk on the LUN), started it, and then destroyed it. Then I created a new VM with ID 105 on node_2, and the new VM used the OLD disk image. The same thing happened on node 3: a new VM with the same ID used the OLD disk image.
How can I correctly add these 2 FC LUNs as storage for VM images?
Thanks.
 

Attachments

  • cluster-node3.jpg (62.3 KB)
Hi @angry_beaver ,
If I understood you correctly, you need to enable this attribute:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#:~:text=remote iSCSI server.-,saferemove,-Called "Wipe Removed
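For reference, "saferemove" is an option on the storage entry in /etc/pve/storage.cfg. A minimal sketch of what such an entry could look like (the storage and VG names here are placeholders, not taken from your setup):

```
# /etc/pve/storage.cfg -- illustrative LVM entry; names are placeholders
lvm: fc-lun-storage
        vgname fc-lun-vg
        content images
        shared 1
        saferemove 1
```

With "saferemove" set, PVE zeroes out an LV before removing it, so a later VM that gets a volume with the same name does not see the old data.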


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
No,
I enabled that option when I added the FC LUN in Datacenter, but when I destroy a VM and create a new VM with the same ID, the old disk is still used by the new VM.
I think I added the disks to the datacenter incorrectly....
I used these commands to add them:

pvcreate /dev/mapper/hp2050data
pvcreate /dev/mapper/hp1040data


vgcreate main-repo /dev/mapper/hp2050data
vgcreate addon-repo /dev/mapper/hp1040data
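For completeness, a sketch of how such VGs are typically registered as shared PVE storage after pvcreate/vgcreate (the pvesm flags below are assumptions from the Proxmox CLI, not necessarily what was done here):

```
# Register each VG as shared LVM storage (run once, on any node).
# "shared 1" tells PVE the VG is reachable from every node, so the
# cluster coordinates LV allocation instead of treating it as local.
pvesm add lvm main-repo  --vgname main-repo  --content images --shared 1 --saferemove 1
pvesm add lvm addon-repo --vgname addon-repo --content images --shared 1 --saferemove 1
```

If the storage entries already exist without the shared flag, "pvesm set <storage> --shared 1" should adjust them in place.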
 
I reproduced the problem by migrating a VM between nodes....
First I created a VM on node3 with Windows 10 as the OS, then destroyed that VM on node3 and created a new VM on node1 with a Linux OS.
I migrated the new VM (with Linux) from node1 to node3.
In the log I see:

2026-04-29 15:43:16 starting migration of VM 100 to node 'ictdlpx03' (172.16.10.253)
2026-04-29 15:43:16 found local disk 'addon-data:vm-100-disk-1.qcow2' (attached)
2026-04-29 15:43:16 starting VM 100 on remote node 'ictdlpx03'
2026-04-29 15:43:19 volume 'addon-data:vm-100-disk-1.qcow2' is 'addon-data:vm-100-disk-0.qcow2' on the target
2026-04-29 15:43:19 start remote tunnel
2026-04-29 15:43:20 ssh tunnel ver 1
2026-04-29 15:43:20 starting storage migration
2026-04-29 15:43:20 scsi0: start migration to nbd:unix:/run/qemu-server/100_nbd.migrate:exportname=drive-scsi0

mirror-scsi0: transferred 50.0 GiB of 50.0 GiB (100.00%) in 7m 45s, ready


What does this mean? Why does Proxmox create a new LV?


Then I tried to migrate from node3 to node2, but I got an error:

2026-04-29 15:54:39 found local disk 'addon-data:vm-100-disk-0.qcow2' (attached)
2026-04-29 15:54:39 copying local disk images
2026-04-29 15:54:39 ERROR: storage migration for 'addon-data:vm-100-disk-0.qcow2' to storage 'addon-data' failed - cannot migrate from storage type 'lvm' to 'lvm'
2026-04-29 15:54:39 aborting phase 1 - cleanup resources
2026-04-29 15:54:39 ERROR: migration aborted (duration 00:00:01): storage migration for 'addon-data:vm-100-disk-0.qcow2' to storage 'addon-data' failed - cannot migrate from storage type 'lvm' to 'lvm'
TASK ERROR: migration aborted


So I think I created the "shared" disk incorrectly....

How can I correctly add the FC LUN as storage for VM disk images, with snapshot support?

UPDATE:
on node 2:

--- Logical volume ---
LV Path /dev/addon-repo/vm-100-disk-0.qcow2
LV Name vm-100-disk-0.qcow2
VG Name addon-repo
LV UUID tMipyK-zgGe-Mwm6-LFTZ-V7gV-J7sN-rHW6We
LV Write Access read/write
LV Creation host, time ictdlpx03, 2026-04-29 15:43:17 +0200
LV Status NOT available
LV Size 50.01 GiB
Current LE 12803
Segments 1
Allocation inherit
Read ahead sectors auto

on node 1, lvdisplay:
--- Logical volume ---
LV Path /dev/addon-repo/vm-100-disk-0.qcow2
LV Name vm-100-disk-0.qcow2
VG Name addon-repo
LV UUID tMipyK-zgGe-Mwm6-LFTZ-V7gV-J7sN-rHW6We
LV Write Access read/write
LV Creation host, time ictdlpx03, 2026-04-29 15:43:17 +0200
LV Status NOT available
LV Size 50.01 GiB
Current LE 12803
Segments 1
Allocation inherit
Read ahead sectors auto

on node 3:

--- Logical volume ---
LV Path /dev/addon-repo/vm-100-disk-0.qcow2
LV Name vm-100-disk-0.qcow2
VG Name addon-repo
LV UUID tMipyK-zgGe-Mwm6-LFTZ-V7gV-J7sN-rHW6We
LV Write Access read/write
LV Creation host, time ictdlpx03, 2026-04-29 15:43:17 +0200
LV Status available
# open 0
LV Size 50.01 GiB
Current LE 12803
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 252:10
 
You did not configure your system properly. As you noted, you are not using the shared-storage functionality.
There is no value in continuing to troubleshoot beyond this discovery. Your best course of action is to:
  • remove all VMs
  • remove all storage entries
  • zap/wipe the disks
  • configure everything from scratch. Although this guide is iSCSI-oriented, the FC concepts are very similar:
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
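A rough sketch of that cleanup, using the device and VG names from earlier in this thread (destructive; double-check every name against your own setup before running anything like this):

```
# Destructive cleanup sketch: storage entries first, then LVM metadata.
# Only after every VM using these LUNs has been removed.
pvesm remove main-repo
pvesm remove addon-repo
vgremove -y main-repo
vgremove -y addon-repo
pvremove /dev/mapper/hp2050data
pvremove /dev/mapper/hp1040data
# Then re-create the PVs/VGs per the linked guide and add the storage
# back with the shared flag set, e.g.:
#   pvesm add lvm main-repo --vgname main-repo --content images --shared 1
```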

Cheers

