Two-node active cluster with shared storage over FC

aflocco

New Member
Jan 13, 2026
hi,

I'm trying to set up a cluster consisting of two nodes and shared storage, connected via FC to the two nodes.
The two nodes need to be able to access and write to the storage and allow live migration.
(Obviously, switching from VMware to Proxmox)
I can configure everything except live migration; the datastore is set up as LVM.

The storage in question is a Dell Unity.

What's the best sequence, or is there a guide for such a configuration?
 
To add to @bbgeek17
check out the multipathing guide that also covers the finalization with a shared LVM on top: https://pve.proxmox.com/wiki/Multipath
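For a Dell Unity this typically boils down to a small /etc/multipath.conf. A minimal sketch, following the structure in the wiki above (the WWID here is the one that appears later in this thread; on your own system take it from `/lib/udev/scsi_id -g -u -d /dev/sdX`):

```
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
blacklist {
    wwid .*
}
blacklist_exceptions {
    # WWID of the Unity LUN presented to both nodes
    wwid "36006016022f24400944f84694e3d2df3"
}
```

After editing, restart multipathd and check the result with `multipath -ll`.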

And with just 2 nodes, you need to add a 3rd vote to the cluster: otherwise, if you lose or shut down one node, the remaining node only has 50% of the votes, which is not a majority, so the cluster loses quorum.

You don't need a full 3rd Proxmox VE node, but can utilize the QDevice mechanism where the external part can run on some other (small) machine. https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_corosync_external_vote_support
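The QDevice setup itself is only a few commands. A rough sketch, assuming a Debian-based machine as the external vote holder (replace <QDEVICE-IP> with its address):

```
# On the external (small) machine:
apt install corosync-qnetd

# On both Proxmox VE nodes:
apt install corosync-qdevice

# On one of the cluster nodes:
pvecm qdevice setup <QDEVICE-IP>

# Verify: "Qdevice" should now show up with a vote
pvecm status
```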
 
Yes, I was about to reply to @aaron. I'm aware of the quorum requirement and that I need at least a third vote; I would have set that up. My question was whether this is possible at all, since I can't get live migration and shared storage access working. If Proxmox can actually do this type of configuration, then the problem lies in the multipath configuration of the two nodes and in sharing the storage at the Linux level.
 
Well, live migration should always work. With non-shared storage it will also transfer the disks of the guests, and that can take a long time.

So if you followed the multipath guide and still have issues, the question is: what exactly are you running into? Any errors or logs?

And please share some details for example the contents of your storage configuration:
cat /etc/pve/storage.cfg

and paste the output of that command within CODE tags, or use the formatting buttons at the top of the editor :)
 
This is my storage config:
root@pve01:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,import,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

lvmthin: Datasore1
        thinpool poolunity
        vgname vgunity
        content rootdir,images


root@pve01:~# multipath -ll
mpatha (36006016022f24400944f84694e3d2df3) dm-5 DGC,VRAID
size=6.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 15:0:0:0 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 16:0:0:0 sdc 8:32 active ready running


root@pve01:~# lvs
LV            VG      Attr       LSize    Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
data          pve     twi-a-tz-- <170.92g                  0.00   0.91
root          pve     -wi-ao----   79.46g
swap          pve     -wi-ao----    8.00g
poolunity     vgunity twi-aotz--   <5.86t                  0.00   10.42
vm-100-disk-0 vgunity Vwi-aotz--   32.00g  poolunity        0.00


Error on node2 when I try to open Datasore1:
"activating LV 'vgunity/poolunity' failed: device-mapper: create ioctl on vgunity-poolunity LVM-cLNKz9CpeXxAAarBndtxr918B1sbWXN8v1t7s40BW2CPFeKnThDyyIGrWSkmCKhs-pool failed: Device or resource busy (500)"

I can't set the pool to Shared, either in the GUI or via the CLI.
 
lvmthin: Datasore1
I think this is the reason! A shared LVM cannot be thin-provisioned; it must be a regular/thick LVM!

In a thin LVM you can only have one writer -> local host only.

If you need snapshots, you can enable the new "Snapshot as a Volume Chain" option when you add the storage under Datacenter -> Storage.
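Converting the existing thin setup into a shared thick LVM would roughly look like the following. This is only a sketch based on the names shown earlier in the thread (mpatha, vgunity); note that removing the thin pool destroys the existing vm-100-disk-0, so move or back up that guest first:

```
# On one node only -- /etc/pve/storage.cfg is cluster-wide.
# Remove the thin-pool storage definition:
pvesm remove Datasore1

# Remove the thin volumes and the pool (destroys vm-100-disk-0!):
lvremove vgunity/vm-100-disk-0 vgunity/poolunity

# Re-add the volume group as a shared thick LVM storage:
pvesm add lvm Datastore1 --vgname vgunity --content images,rootdir --shared 1
```

With `shared 1`, both nodes see the same volume group and only activate a guest's logical volume on the node where that guest runs.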
 
OK, yes, with classic LVM everything works except live migration.

root@pve01:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,import,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

lvm: Datastore1
        vgname vgunity
        content rootdir,images
        saferemove 0
        shared 1
        snapshot-as-volume-chain 1
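With `shared 1` on a thick LVM over the multipath device, an online migration between the two nodes should only transfer RAM, not the disk. A quick test from the CLI (the target node name here is an assumption; use your second node's name):

```
# Live-migrate VM 100 to the other node
qm migrate 100 pve02 --online
```

If that still fails, please post the full migration task log so we can see where it stops.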