Free StarWind x Proxmox SAN storage plugin

alma21

Member
May 4, 2024
Hi,

I have installed and tested the free/open-source StarWind Proxmox SAN storage plugin on a small test cluster (PVE 8) with shared block storage, and it works really well: thin provisioning, snapshots, thin-disk auto-extension, live migration, linked clones. In the end it is a cluster-aware (shared) LVM-thin implementation, and it is SAN vendor neutral.

I'm just wondering why I haven't found more info/feedback about it. It would also be a good candidate/alternative for direct PVE integration.

Is anyone using this in a production environment? Any experience?

https://www.starwindsoftware.com/re...-proxmox-san-integration-configuration-guide/
 
I have been using it in a test environment with 3 PVE nodes as a replacement for LVM, over an iSCSI multipath connection to 2 Compellent data stores that present multiple highly available live volumes.
It has proven very reliable and has all the features missing from LVM when used on top of iSCSI.
 
Why not try out Ceph? StarWind is not open source; it is free for 3-node deployments. I am using StarWind with ESXi, as it is certified by Broadcom, and it is working great. But it is somewhat limited in functionality, and it uses iSCSI, which is not the fastest protocol, especially if the underlying hardware is NVMe.
 
I think this is a misunderstanding: the plugin I mentioned can be used with any iSCSI/FC SAN. It is an open-source shared/clustered LVM-thin Proxmox storage plugin developed by StarWind.
 
Hi @itsnota2ma, here is my multipath.conf:

Code:
defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        uid_attribute           ID_SERIAL
        rr_min_io               100
        failback                immediate
        prio                    iet
        prio_args               preferredip=x.x.x.x
        no_path_retry           queue
        user_friendly_names     yes
}

blacklist {
        devnode "^sd[a-b]"
}

Restart multipathd.
Run multipath -v3 to get the WWID(s) and then add it/them to the whitelist.
In my case, because I had 2 live volumes, I ran:
multipath -a 36000d31004066a000000000000000032
multipath -a 36000d31004066a000000000000000035
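
The steps above can be sketched as one sequence. The WWIDs are the example values from this thread; substitute your own, and note that `multipath -r` as a final reload is my addition, not something the original post mentions:

```shell
# Apply the new /etc/multipath.conf
systemctl restart multipathd

# Verbose run: look for "wwid" lines to find your device WWIDs
multipath -v3 | grep -i wwid

# Whitelist each live volume by WWID (recorded in /etc/multipath/wwids)
multipath -a 36000d31004066a000000000000000032
multipath -a 36000d31004066a000000000000000035

# Re-scan and verify the resulting multipath maps
multipath -r
multipath -ll
```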

root@pve01:~# multipath -ll
mpatha (36000d31004066a000000000000000032) dm-7 COMPELNT,Compellent Vol
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 14:0:0:1 sde 8:64 active ready running
| `- 17:0:0:1 sdg 8:96 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 16:0:0:1 sdh 8:112 active ready running
`- 18:0:0:1 sdi 8:128 active ready running
mpathb (36000d31004066a000000000000000035) dm-6 COMPELNT,Compellent Vol
size=5.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 13:0:0:2 sdc 8:32 active ready running
| `- 15:0:0:2 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 12:0:0:2 sdd 8:48 active ready running
`- 19:0:0:2 sdj 8:144 active ready running
 
@mihsu81 Thank you!

When I run multipath -ll this is what I get.

36000d31004506c000000000000000003 dm-0 COMPELNT,Compellent Vol
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 12:0:0:1 sda 8:0 active ready running
36000d31004506c000000000000000004 dm-5 COMPELNT,Compellent Vol
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 14:0:0:2 sde 8:64 active ready running
36000d31004506c000000000000000005 dm-1 COMPELNT,Compellent Vol
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 12:0:0:3 sdb 8:16 active ready running
36000d31004506c000000000000000006 dm-7 COMPELNT,Compellent Vol
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 14:0:0:4 sdf 8:80 active ready running
36000d31004506c000000000000000007 dm-10 COMPELNT,Compellent Vol
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 14:0:0:5 sdg 8:96 active ready running
36000d31004506c000000000000000008 dm-2 COMPELNT,Compellent Vol
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 12:0:0:6 sdc 8:32 active ready running
36000d31004506c000000000000000009 dm-4 COMPELNT,Compellent Vol
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 12:0:0:7 sdd 8:48 active ready running

If I add a multipath.config file to /etc/ and restart multipathd, I get nothing.

My /etc/multipath/wwids file looks like this:

# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/500D31004506C08/
/500D31004506C05/
/500D31004506C1C/
/500D31004506C19/
/36000d31004506c000000000000000003/
/36000d31004506c000000000000000004/
/36000d31004506c000000000000000005/
/36000d31004506c000000000000000006/
/36000d31004506c000000000000000007/
/36000d31004506c000000000000000008/
/36000d31004506c000000000000000009/

What am I doing wrong??
 

I would say you only have one path.

If this is iSCSI, you need to adjust /etc/iscsi/iscsid.conf, specifically this setting:

# For multipath configurations, you may want more than one session to be
# created on each iface record. If node.session.nr_sessions is greater
# than 1, performing a 'login' for that node will ensure that the
# appropriate number of sessions is created.
node.session.nr_sessions = 2

You will need to log out of iSCSI, clean up /etc/iscsi/send_targets and /etc/iscsi/nodes, then re-discover and log in to the targets. At that point you should have 2 sessions.
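
The logout/cleanup/re-login steps above can be sketched as follows. The portal IP is a placeholder, and deleting the cached records with `rm -rf` is one way to do the cleanup the post describes; adjust for your environment:

```shell
# 1. Log out of all current iSCSI sessions
iscsiadm -m node --logoutall=all

# 2. Remove the cached discovery and node records
rm -rf /etc/iscsi/send_targets /etc/iscsi/nodes

# 3. Make sure iscsid.conf requests two sessions per iface record:
#      node.session.nr_sessions = 2

# 4. Re-discover the targets (placeholder portal IP) and log back in
iscsiadm -m discovery -t sendtargets -p 192.168.1.100
iscsiadm -m node --loginall=all

# 5. Verify: each target should now show two sessions
iscsiadm -m session
```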
 
If this is the case, why doesn't Proxmox have this in the instructions? Or are you talking about these settings on the SAN?
 

I don't pay too much attention to the documentation, to be honest.

The other option is to have multiple target IPs and log into all of them to get multiple paths.

This is all just basic Linux storage knowledge, so to speak.
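
The multiple-target-IP approach mentioned above can be sketched like this. Both portal IPs are placeholders for your SAN's controller interfaces:

```shell
# Discover targets through each portal (one per controller/fabric)
iscsiadm -m discovery -t sendtargets -p 10.0.10.10
iscsiadm -m discovery -t sendtargets -p 10.0.20.10

# Log in to every discovered node record
iscsiadm -m node --loginall=all

# Each LUN should now appear once per portal; multipathd merges them
multipath -ll
```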
 
Well, I thought I had that covered, but it does not seem to be working that way. I actually have 4 connections to the SAN, two per controller. Please understand I am going from the VMware GUI to lots of command line and 50 different ways of doing things.

 
This sounds interesting. Can you use thick LVM and snapshots?

I configured iSCSI to a Compellent array months ago. Performance was excellent; my main gripe was losing snapshots.
 
@itsnota2ma, in DSM I've created a new server cluster for which I specified "Other Multipath" as the OS.

I also recommend PegaProx as a vCenter alternative. While it is still in beta and doesn't come from Proxmox, it has a much better UI than PDM and offers more options, including setting up multipath (as I discovered this morning :D).
 
The setting on the SAN was already correct; I will try out PegaProx. Thanks for the tip!
The irony is that I will have to set it up in VMware, because I cannot see images on the SAN yet; I haven't gotten that far.
 