Free StarWind x Proxmox SAN storage plugin

alma21

Member
May 4, 2024
Hi,

I have installed and tested the free / open-source StarWind Proxmox SAN storage plugin on a small test cluster (PVE 8) with shared block storage, and it works really well: thin provisioning, snapshots, thin-disk auto-extension, live migration, linked clones. In the end it is a cluster-aware (shared) LVM-thin implementation, SAN vendor neutral.
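For context: under the hood it manages a standard LVM thin pool sitting on the shared LUN. A minimal sketch of that preparation, assuming the multipathed LUN shows up as /dev/mapper/mpatha (device, VG and pool names are placeholders; the exact steps are in StarWind's guide):

Code:
# Put LVM on the shared multipath device
pvcreate /dev/mapper/mpatha
vgcreate san-vg /dev/mapper/mpatha

# Carve out a thin pool; thin provisioning, snapshots and
# linked clones all come out of this pool
lvcreate --type thin-pool -L 9T -n san-thinpool san-vg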

I am just wondering why I have not found more info/feedback about it. It would possibly also be a good candidate/alternative for direct PVE integration.

Is anyone using this in a production environment? Any experience?

https://www.starwindsoftware.com/re...-proxmox-san-integration-configuration-guide/
 
I have been using it in a test environment with 3 PVE nodes as a replacement for LVM, over an iSCSI multipath connection to 2 Compellent data stores that present multiple Highly Available live volumes.
It has proven very reliable and has all the features missing from LVM when used on top of iSCSI.
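For anyone reproducing this: the iSCSI sessions themselves are established with the stock open-iscsi tooling before multipath sees anything. Roughly this, per node (the portal IPs are placeholders for your Compellent fault domains):

Code:
# Discover and log in to the targets on both portals
iscsiadm -m discovery -t sendtargets -p 192.168.10.10
iscsiadm -m discovery -t sendtargets -p 192.168.20.10
iscsiadm -m node --login

# Reconnect automatically after a reboot
iscsiadm -m node --op update -n node.startup -v automatic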
 
Why not try out Ceph? StarWind is not open source; it is free for a 3-node deployment. I am using StarWind with ESXi, as it is certified by Broadcom, and it is working great. But it is somewhat limited in functionality, and it is iSCSI, which is not the fastest protocol, especially if the underlying hardware is NVMe.
 
I think this is a misunderstanding: the plugin I mentioned can be used with any iSCSI/FC SAN. It is an open-source shared/clustered LVM-thin Proxmox storage plugin developed by StarWind.
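For orientation, it plugs into /etc/pve/storage.cfg like any other Proxmox storage type. The entry below is a hypothetical sketch only; the type name and option names are placeholders, not the plugin's actual syntax (its README has the real one):

Code:
# /etc/pve/storage.cfg -- hypothetical entry, placeholder names
starwindsan: shared-san
        vgname san-vg
        thinpool san-thinpool
        content images,rootdir
        shared 1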
 
Hi @itsnota2ma, here is my multipath.conf:

Code:
defaults {
        polling_interval        2                      # check path state every 2s
        path_selector           "round-robin 0"        # spread I/O across the paths in a group
        path_grouping_policy    multibus
        uid_attribute           ID_SERIAL
        rr_min_io               100                    # I/Os sent down a path before switching
        failback                immediate              # return to the preferred group as soon as it recovers
        prio                    iet
        prio_args               preferredip=x.x.x.x
        no_path_retry           queue                  # queue I/O instead of erroring out when all paths drop
        user_friendly_names     yes                    # mpathN aliases instead of raw WWIDs
}

blacklist {
        devnode "^sd[a-b]"                             # keep multipath off the local system disks
}
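One thing worth spelling out about this setup: the blacklist keeps multipath away from the local sda/sdb system disks, and on recent multipath-tools (where find_multipaths defaults to strict, if I am not mistaken) a LUN only gets a multipath map once its WWID is in /etc/multipath/wwids, which is exactly what the multipath -a calls below take care of.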

Restart multipathd.
Run multipath -v3 to see the WWIDs, then add yours to the whitelist.
In my case, because I had 2 live volumes, I ran:
multipath -a 36000d31004066a000000000000000032
multipath -a 36000d31004066a000000000000000035

root@pve01:~# multipath -ll
mpatha (36000d31004066a000000000000000032) dm-7 COMPELNT,Compellent Vol
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 14:0:0:1 sde 8:64 active ready running
| `- 17:0:0:1 sdg 8:96 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 16:0:0:1 sdh 8:112 active ready running
`- 18:0:0:1 sdi 8:128 active ready running
mpathb (36000d31004066a000000000000000035) dm-6 COMPELNT,Compellent Vol
size=5.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 13:0:0:2 sdc 8:32 active ready running
| `- 15:0:0:2 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 12:0:0:2 sdd 8:48 active ready running
`- 19:0:0:2 sdj 8:144 active ready running
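Reading that output: the prio=50 group holds the paths to the Compellent controller that currently owns the volume (ALUA active/optimized) and carries the round-robin I/O, while the prio=10 group is the standby set that only takes over if the active group fails; failback immediate moves I/O back as soon as the preferred paths return.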
 
@mihsu81 Thank you!

When I run multipath -ll, this is what I get:

36000d31004506c000000000000000003 dm-0 COMPELNT,Compellent Vol
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 12:0:0:1 sda 8:0 active ready running
36000d31004506c000000000000000004 dm-5 COMPELNT,Compellent Vol
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 14:0:0:2 sde 8:64 active ready running
36000d31004506c000000000000000005 dm-1 COMPELNT,Compellent Vol
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 12:0:0:3 sdb 8:16 active ready running
36000d31004506c000000000000000006 dm-7 COMPELNT,Compellent Vol
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 14:0:0:4 sdf 8:80 active ready running
36000d31004506c000000000000000007 dm-10 COMPELNT,Compellent Vol
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 14:0:0:5 sdg 8:96 active ready running
36000d31004506c000000000000000008 dm-2 COMPELNT,Compellent Vol
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 12:0:0:6 sdc 8:32 active ready running
36000d31004506c000000000000000009 dm-4 COMPELNT,Compellent Vol
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
`- 12:0:0:7 sdd 8:48 active ready running

If I add a multipath.config file to /etc/ and restart multipathd, I get nothing.

My wwids file (/etc/multipath/wwids) looks like this:

# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/500D31004506C08/
/500D31004506C05/
/500D31004506C1C/
/500D31004506C19/
/36000d31004506c000000000000000003/
/36000d31004506c000000000000000004/
/36000d31004506c000000000000000005/
/36000d31004506c000000000000000006/
/36000d31004506c000000000000000007/
/36000d31004506c000000000000000008/
/36000d31004506c000000000000000009/

What am I doing wrong??