PureStorage FlashArray + Proxmox VE + Multipath

The plugin communicates with the Pure array via its API, using the array's management interface.

It then uses iscsiadm and multipath to add/remove device mappings. For that to work, iSCSI and multipathing must be properly configured and the array must be discoverable on the PVE host(s).

I would start by attaching some Pure volume to PVE via Datacenter -> Storage.

If it works fine, just confirm that node.startup is set to automatic for the array in /etc/iscsi/nodes and that "multipath -ll" produces meaningful output.
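For example, a quick check could look like this (paths and target names will of course differ on your setup):
Code:
# confirm the Pure target(s) are set to log in automatically at boot
grep -r "node.startup" /etc/iscsi/nodes/
# expected: node.startup = automatic for each Pure portal entry

# confirm multipath sees the Pure LUNs with multiple active paths
multipath -ll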

Note: you do not need the iSCSI storage entry afterwards. The plugin will handle things automatically.
 
Ahh okay, I had to add the WWID to the blacklist exception.


But I still get the following error when moving a disk:
Code:
create full clone of drive scsi0 (purenfs:111/vm-111-disk-1.qcow2)
Info :: Volume "proxmox/vm-111-disk-2" created (serial=96622A35A07E4A330001249C).
Info :: Volume "proxmox/vm-111-disk-2" is added to host "der-pve3-pve".
Info :: Volume "proxmox/vm-111-disk-2" is removed from host "der-pve3-pve".
Info :: Volume "proxmox/vm-111-disk-2" deactivated.
Info :: Volume "proxmox/vm-111-disk-2" destroyed.
TASK ERROR: storage migration failed: Error :: Failed to run 'multipath -a 3624a937096622a35a07e4a330001249c'. Error :: command '/sbin/multipath -a 3624a937096622a35a07e4a330001249c' failed: exit code 1
Could you provide your multipath.conf file?

Please check that you have a "find_multipaths no" line in it. You do not need to add exceptions for each volume.
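For example, you can double-check the value the daemon actually uses (exact output formatting varies between multipath-tools versions):
Code:
multipathd show config | grep find_multipaths
# should report find_multipaths "no"; after editing /etc/multipath.conf,
# reload the daemon so the change takes effect
systemctl reload multipathd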
 

Yup, looks like I can attach the LUN to a VM no problem. Verified in a Windows VM that the disk is accessible as well.



Code:
blacklist {
        wwid .*
}

blacklist_exceptions {
        wwid "3624a937096622a35a07e4a3300012020"  ### This is from the old direct iSCSI target
        device {
                vendor "PURE"
        }
}

defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        uid_attribute           ID_SERIAL
        rr_min_io               100
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
        find_multipaths no
}

devices {
  device {
    vendor               "PURE"
    product              "FlashArray"
    path_selector        "queue-length 0"
    hardware_handler     "1 alua"
    path_grouping_policy group_by_prio
    prio                 alua
    failback             immediate
    path_checker         tur
    fast_io_fail_tmo     10
    user_friendly_names  no
    no_path_retry        0
    features             "0"
    dev_loss_tmo         60
    recheck_wwid         yes
  }
}
 
Code:
wwid .*
in the blacklist is causing this. Is it possible to eliminate it and, if needed, blacklist by device/etc.?

Or add an exception with a wildcard like
Code:
wwid "624a9370.*"
to the blacklist_exceptions section.
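Putting the two suggestions together, the relevant part of multipath.conf would look roughly like this (just a sketch; 624a9370 is the Pure prefix visible in the WWIDs above):
Code:
blacklist {
        wwid .*
}

blacklist_exceptions {
        wwid "624a9370.*"
        device {
                vendor "PURE"
        }
}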
 
The transfer rate seems to be low. Is your iSCSI on a separate interface?

I am easily getting ~1.5GB/sec on a 25Gbit/s adapter.
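If in doubt, you can check which portal and interface the active sessions actually use (the addresses shown will be your own):
Code:
iscsiadm -m session -P 1
# "Current Portal" and "Iface Name" should point at the dedicated
# iSCSI network, not the management interface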
 

It's actually a little better than that.

This is **cough** the production Pure, so it's actually sharing resources with an ESXi cluster.
 
Also, just for extra info, no issues with Veeam either. Not that I was expecting any, but it's a nice FYI.
 
Sorry, not trying to spam this thread, I'm just very excited about how well this works.

Storage-level snapshots are working perfectly.
 
So, here's a question: how does the plugin determine what the portal is for the iSCSI target? I'm half wondering if it's trying to use the management interface and not the dedicated 25GbE network set up for iSCSI.

The original plugin relies on the iSCSI configuration under the hood.

The plugin relies on the iSCSI configuration (manual config) and on API calls through the management interface for disk operations (delete/clone/resize ...).
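So the data path follows whatever you configured with open-iscsi by hand; only disk management operations go over the management interface. A sketch of pointing discovery at the dedicated data portal (10.0.0.10 is a placeholder for your 25GbE iSCSI subnet):
Code:
# discover and log in via the iSCSI data portal, not the management IP
iscsiadm -m discovery -t sendtargets -p 10.0.0.10
iscsiadm -m node -p 10.0.0.10 --login
iscsiadm -m node -p 10.0.0.10 --op update -n node.startup -v automatic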


@timansky Any reason you're not looking into NVMe over Fabrics? https://wiki.archlinux.org/title/NVMe_over_Fabrics

And are you in contact with anyone from Pure Storage?
Currently I do not have hardware to use NVMe over Fabrics.
Yes, but officially they do not support Proxmox.
Why don't you guys use GitHub issues for this chat?

Please let's move all issue discussions to one place (github.com).
Please also have a look here: https://github.com/dpetrov67/pve-purestorage-plugin

This is a fork of the original repository referenced above. The fixes-2 branch includes multiple fixes, but none of them have made it into the original repository yet.

If anyone has a Pure Storage FlashArray and is willing to test the plugin, I would appreciate any feedback.
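For anyone who wants to try it, a rough sketch of installing the plugin module manually (the destination path comes from the trace earlier in the thread; the file location inside the repo is an assumption, so check the repo's README):
Code:
# fetch the fixes-2 branch of the fork
git clone -b fixes-2 https://github.com/dpetrov67/pve-purestorage-plugin.git
# copy the plugin module into the custom storage plugin directory
# (source path inside the repo may differ -- see the README)
cp pve-purestorage-plugin/PureStoragePlugin.pm /usr/share/perl5/PVE/Storage/Custom/
# restart the PVE services so the plugin is loaded
systemctl restart pvedaemon pveproxy pvestatd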
Thanks for the tests and fixes!
 
I have further installed and tested the amulet1 release of the storage plugin:

I'm getting a weird error when starting VMs because the suffix "-pve" is appended even though it is not set in storage.cfg.

I'll open an issue in the forked GitHub ... EDIT: when I'm at home with my old phone for 2FA :-/
EDIT2: https://github.com/amulet1/pve-purestorage-plugin/issues/5

Code:
TASK ERROR: Error :: PureStorage API :: Failed to modify connection. => Trace: ==> Code: 400 ==> Message: {"errors" => [{"message" => "Host does not exist.","context" => "pve01-pve"}]} at /usr/share/perl5/PVE/Storage/Custom/PureStoragePlugin.pm line 380.

 

Mine did that until I added -pve to the end of the entry listed on the Hosts page on the Pure array.
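In other words, the host object on the array apparently has to be named <nodename>-pve and hold that node's initiator IQN. You can read the IQN on the PVE node like this (example output is illustrative):
Code:
cat /etc/iscsi/initiatorname.iscsi
# e.g. InitiatorName=iqn.1993-08.org.debian:01:abcdef123456
# this IQN must be attached to the "pve01-pve" host entry on the array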
 
One further issue found when migrating storage from the old SAN to Pure via the plugin,
on a host that has never connected to Pure via iSCSI ...

Code:
create full clone of drive scsi0 (TRUENAS-FAST-Pool2:122/vm-122-disk-1.qcow2)
Info :: Volume "ka-rz1-pve-cluster/vm-122-disk-0" created (serial=ACBF05BB9DD74C2700011417).
Info :: Volume "ka-rz1-pve-cluster/vm-122-disk-0" is added to host "pve03".
wwid '3624a9370acbf05bb9dd74c2700011417' added
iscsiadm: No session found.
Info :: Volume "ka-rz1-pve-cluster/vm-122-disk-0" is removed from host "pve03".
Info :: Volume "ka-rz1-pve-cluster/vm-122-disk-0" deactivated.
Info :: Volume "ka-rz1-pve-cluster/vm-122-disk-0" destroyed.
TASK ERROR: storage migration failed: Error :: Failed to run 'iscsiadm --node session --rescan' command. Error :: command '/usr/bin/iscsiadm --mode session --rescan' failed: exit code 21

It can be fixed if you simply add the iSCSI storage backend in the datacenter config for all hosts... wtf?!
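Adding the iSCSI storage entry presumably just makes every node log in to the target. The same thing can be done by hand on the node that reports "No session found." (portal IP is a placeholder):
Code:
iscsiadm -m discovery -t sendtargets -p 10.0.0.10
iscsiadm -m node --login
iscsiadm -m node --op update -n node.startup -v automatic
# verify a session now exists
iscsiadm -m session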
 
