PureStorage FlashArray + Proxmox VE + Multipath

timansky

Hi Proxmox enthusiasts!

I faced an issue with iSCSI fault tolerance in Proxmox. The current version of Proxmox lacks native support for iSCSI with multipath, which is critical for maintaining infrastructure reliability. To address this, I decided to create a custom plugin, which is now available for everyone on GitHub.

We use PureStorage FlashArray FA-X20R3 (datasheet) and Proxmox VE v8. This plugin has already proven helpful in solving many storage management and integration tasks.

What My Plugin Can Do:
• Live migration – seamless and interruption-free.
• Snapshot creation and copying – convenient for backups and rollbacks.
• Creating VM disks as separate volumes – this unlocks the powerful features of PureStorage, such as automatic snapshot and backup policies.

Why It Matters:
By creating disks as separate volumes, you can leverage the benefits of PureStorage to ensure data reliability and fault tolerance. For example:
• Automating backups with Pure’s built-in features.
• Flexible management of storage policies.
• Supporting multipath connections, significantly improving system performance and resilience.
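For illustration only, adding such a storage from the command line might look roughly like the sketch below. The storage type and option names here are hypothetical placeholders, so please follow the repository README for the real syntax.

Code:
# hypothetical example - the actual type name and options are defined by the plugin
pvesm add purestorage pure-iscsi --address 192.0.2.10 --token <API token> --content images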

Technical Details:
• Storage: PureStorage FlashArray FA-X20R3 (FW 6.5.8).
• Tested Proxmox VE version: 8.2.*.

Where to Find and How to Install:
The entire plugin code is available in an open repository on GitHub. You can also find detailed installation and setup instructions there.
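For reference, installing a custom storage plugin on a PVE node usually boils down to copying the module into the PVE::Storage::Custom namespace and restarting the PVE services. A minimal sketch, assuming a placeholder module file name (please follow the README for the exact steps):

Code:
# repeat on every node in the cluster; the .pm file name is a placeholder
cp PureStoragePlugin.pm /usr/share/perl5/PVE/Storage/Custom/
systemctl restart pvedaemon pveproxy pvestatd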

What’s Next:
I am open to feedback and suggestions for improving the plugin. If you have ideas, feature requests, or bug reports, feel free to contact me through GitHub or leave a comment on the project.


I hope this plugin will be useful for you as well!
 
Great news, especially considering that Pure doesn't currently provide a native plugin. Maybe they will be more open to doing something about it now that there is already a base?
 
The currently documented implementation involves using LVM on top of multipath.

- LVM has overhead
- Shared LVM does not support snapshots
- There are performance issues with large installations (many disks; we currently have over 2k)
- LVM-Thin migrates data over the network, which is a slow and expensive operation
- LVM does not allow the use of native Pure features (bandwidth limits, snapshots, replication, ...)
 
The currently documented implementation involves using LVM on top of multipath.

- LVM has overhead
- Shared LVM does not support snapshots
- There are performance issues with large installations (many disks; we currently have over 2k)
- LVM-Thin migrates data over the network, which is a slow and expensive operation
- LVM does not allow the use of native Pure features (bandwidth limits, snapshots, replication, ...)

I didn't want to imply that your plugin doesn't make (more) sense for your setup / use case. I just wanted to point out that using multipath is definitely already possible without it, with other storage appliances / iSCSI targets.
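For context, the documented route is to create an LVM volume group on top of the multipath device and add it as shared LVM storage, roughly like this (device and volume group names are just examples):

Code:
# the iSCSI LUN shows up as a multipath device after iSCSI login and multipath setup
pvcreate /dev/mapper/mpatha
vgcreate vg_pure /dev/mapper/mpatha
pvesm add lvm pure-lvm --vgname vg_pure --shared 1 --content images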
 
The currently documented implementation involves using LVM on top of multipath.
Thank you for publishing this! I will take some time to read through your code, but if you will indulge me:
1. I gather you are not using LVM. Are you presenting LUNs directly as VM disks? If so, how are you integrating creation/modification/deletion from PVE, or does that require manual intervention?
2. I assume you have a qm-freeze/qm-thaw integration for snapshots (that at least should be pretty straightforward to do).
3. Can you expand a bit on "There are performance issues with large installations (many disks; we currently have over 2k)"?
 
I didn't want to imply that your plugin doesn't make (more) sense for your setup / use case. I just wanted to point out that using multipath is definitely already possible without it, with other storage appliances / iSCSI targets.
Note that the ZFS over iSCSI plugin has no multipath support, because it uses libiscsi from QEMU. I remember that it was painful to manage the kernel iSCSI rescans and multipath dm-mapper changes when you roll back or delete a volume. It seems that it is handled correctly here :)
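For anyone doing this by hand with kernel iSCSI, the dance typically involves something along these lines (the WWID is just the example from this thread):

Code:
# rescan the iSCSI sessions so new or removed LUNs are noticed
iscsiadm -m session --rescan
# reload the multipath maps, or flush a stale map by its WWID
multipath -r
multipath -f 3624a937096622a35a07e4a3300012498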
 
Please also have a look here: https://github.com/dpetrov67/pve-purestorage-plugin

This is a fork of the original repository referenced above. The fixes-2 branch includes multiple fixes, but none of them have made it into the original repository yet.

If anyone has a Pure Storage FlashArray and is willing to test the plugin, I would appreciate any feedback.

So, I'm trying this out now. I'm having an issue getting the API token to work.

I have the storage showing in the GUI and confirmed multipath works
[screenshots: storage visible in the Proxmox GUI and multipath status]

And when I look at the pvedaemon logs with

journalctl -u pveproxy -u pvedaemon | tail -n 500

I see that it's reporting basically the same error:
[screenshot: error in the pvedaemon log]
 
Please double check that the API token is correct (and not expired).

What worked for me is this:
1) Open the Pure Storage array web interface and go to Settings > Users and Policies > Users.
2) Create a user with (at least) Storage Admin privileges (or you can use an existing user).
3) Generate the API token through (the user) > Create API Token.

Also, please make sure you use the latest version. Note: this is a link to my repository, which has various fixes and improvements compared to the original one. I hope that at some point all my fixes will be merged there.
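If you want to rule out Proxmox entirely, you can also test the token directly against the FlashArray REST API with curl; a login with a valid token should come back with an x-auth-token header. A rough sketch (the API version segment may differ on your Purity release, and 192.0.2.10 stands in for the array address):

Code:
curl -sk -D - -o /dev/null \
  -X POST "https://192.0.2.10/api/2.4/login" \
  -H "api-token: <your API token>"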
 
Please double check that the API token is correct (and not expired).

What worked for me is this:
1) Open the Pure Storage array web interface and go to Settings > Users and Policies > Users.
2) Create a user with (at least) Storage Admin privileges (or you can use an existing user).
3) Generate the API token through (the user) > Create API Token.

Also, please make sure you use the latest version. Note: this is a link to my repository, which has various fixes and improvements compared to the original one. I hope that at some point all my fixes will be merged there.

Perfect. That got me further for sure.

I can now see the API is creating a corresponding volume on the Pure array. However, when it actually goes to move the disk:

Code:
create full clone of drive scsi0 (purenfs:111/vm-111-disk-1.qcow2)
Info :: Volume "proxmox/vm-111-disk-2" created (serial=96622A35A07E4A3300012498).
Info :: Volume "proxmox/vm-111-disk-2" is added to host "der-pve3-pve".
Info :: Volume "proxmox/vm-111-disk-2" is removed from host "der-pve3-pve".
Info :: Volume "proxmox/vm-111-disk-2" deactivated.
Info :: Volume "proxmox/vm-111-disk-2" destroyed.
TASK ERROR: storage migration failed: Error :: Failed to run 'multipath -a 3624a937096622a35a07e4a3300012498'. Error :: command '/sbin/multipath -a 3624a937096622a35a07e4a3300012498' failed: exit code 1


Looks like multipath is working as it should.

[screenshot: multipath output]


And the volume in Pure
[screenshot: the volume on the Pure array]
 
It says it could not run
Code:
/sbin/multipath -a 3624a937096622a35a07e4a3300012498

Could you try to run it manually? It should add a line to /etc/multipath/wwids.

Is multipath installed in the /sbin folder? That is where the plugin expects it to be (the path is currently hardcoded).
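That is, something like this; the grep and multipath -ll afterwards are just to verify that the WWID was added and that the map exists:

Code:
/sbin/multipath -a 3624a937096622a35a07e4a3300012498
grep 3624a937096622a35a07e4a3300012498 /etc/multipath/wwids
multipath -ll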
 
It says it could not run
Code:
/sbin/multipath -a 3624a937096622a35a07e4a3300012498

Could you try to run it manually? It should add a line to /etc/multipath/wwids.

Is multipath installed in the /sbin folder? That is where the plugin expects it to be (the path is currently hardcoded).

[screenshot: running the command manually]

No error, but when I check the wwids file, there is no new entry.
[screenshot: /etc/multipath/wwids contents]
 
Ahh okay, I had to add the WWID to the blacklist exception.

[screenshots: multipath blacklist exception configuration]
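For anyone else hitting this: the exception goes into /etc/multipath.conf (or a file under /etc/multipath/conf.d/). Both WWIDs in this thread start with 3624a9370, so a single regex along these lines should cover the array's volumes, assuming that prefix holds for yours; reload multipathd afterwards (e.g. systemctl reload multipathd) so it takes effect.

Code:
# excerpt from /etc/multipath.conf
blacklist_exceptions {
        wwid "3624a9370.*"
}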

But I still get the following error when moving a disk:
Code:
create full clone of drive scsi0 (purenfs:111/vm-111-disk-1.qcow2)
Info :: Volume "proxmox/vm-111-disk-2" created (serial=96622A35A07E4A330001249C).
Info :: Volume "proxmox/vm-111-disk-2" is added to host "der-pve3-pve".
Info :: Volume "proxmox/vm-111-disk-2" is removed from host "der-pve3-pve".
Info :: Volume "proxmox/vm-111-disk-2" deactivated.
Info :: Volume "proxmox/vm-111-disk-2" destroyed.
TASK ERROR: storage migration failed: Error :: Failed to run 'multipath -a 3624a937096622a35a07e4a330001249c'. Error :: command '/sbin/multipath -a 3624a937096622a35a07e4a330001249c' failed: exit code 1
 
So, here's a question: how does the plugin determine the portal for the iSCSI target? I'm half wondering if it's trying to use the management interface rather than the dedicated 25GbE network set up for iSCSI.

[screenshot]

Here is how you'd manually add the iSCSI in Proxmox.


On the Pure array, these two interfaces on each controller are dedicated to iSCSI
[screenshot: the dedicated iSCSI interfaces on the Pure array]
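As a sanity check from a PVE node, you can confirm what those interfaces actually advertise and log in to them manually (the IP below is a placeholder for one of the dedicated 25GbE iSCSI ports):

Code:
# discover targets via one of the dedicated iSCSI portals
iscsiadm -m discovery -t sendtargets -p 192.0.2.21:3260
# log in to the discovered target on that portal
iscsiadm -m node -p 192.0.2.21:3260 --login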
 