PureStorage FlashArray + Proxmox VE + Multipath

Storage snapshots are not possible in any circumstance.

But is it possible to select the iSCSI/LVM targets from Pure in Veeam for snapshots with this plugin?
Snapshots exist at multiple layers: storage, file system, OS, application, and virtualization.

Both PBS and Veeam use QEMU-based snapshots at the virtualization layer in PVE; specifically, QEMU's NBD mechanism is used, which operates on the virtual disks rather than on the underlying storage.

Since this technology is storage-independent, it is generally less efficient than native storage-level snapshots.
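
For comparison, a native storage-level snapshot is a near-instant metadata operation on the array itself. Below is a minimal, hedged sketch using the Pure Storage "purestorage" 1.x Python REST client; the array address, API token, volume name, and suffix are placeholders, and this is not how PBS, Veeam, or the plugin work internally.

# Sketch: a native storage-level snapshot on a FlashArray.
# Assumes the "purestorage" 1.x REST client (pip install purestorage);
# the address, token, and names below are placeholders.
import purestorage

array = purestorage.FlashArray("pure01.example.com", api_token="<api-token>")

# Snapshotting a volume is a metadata operation on the array,
# independent of the volume's size and of what runs inside the VM.
snap = array.create_snapshot("vm-118-disk-0", suffix="before-upgrade")
print(snap["name"])  # e.g. "vm-118-disk-0.before-upgrade"

array.invalidate_cookie()  # close the REST session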


 
@amulet Should we still use your fork as of today?
Thanks
You could, but since essentially all my patches are in the parent repository already, the plugin's code is functionally (if not byte-by-byte) identical.

There are some minor improvements in the pipeline, and some not yet published things.

One of the obstacles that is hard to address without some inelegant wrappers is described here.
 
I have a question prior to trying this plugin out. I already have a host set up in Proxmox, and it is connected to my Pure Storage array over iSCSI multipathing... what does this plugin do that I am not already doing?

Doesn't Proxmox support HA/live migrations from host to host out of the box, as long as multipath is set up properly on the other host?

Thanks,
 
Without it, you basically only have two options:
1: Assign every VM disk as a volume manually.
2: Create a single volume with LVM on top.

Each option comes with the following drawbacks:

Option 1:
- You don't have snapshots from within Proxmox.
- You have to manually create volumes on Pure for every disk.
- All volumes (disks) must be configured on all Proxmox nodes so that live migration works.

Option 2:
- You don't have snapshots from within Proxmox.
  (This changed with the PVE 9 beta: you can now snapshot with thick LVM at the PVE layer, though it is still a technology preview.)
- You can't snapshot at the storage layer, as all VMs reside on a single volume.

The plugin basically automates the first option (see the sketch after this list):
- Volume creation on Pure is handled automatically.
- Multipath for every volume is handled by the plugin.
- Storage-side, data-consistent snapshots are supported.
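
For illustration, the array-side part of that automation could look roughly like the following. This is a hedged sketch using the Pure Storage "purestorage" 1.x Python REST client; the plugin itself is a Perl storage plugin (PVE::Storage::Custom::PureStoragePlugin) talking to the FlashArray REST API, so this only approximates what it does, and the host, volume names, and size are placeholders.

# Sketch of the per-disk, array-side work the plugin automates.
import purestorage

array = purestorage.FlashArray("pure01.example.com", api_token="<api-token>")

# 1. One volume per VM disk on the array.
array.create_volume("vm-101-disk-0", "32G")

# 2. Connect the volume to the Proxmox node so it becomes visible over iSCSI.
conn = array.connect_host("pve-node1", "vm-101-disk-0")
print(conn["lun"])  # LUN under which the node will see the new volume

# 3. The host-side steps (iSCSI rescan, multipath map creation, device
#    activation) are handled by the plugin on each node and are not part
#    of the array API, so they are omitted here.

array.invalidate_cookie()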
 
That makes more sense, thanks for the elaboration! I will be installing this today on one of my hosts and will report back.

Thanks,
 
After testing, so far so good, except for live-migrating EFI disks... here's the error I keep getting:

create full clone of drive efidisk0 (pure-vg:vm-118-disk-0)
Info :: Size is too small (528 kb), adjusting to 1024 kb
Info :: Volume "vm-118-disk-2" is created (serial=9A57B80970CE4B9900011ADE).
Info :: Volume "vm-118-disk-2" is connected to host "host1".
drive mirror is starting for drive-efidisk0
drive-efidisk0: Cancelling block job
drive-efidisk0: Done.
Info :: Volume "vm-118-disk-2" is disconnected from host "host1".
Info :: Volume "vm-118-disk-2" is deactivated.
Info :: Volume "vm-118-disk-2" is destroyed.
TASK ERROR: storage migration failed: block job (mirror) error: drive-efidisk0: Source and target image have different sizes (io-status: ok)

Once the VM is shut down, the migration works fine.

I am migrating from a manually set up Pure array volume over iSCSI to the new plugin's managed Pure volume.

Any insight into this issue would be great.

Thanks,
 
I would also like to report this warning:

2025-08-07 08:33:52 [host2] Plugin "PVE::Storage::Custom::PureStoragePlugin" is implementing an older storage API, an upgrade is recommended
2025-08-07 08:33:52 [host2] file /etc/pve/storage.cfg line 18 (section 'pure-array') - unable to parse value of 'timeout': unknown property type
2025-08-07 08:34:09 [host2] Plugin "PVE::Storage::Custom::PureStoragePlugin" is implementing an older storage API, an upgrade is recommended

2025-08-07 08:33:51 starting VM 105 on remote node 'host2'
2025-08-07 08:33:52 [host2] Plugin "PVE::Storage::Custom::PureStoragePlugin" is implementing an older storage API, an upgrade is recommended
2025-08-07 08:33:52 [host2] file /etc/pve/storage.cfg line 18 (section 'pure-array') - unable to parse value of 'timeout': unknown property type
2025-08-07 08:34:09 [host2] Plugin "PVE::Storage::Custom::PureStoragePlugin" is implementing an older storage API, an upgrade is recommended
2025-08-07 08:34:09 start remote tunnel
Plugin "PVE::Storage::Custom::PureStoragePlugin" is implementing an older storage API, an upgrade is recommended
2025-08-07 08:34:10 ssh tunnel ver 1
2025-08-07 08:34:10 starting online/live migration on unix:/run/qemu-server/105.migrate
2025-08-07 08:34:10 set migration capabilities
2025-08-07 08:34:10 migration downtime limit: 100 ms
2025-08-07 08:34:10 migration cachesize: 512.0 MiB
2025-08-07 08:34:10 set migration parameters
2025-08-07 08:34:10 start migrate command to unix:/run/qemu-server/105.migrate
2025-08-07 08:34:11 migration active, transferred 90.9 MiB of 4.0 GiB VM-state, 115.1 MiB/s
2025-08-07 08:34:12 migration active, transferred 197.5 MiB of 4.0 GiB VM-state, 149.6 MiB/s
2025-08-07 08:34:13 migration active, transferred 304.6 MiB of 4.0 GiB VM-state, 109.6 MiB/s
2025-08-07 08:34:14 migration active, transferred 412.0 MiB of 4.0 GiB VM-state, 113.3 MiB/s
2025-08-07 08:34:15 migration active, transferred 519.5 MiB of 4.0 GiB VM-state, 109.2 MiB/s
2025-08-07 08:34:16 migration active, transferred 625.7 MiB of 4.0 GiB VM-state, 105.9 MiB/s
2025-08-07 08:34:17 migration active, transferred 731.2 MiB of 4.0 GiB VM-state, 106.2 MiB/s
2025-08-07 08:34:18 migration active, transferred 837.5 MiB of 4.0 GiB VM-state, 108.8 MiB/s
2025-08-07 08:34:19 average migration speed: 457.0 MiB/s - downtime 28 ms
2025-08-07 08:34:19 migration completed, transferred 933.7 MiB VM-state
2025-08-07 08:34:19 migration status: completed