PureStorage FlashArray + Proxmox VE + Multipath

Hmm, I guess not, then again, I do not have Pure as an option for my snapshots.
I guess this is WIP from Pure // Veeam ... at the moment the Pure plugin for Veeam does not support Proxmox as a server either ... and I highly doubt they ever will. As my partner account manager told me, there is always a question of money behind a project. When Proxmox is enterprise-ready in terms of 24/7 support, maybe they will change their minds.

cheers
 
Also testing this with Purity 6.8.2 along the way.

@timansky could you please push to main and update the .deb?

Thanks a lot guys for your (from what I read) excellent work!

There were holidays and vacation; unfortunately the deb repo is not ready yet, it is planned. I have not decided yet where to publish the deb. GitHub can only handle a single file, not a repo with history, but the release already has a PR.

During the creation of the PVE hosts in Veeam, before the worker gets created, the Veeam setup tries to trigger a snapshot to check whether the storage subsystem is able to utilize HW snapshots. If I got this correctly: when this fails, it uses the snapshot mechanism from Proxmox to create snapshots and sends them to the backup storage afterwards.
... did you see Pure perform snapshot tasks during backups?
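The probe-and-fallback behaviour described above could be sketched like this (Python, all names hypothetical — `storage.hw_snapshot` and `vm.qemu_snapshot` are stand-ins, not the plugin's or Veeam's real API):

```python
class SnapshotUnsupported(Exception):
    """Raised when the storage backend cannot take a hardware snapshot."""


def create_backup_snapshot(vm, storage):
    # Hypothetical probe-and-fallback: try a hardware (array) snapshot first;
    # if the storage layer rejects it, fall back to a QEMU/Proxmox snapshot.
    try:
        return storage.hw_snapshot(vm)
    except SnapshotUnsupported:
        return vm.qemu_snapshot()
```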

cheers

Christian
Unfortunately I do not have Veeam to test; according to the log, the PureStorage plugin does not support underscores in volume names.
The standard snapshot relies on the QEMU API to create one with the defined prefix vm-{vm_id}-disk-{disk_number}.snap-{snap_name}

I'm using Proxmox Backup Server; the standard backup function is also working.

PS
If I could get access to Veeam, I'll try to figure out how to make it compatible.

PPS
A Proxmox backup differs from just a disk snapshot. It creates the VM config + a disk snapshot + a RAM-to-disk save (if selected).
The standard function relies on QEMU functions; the storage layer is not important because it just makes a disk copy on the fly.
 

If you want to try out Veeam, you can do two things.

Run Community Edition: https://www.veeam.com/products/free/backup-recovery.html?ad=downloads

Or get an NFR license: https://www.veeam.com/blog/how-to-get-free-nfr-key.html

Both are free
 
Okay, news:
The storage plugin works brilliantly! Thanks a lot for your time and effort!

I discovered that unfortunately no snapshot with Pure is possible out of Veeam :-/

Error Message:

Code:
TASK ERROR: Error :: PureStorage API :: Snapshot volume failed. => Trace:
==> Code: 400
==> Message: {
  "errors" => [
    {
      "context" => "snap-veeam_d04ef44c108e08e86b2bb0f71f",
      "message" => "Snapshot suffix must be between 1 and 63 characters (alphanumeric and '-') in length and begin and end with a letter or number. The suffix must include at least one letter or '-'."
    }
  ]
}
Any ideas on this?
See issue #8. I added a fix for this issue, but depending on how Veeam uses the snapshot it may or may not be enough. The fix is in this version, or you can try all the latest fixes here (not yet merged into the kolesa-team repo).
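For reference, Pure's suffix rule from the error above can be checked, and names like Veeam's underscore-bearing ones worked around, with a small sketch (Python; `sanitize_suffix` is a hypothetical helper, not necessarily how the plugin's fix works):

```python
import re


def pure_suffix_ok(suffix: str) -> bool:
    # Pure's rule from the error message: 1-63 chars, alphanumeric and '-',
    # begins and ends with a letter or number, includes at least one letter or '-'.
    return (
        1 <= len(suffix) <= 63
        and re.fullmatch(r"[A-Za-z0-9-]+", suffix) is not None
        and suffix[0].isalnum()
        and suffix[-1].isalnum()
        and any(c.isalpha() or c == "-" for c in suffix)
    )


def sanitize_suffix(suffix: str) -> str:
    # Replace disallowed characters (e.g. Veeam's underscores) with '-',
    # trim leading/trailing '-', and cap the length at 63.
    cleaned = re.sub(r"[^A-Za-z0-9-]", "-", suffix).strip("-")
    return cleaned[:63]
```

For example, `veeam_d04ef44c108e08e86b2bb0f71f` fails the check because of the underscore, while the sanitized `veeam-d04ef44c108e08e86b2bb0f71f` passes.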
 
Thank you very much for this, it is really great.

I had a question on how people are managing the volumes created by the plugin, as they all get created in the vm-<id>-disk-0 format (assuming you don't prepend the vnprefix from the plugin). To me it just looks really messy in the Pure GUI.
 

The volume group makes it a lot easier to manage. You can select the volume group from the Pure GUI and see all the disks in there. Now, it would be nice to filter out the volume group from the volume listings in Pure, but I'm not sure if it can do that.
 
Each volume is attached to a mandatory volume group.
 
I was able to create one without a volume group, as the vgname is optional. My /etc/pve/storage.cfg looks like the below:

Code:
purestorage: pure
        address https://xxxxx
        token xxxxx
        content images



So are you creating a volume group per VM (as a VM has multiple disks, plus a state file), or just one volume group for all of Proxmox?
 

Yes, it is now possible. This was made to support various configurations. There are three options: vnprefix, vgname and podname. It depends on which configuration you need.
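For illustration, a hypothetical storage.cfg combining those options — the address/token values are placeholders as in the earlier post, and the vgname/vnprefix values are made up; only the option names (vnprefix, vgname, podname) come from this thread:

Code:
purestorage: pure
        address https://xxxxx
        token xxxxx
        content images
        vgname proxmox-vols
        vnprefix px-

A podname option could be set the same way when using pods.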
 
@timansky Maybe you could post some changelogs in this thread to keep it afloat, so others can see the progress being made on the plugin?

I know FC support has been added since then, that might be an important feature for some potential users.
 

HEEELOOOOOU!​


This plugin is perfect (now, hehehehe). Thanks. BUT...

Problem Description


The Proxmox plugin manages VM storage by assigning a Pure Storage volume (disk) to the host where the VM is running. This mechanism works fine under normal conditions.


  • When a VM is powered off, migration between hosts works correctly.
  • When a VM is running, live migration fails most of the time.

Observed Behavior During Migration


  1. The VM is running on HOST1, and its disk (volume) is assigned to it in Pure Storage.
  2. When a live migration is initiated:
    • The plugin maps the volume to HOST2 (destination) while keeping it mapped to HOST1.
    • The migration process transfers CPU and memory to HOST2.
    • After migration completes, the plugin removes the volume from HOST1.

Error Encountered


The migration process fails due to a timeout when mapping the volume:

2025-03-07 14:10:10 starting migration of VM 100 to node 'proxmox01' (10.200.1.201)
2025-03-07 14:10:10 starting VM 100 on remote node 'proxmox01'
2025-03-07 14:10:18 [proxmox01] Error :: Timeout while waiting for volume "vm-100-disk-1" to map
2025-03-07 14:10:18 ERROR: online migrate failure - remote command failed with exit code 255
2025-03-07 14:10:18 aborting phase 2 - cleanup resources
2025-03-07 14:10:18 migrate_cancel
2025-03-07 14:10:22 ERROR: migration finished with problems (duration 00:00:13)
TASK ERROR: migration problems

The issue suggests that the migration process starts before ensuring that HOST2 has successfully mapped the volume and has access to it.




Workaround


A possible temporary fix is to introduce a short delay (e.g., 5 seconds) before proceeding with the migration. This would allow enough time for the storage volume to be properly mapped and detected by HOST2.
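A fixed delay is fragile, though; a poll-with-timeout loop (like the plugin's own wait_for helper, whose timeout appears in the error log) is usually safer. A minimal Python sketch of the idea:

```python
import time


def wait_for(check, what, timeout=30.0, interval=0.5):
    # Poll `check` until it returns a truthy value or `timeout` seconds pass.
    # Sketch of the plugin's wait_for helper, not its actual implementation.
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            raise TimeoutError(f"Timeout while waiting for {what}")
        time.sleep(interval)
```

With this shape, the destination host keeps re-checking the mapping and proceeds the moment it appears, instead of sleeping a fixed 5 seconds every time.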




Task: Fix the Issue in the Plugin


To resolve this issue properly, the plugin should implement a validation step before continuing the migration:


  1. Verify that the destination host (HOST2) can access the volume before starting the migration.
  2. Add a check to confirm the volume is fully visible and accessible from HOST2.
  3. If necessary, introduce a short pause (e.g., 5 seconds) to allow time for the mapping process to complete before continuing.
  4. Modify the plugin logic to only proceed with migration when HOST2 has successfully mapped the volume.

I apologize in advance because I believe this might indeed be an issue with my infrastructure. I'm running Proxmox on top of an ESXi (things we love to do in a lab, hahaha).


I also apologize because I'm not a developer. BUT here are the code modifications:



Perl:
  # Wait for the device to appear with increased timeout for live migration
  wait_for( $path_exists, "volume \"$volname\" to map", 30, 0.5 );

  # Additional validation to ensure device is fully accessible
  my $device_accessible = sub {
    return -b $path && -r $path && -w $path;
  };

  # Wait for device to be fully accessible
  wait_for( $device_accessible, "volume \"$volname\" to be fully accessible", 30, 0.5 );


and


Perl:
  if ( !multipath_check( $wwid ) ) {
    print "Debug :: Adding multipath map for device \"$wwid\"\n" if $DEBUG;
    exec_command( [ 'multipathd', 'add', 'map', $wwid ] );

    # Wait for multipath to be fully established
    my $multipath_ready = sub {
      return multipath_check( $wwid );
    };
    wait_for( $multipath_ready, "multipath map for volume \"$volname\" to be ready", 30, 0.5 );
  }


Once again, thank you very much for sharing the plugin, and here is my contribution.


Best regards,
Rafael Carvalho.
 

Hi Rafael,
thank you for your input.

Never had this happen in my setup.

Migration works flawlessly for VMs on my site ...

Do you have some more info about the vdisk types you are using ? Or could this be a network issue ?

I am running 5 nodes with 2x 25Gbit/s NIC each in MP against 4x 25Gbit/s in x50 R3

Cheers,

Chris
 

Chris,

That's exactly why I mentioned that my environment is "different." You have no idea about the creativity of Brazilians. HAHAHAH


Basically, what I have is:


<ESXI<PROXMOX<VM>>> = it's nested virtualization.
Proxmox ends up using VMware's network via vSwitch.
The ESXi host is connected with 4x 10Gb iSCSI to a Nexus 5K.
The Pure Storage system is an X10R3 with 4x 10Gb.


I usually use this lab setup for simulations, and it works perfectly. However, of course, virtualization within virtualization can introduce a few milliseconds of delay, which might lead to this behavior.


Anyway, it’s functional.


Now, I'm thinking about improving the naming convention of objects within Pure because it's frustrating to see names like vm-100-disk-1 without knowing which VM they belong to. I believe this weekend I'll work on a script to use Proxmox TAGs to manage and control these issues. This approach is almost like VVOLs, as it directly maps block volumes to VMs.
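As a starting point for such a script, here is a small Python sketch of building a descriptive, Pure-safe volume name from the VM id and its human-readable name (the function is hypothetical; the actual rename would go through the FlashArray REST API, which is outside this sketch):

```python
import re


def pure_volume_label(vmid: int, vm_name: str, disk: int) -> str:
    # Fold the VM's human-readable name into the volume name, keeping only
    # characters Pure accepts in volume names (alphanumeric and '-').
    safe = re.sub(r"[^A-Za-z0-9-]+", "-", vm_name).strip("-").lower()
    return f"vm-{vmid}-{safe}-disk-{disk}"
```

For example, VM 100 named "web_server" with disk 1 would map to `vm-100-web-server-disk-1`, which is much easier to pick out in the Pure GUI than `vm-100-disk-1`.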


I truly appreciate your concern, but I believe that since the plugin isn't widely adopted yet, someone might eventually face this issue. My contribution is more about helping enthusiasts quickly resolve these types of problems if they arise.


Best regards,
Rafael
 
Hello everyone,


I'm trying to set up a specific backup solution but can't figure out the proper way to achieve it using native tools. That's why I'm reaching out here: I hope to find someone who has experience with this setup and can provide some guidance.

Current Setup:

  • Proxmox Cluster: VE 8.3.5
  • Storage: Pure Storage Cluster (connected via iSCSI to the Proxmox hosts)
  • Backup Software: Veeam 12.3.1
  • Current Backup Method: Using the Proxmox Veeam plugin for standard backups
What I Want to Achieve:

I’m looking for a way to leverage native storage snapshots to offload backup processing from the hypervisor. From what I understand, this should be possible with the Veeam plugin—can anyone confirm this?

Questions & Concerns:

  1. Storage Configuration – I’m attaching my storage.cfg file. Could someone review it and point out any issues?
  2. LVM on iSCSI? – Do I need to use LVM on top of the iSCSI target?
  3. VMware – Is there a way to integrate storage snapshots with this setup correctly, and can someone explain the steps to achieve this?
Attached Files:
  • pve/storage.cfg
  • Output of multipath -ll
  • multipath.conf (without the Proxmox Pure Storage plugin installed)

Any insights or recommendations would be greatly appreciated. Thanks in advance
 


Veeam's implementation for PVE does not use storage snapshot offloading.
To add to that: I was initially very excited about the Veeam integration until I actually deployed it. It is very much alpha level imo, and I ended up rolling out a PBS instance in my environment despite already having a Veeam store in place (we're a Veeam partner). This may change with a 2.0 release, if it ever happens.