Experiences with LVM (thick) + iSCSI Multipath (Pure Storage) and Volume-Chain Snapshots on Proxmox VE 9

The snapshot option for shared LVM is still experimental, so it is quite possible that unknown side effects show up.
For now I'm sticking with plain LVM without the snapshot feature in production.
The exact problem you describe, forgetting that a snapshot is still sitting on a volume, is not cool. That's why I prefer snapshot backups, which can also sit around for months if needed.
Especially for production use I already do a lot, but I like to stick with stable features. Nobody can tell me whether the snapshot feature on shared LVM also costs performance, and how much. Probably a bit.
 
You're right.
Floh already sent me a post according to which the problem is indeed related to the TPM.

I installed the Pure plugin today and am currently running both approaches in parallel. :cool:

 
It's a whole lot better to manage than doing manual iSCSI LUNs haha.

I'm hoping at the beginning of next year I can start committing plugin improvements to match the TrueNAS plugin I have helped develop:
https://github.com/WarlockSyno/TrueNAS-Proxmox-VE-Storage-Plugin

The Pure plugin is missing extensive documentation and tooling, plus some nice-to-haves like automatic portal login and pre-flight checks.

However, we have been running it in production for about 6 to 7 months now and have not had any issues.
 
I’m currently still running into one issue: due to Security Mode on the Pure array, snapshots are automatically created per LUN.

What’s not working properly yet is that the LUNs created by the Proxmox Pure plugin are not automatically assigned to the Proxmox cluster host group, and therefore the hosts’ IQNs are not whitelisted by default.

Because of that, I still have to manually adjust the host group assignments after each LUN creation.
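
For each new LUN that manual step is essentially just connecting the volume to the cluster's host group on the array. On the FlashArray CLI that would look roughly like this (the host group and volume names are placeholders, and the exact flags can differ between Purity versions, so double-check against the built-in help):

Code:
# connect a freshly created plugin volume to the Proxmox host group (names are examples)
purevol connect --hgroup pve-cluster vm-101-disk-0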

By the way, the TrueNAS plugin looks really great — very cleanly implemented and packed with useful features.
 
Thanks :)

As for the Pure plugin, this is how we have it configured:
- Create a host group in Pure
- Create hosts for each PVE node and add them to the host group (each entry with that host's IQN)
- Create a volume group, then assign the volume group to the host group in its settings
- Create a protection group for that volume group or use the default pg policy

So in the end it'll create sub-volumes under the volume group.

Ours is called "proxmox" and so all of the volumes are "proxmox/vm-100-disk-1"

This way it's all neatly organized instead of under the root LUN. You can then make other volume groups for other clusters on the same Pure if you so choose.

This will keep your Proxmox cluster segmented from the other volumes in Pure.

So in short:
- Host Group has hosts assigned to it
- Host Group is given access to a Volume Group
- Volume Group keeps individual volumes in the same logical area in Pure
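
For reference, roughly how that setup could be scripted on the FlashArray CLI instead of the GUI. The pure* command names are the standard ones, but the flags are from memory and the host names/IQNs are examples, so treat the details as assumptions and check the built-in help:

Code:
# one host entry per PVE node, registered with its iSCSI IQN (names/IQNs are examples)
purehost create --iqnlist iqn.1993-08.org.debian:01:pve01 pve01
purehost create --iqnlist iqn.1993-08.org.debian:01:pve02 pve02

# group the nodes into a host group for the cluster
purehgroup create --hostlist pve01,pve02 pve-cluster

# volume group that will hold the per-VM volumes (e.g. proxmox/vm-100-disk-1)
purevgroup create proxmox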
 
Thanks a lot! I’ll set it up on Monday then. The thing with the volume group is great! I don’t think it will cause any problems if I change it now, right? I also haven’t migrated any VMs yet.

Ah, one more thing came to mind: how do I get multipath to create the paths? Everything worked fine with my test VM earlier, but the multipath map didn't get created on its own. With my LVM LUNs, I always had to run multipath -a <wwid> and then multipath -r again.
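
(For reference, that manual step was basically the following, with the placeholder replaced by the new LUN's WWID:)

Bash:
# whitelist the new LUN's WWID (placeholder value) and reload the maps
multipath -a 3624a9370xxxxxxxxxxxxxxxxxxxxxxxx
multipath -r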

But you only have five VMs? Are you sure you can run 70 VMs with that?
 
Do you have your Pure mirrored, or just a single one?
If mirrored, then iSCSI is fine, but with a single Pure I would have gone with NVMeoF instead of iSCSI.
It's much easier to configure than iSCSI and you don't need to configure multipath, because that's already built into the protocol. Not to mention the significantly better performance.
 
I don't think it should cause a problem, but not sure to be honest.

Multipath in Proxmox is kinda manual, hence why I want to update the Pure plugin to have autologin haha. Here are my notes on setting up multipath:
This also assumes you are using the same subnet for multiple interfaces; officially it is not best practice in Linux to use the same subnet on multiple interfaces, unlike on ESXi.

____
Edit the iSCSI config to turn on iSCSI login at startup, change the iSCSI replacement timeout to a lower value, open 4 sessions per connection, and use a queue depth of 128.

nano /etc/iscsi/iscsid.conf

Code:
node.startup = automatic
node.session.timeo.replacement_timeout = 15
node.session.nr_sessions = 4
node.session.queue_depth = 128
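
If iscsid is already running, I'd restart it afterwards so new sessions pick up these defaults (assuming the standard Debian/PVE service names):

Code:
systemctl restart iscsid open-iscsi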

From here you can log in to your Pure targets

iscsiadm -m discovery -t sendtargets -p IP-ADDRESS-OF-ISCSI-INTERFACE-ON-PURE

Install and edit multipath

apt install multipath-tools

nano /etc/multipath.conf

This is what we use for the Pure multipath config
Code:
defaults {
  polling_interval 2
  find_multipaths no
}

devices {
  device {
    vendor               "PURE"
    product              "FlashArray"
    path_selector        "queue-length 0"
    #path_selector       "service-time 0"
    hardware_handler     "1 alua"
    path_grouping_policy group_by_prio
    prio                 alua
    path_checker         tur
    user_friendly_names  no
    features             0
    alias_prefix         "pure"
    recheck_wwid         yes
    fast_io_fail_tmo     10
    dev_loss_tmo         60
    failback             immediate
    no_path_retry        fail
  }
}

blacklist {
  device {
    vendor  ".*"
    product ".*"
  }
}

blacklist_exceptions {
  wwid "3624a9370.*"
  device {
    vendor "PURE"
  }
}
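
To make multipathd actually pick up the new config, a reload should be enough (standard multipath-tools service):

Code:
systemctl reload multipathd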

On your Proxmox host, you'll need to bind iSCSI to specific interfaces. There's probably a better way to do this, but this is how we do it.
Bash:
# Tell iSCSI to use these two interfaces for iSCSI
iscsiadm -m iface -I ens2f0np0 --op=new
iscsiadm -m iface -I ens2f1np1 --op=new

# Bind the interface to a specific MAC address
iscsiadm -m iface -I ens2f0np0 --op=update -n iface.hwaddress -v bc:97:e1:78:47:60
iscsiadm -m iface -I ens2f1np1 --op=update -n iface.hwaddress -v bc:97:e1:78:47:61

# Tell the interfaces to scan the storage and create the paths
iscsiadm -m discovery -t st -p 10.10.254.50:3260 --interface=ens2f0np0 --discover --login
iscsiadm -m discovery -t st -p 10.10.254.50:3260 --interface=ens2f1np1 --discover --login
iscsiadm -m node --op update -n node.startup -v automatic

If you are using the same subnets between the iSCSI interfaces, you will have to edit your sysctl config to tell Linux to only answer ARP requests when the target IP address is configured on the interface the request arrived on.

nano /etc/sysctl.conf

net.ipv4.conf.all.arp_ignore = 1

Apply the change
sysctl -p /etc/sysctl.conf

If you do not set that and are using the same subnets, you will get ARP flapping.

Make sure your interfaces log in to the Pure targets:

Bash:
iscsiadm -m discovery -t sendtargets -p 10.10.254.50 -I ens2f0np0
iscsiadm -m discovery -t sendtargets -p 10.10.254.50 -I ens2f1np1
iscsiadm -m node --op update -n node.startup -v automatic
iscsiadm -m discovery -t sendtargets -p 10.30.254.50 --login
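
To sanity-check the result, listing the sessions and maps should show one session per bound interface and the Pure maps with all paths active (plain iscsiadm/multipath commands):

Bash:
# one session per bound interface
iscsiadm -m session -P 1

# Pure multipath maps and their path groups
multipath -ll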
 
Do you have your Pure mirrored, or just a single one?
If mirrored, then iSCSI is fine, but with a single Pure I would have gone with NVMeoF instead of iSCSI.
It's much easier to configure than iSCSI and you don't need to configure multipath, because that's already built into the protocol. Not to mention the significantly better performance.

This is something I need to look into as well. I had requested that the maintainers of the Pure plugin add NVMe/TCP, but I'm not sure if it went past the skeleton framework. It has been implemented in the TrueNAS Storage plugin, so it actually wouldn't be too hard to port those changes to the Pure plugin as well.
 
Well, unfortunately I've already set it up on iSCSI and there are already 30 VMs running on it (one Pure with two controllers). I'm also not finished yet and didn't want to switch to NVMe in the middle of it. Basically everything runs stably; I just have to whitelist each new LUN once and rescan in multipath.
 
I don’t need most of that because I’m not using two NICs in the same subnet. I’m running an active-passive setup for redundancy. Other than that, my configuration looks very similar. I’ll double-check everything on Monday.
 
Well, unfortunately I've already set it up on iSCSI and there are already 30 VMs running on it (one Pure with two controllers). I'm also not finished yet and didn't want to switch to NVMe in the middle of it. Basically everything runs stably; I just have to whitelist each new LUN once and rescan in multipath.
That's no problem; the switch to NVMe over TCP goes over the same adapters, and then you just move the VMs from the old LUN to the new namespace. I've already converted several PowerStores this way, so the same hardware now performs significantly better.
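
Per disk that is just the normal storage migration in PVE, roughly like this (VM ID, disk and target storage name are only examples):

Code:
# move a disk live to the new NVMe/TCP-backed storage and delete the old copy
qm disk move 100 scsi0 pure-nvme --delete 1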
 
But you only have five VMs? Are you sure you can run 70 VMs with that?

Oh sorry, I didn't respond to this question. Nah, this is just our test cluster and volume group. We have about 75 VMs running on another one, and we're about to migrate two more sites, which will make a total of 130-ish VMs running on 4 different Pure arrays.

In comparison to the recommended VMware-to-Pure setup we are getting about 2x the read/write throughput but slightly lower IOPS. But it's possible that's because we are using AURA for our iSCSI profile. We are maxing out our 2x 25GbE connections at the moment in benchmarks, so we could actually move to 100GbE and see how well it scales past the 50GbE mark. Our VMware setup (on the exact same hardware) was not reaching anywhere near these speeds. I believe it's because of Linux's more efficient multipathing compared to ESXi.