Poor performance with PURE FlashArray and Hitachi Open-V

Gilberto Ferreira

Renowned Member
Hi.
We recently moved to a DC provider that gives us a PURE FlashArray and a Hitachi Open-V storage solution.
Both are FC!
After configuring them with LVM and with ZFS as well, we found poor performance that does not match our expectations.
The first result is with LVM on the Hitachi Open-V:
[attached screenshot: lvm-hitachi-open-v.jpeg]

This one is with the PURE FlashArray:
[attached screenshot: lvm-pure-flash.jpeg]

As you can see, there is a difference in IOPS and throughput.
I really don't know what is going on.
This is the multipath configuration. I don't know whether there is some optimization to do or not!
/etc/multipath.conf
defaults {
    user_friendly_names yes
    polling_interval 2
    path_selector "service-time 0"
    path_grouping_policy multibus
    path_checker readsector0
    getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
    rr_min_io 100
    failback immediate
    no_path_retry queue
}

blacklist {
    wwid .*
}

blacklist_exceptions {
    wwid "360060e8012bdd2005040bdd200000001"
}

multipaths {
    multipath {
        wwid "360060e8012bdd2005040bdd200000001"
        alias LUN0
    }
}

Thanks for any help!
 
Hi Gilberto,
Without further information on what benchmarks you are running and from where (inside the guest or on the host), it's difficult to help. That said, layering LVM and ZFS on top of those systems will surely hurt performance.

Here's my advice: start by qualifying the performance of the backend storage system:
  • using fio, measure QD1 4K random-write IOPS on the host. You should expect latencies around 50 microseconds (i.e., roughly 20K IOPS). Use a 20-minute testing interval.
  • using fio, measure QD128 4K random-write IOPS on the host. You should expect something north of 250K IOPS. Use a 20-minute testing interval. (Example commands are sketched below.)
Report back on that, and we'll see what direction to go. In the meantime, you may find this tuning guide helpful. It talks about iothreads, virtual storage controllers, and the different aio modes: https://kb.blockbridge.com/technote/proxmox-aio-vs-iouring/
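
For reference, here is a rough sketch of what those two fio runs could look like. The target path is just a placeholder; point it at a scratch LUN or an unused test LV, because a raw random-write test destroys whatever is on the device:

# QD1 4K random write, direct I/O, ~20 minutes
fio --name=qd1-randwrite --filename=/dev/mapper/SCRATCH_LUN \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --iodepth=1 --numjobs=1 --time_based --runtime=1200 --group_reporting

# QD128 4K random write, direct I/O, ~20 minutes
fio --name=qd128-randwrite --filename=/dev/mapper/SCRATCH_LUN \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --iodepth=128 --numjobs=1 --time_based --runtime=1200 --group_reporting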


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
OK! Let's break it down!

Here's the output of multipath -ll:
multipath -ll
LUN0 (360060e8012bdd2005040bdd200000001) dm-2 HITACHI,OPEN-V
size=3.0T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 15:0:2:0 sdd 8:48  active ready running
  |- 15:0:3:0 sde 8:64  active ready running
  |- 16:0:2:0 sdh 8:112 active ready running
  `- 16:0:3:0 sdi 8:128 active ready running
LUN100 (3624a93707e179d8e8be04b0e00012011) dm-10 PURE,FlashArray
size=3.0T features='1 alua' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 15:0:0:1 sdb 8:16  active ready running
  |- 15:0:1:1 sdc 8:32  active ready running
  |- 16:0:0:1 sdf 8:80  active ready running
  `- 16:0:1:1 sdg 8:96  active ready running

I have this LVM-thin setup created:

pve01:~# pvs
  PV                  VG     Fmt  Attr PSize  PFree
  /dev/mapper/LUN0    lun1   lvm2 a--  <3.00t     0  ===========> using LUN0   (360060e8012bdd2005040bdd200000001) dm-2  HITACHI,OPEN-V
  /dev/mapper/LUN100  vg-new lvm2 a--  <3.00t     0  ===========> using LUN100 (3624a93707e179d8e8be04b0e00012011) dm-10 PURE,FlashArray
pve01:~# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  lun1     1   4   0 wz--n- <3.00t     0
  vg-new   1   2   0 wz--n- <3.00t     0
pve01:~# lvs
  LV            VG     Attr       LSize   Pool  Origin Data%  Meta% Move Log Cpy%Sync Convert
  vm-100-disk-0 lun1   Vwi-a-tz--  50.00g vm1            3.54
  vm-101-disk-0 lun1   Vwi-a-tz-- 100.00g vm1           18.29
  vm-101-disk-1 lun1   Vwi-aotz-- 100.00g vm1          100.00
  vm1           lun1   twi-aotz--   2.99t                3.92   2.18
  lvnew         vg-new twi-aotz--   2.99t                0.51   1.07
  vm-100-disk-0 vg-new Vwi-aotz-- 100.00g lvnew         15.69

Inside VM 100, which uses lvnew on LUN100 (PURE,FlashArray), I made three consecutive tests with dd:
root@ubuntu:~# dd if=/dev/zero of=teste bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.48664 s, 722 MB/s
root@ubuntu:~# dd if=/dev/zero of=teste bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.288 s, 834 MB/s
root@ubuntu:~# dd if=/dev/zero of=teste bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.35184 s, 201 MB/s

So it started at 722 MB/s and ended up at 201 MB/s!

Inside VM 101, the same tests:
dd if=/dev/zero of=teste bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.34644 s, 797 MB/s
dd if=/dev/zero of=teste bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.28664 s, 835 MB/s
dd if=/dev/zero of=teste bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.62806 s, 660 MB/s

So, as you can see... I expected the PURE FlashArray to have better performance!
 
Gilberto,
Your use of dd as a benchmark is misleading. At a minimum, you should use O_DIRECT ("iflag=direct" and "oflag=direct") to bypass the buffer cache on the output file. There is no doubt that the performance is weak. To debug it, start with the basics mentioned above.
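
For example, the same 1 GiB write you ran, but with the output file opened O_DIRECT, would look something like this (just a sketch, reusing your "teste" file name):

dd if=/dev/zero of=teste bs=1G count=1 oflag=direct

Keep in mind that a stream of zeros is trivially compressed/deduplicated by an all-flash array, so even this number can be optimistic; the fio tests above are more representative.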


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hmm... interesting.
I did it again:

root@ubuntu:~# dd if=/dev/zero of=teste bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.17243 s, 916 MB/s
root@ubuntu:~#
root@ubuntu:~# dd if=/dev/zero of=teste bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.809464 s, 1.3 GB/s
root@ubuntu:~# dd if=/dev/zero of=teste bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.28839 s, 833 MB/s
 
