better option for storage

I just want to point out that if you have multiple pipes (which I take as a given; what's the point of multipathing without them?), any one LUN can only use one path at a given time, leaving the other(s) idle. For an active/active config you want a minimum of one LUN per host channel on your storage.
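If you want to see whether that's actually happening, one quick way (assuming sysstat's iostat is installed) is to watch per-path throughput while a workload runs; idle paths show up as near-zero:

Code:
# extended device stats in MB/s, refreshed every 2 seconds
iostat -xm 2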

Separate VLANs and load balancing. It's the recommended approach from my storage vendor. Using multipath also has some other advantages I prefer.
 
I just want to point out that if you have multiple pipes (which I take as a given; what's the point of multipathing without them?), any one LUN can only use one path at a given time, leaving the other(s) idle. For an active/active config you want a minimum of one LUN per host channel on your storage.

I'll show you what I have in multipath and see if we can improve

[0:0:0:10] disk HP HSV340 1120 /dev/sdb 783GB
[0:0:0:11] disk HP HSV340 1120 /dev/sdc 783GB
[0:0:0:12] disk HP HSV340 1120 /dev/sdd 4.39TB
[0:0:0:13] disk HP HSV340 1120 /dev/sde 4.71TB

[0:0:1:10] disk HP HSV340 1120 /dev/sdf 783GB
[0:0:1:11] disk HP HSV340 1120 /dev/sdg 783GB
[0:0:1:12] disk HP HSV340 1120 /dev/sdh 4.39TB
[0:0:1:13] disk HP HSV340 1120 /dev/sdi 4.71TB

[4:0:0:10] disk HP HSV340 1120 /dev/sdj 783GB
[4:0:0:11] disk HP HSV340 1120 /dev/sdk 783GB
[4:0:0:12] disk HP HSV340 1120 /dev/sdl 4.39TB
[4:0:0:13] disk HP HSV340 1120 /dev/sdm 4.71TB

[4:0:1:10] disk HP HSV340 1120 /dev/sdn 783GB
[4:0:1:11] disk HP HSV340 1120 /dev/sdo 783GB
[4:0:1:12] disk HP HSV340 1120 /dev/sdp 4.39TB
[4:0:1:13] disk HP HSV340 1120 /dev/sdq 4.71TB
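For reference, a listing like that can be produced with lsscsi (assuming the package is installed; the -s option adds the size column):

Code:
lsscsi -s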

multipath -ll
DFAST2 (36001438009b066860000600000690000) dm-2 HP,HSV340
size=730G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 0:0:0:11 sdc 8:32 active ready running
|- 0:0:1:11 sdg 8:96 active ready running
|- 4:0:0:11 sdk 8:160 active ready running
`- 4:0:1:11 sdo 8:224 active ready running
DFAST1 (36001438009b066860000600000650000) dm-1 HP,HSV340
size=730G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 0:0:0:10 sdb 8:16 active ready running
|- 0:0:1:10 sdf 8:80 active ready running
|- 4:0:0:10 sdj 8:144 active ready running
`- 4:0:1:10 sdn 8:208 active ready running
DLOW2 (36001438009b066860000600000720000) dm-4 HP,HSV340
size=4.3T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 0:0:0:13 sde 8:64 active ready running
|- 0:0:1:13 sdi 8:128 active ready running
|- 4:0:0:13 sdm 8:192 active ready running
`- 4:0:1:13 sdq 65:0 active ready running
DLOW1 (36001438009b0668600006000006d0000) dm-3 HP,HSV340
size=4.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 0:0:0:12 sdd 8:48 active ready running
|- 0:0:1:12 sdh 8:112 active ready running
|- 4:0:0:12 sdl 8:176 active ready running
`- 4:0:1:12 sdp 8:240 active ready running


I think they are set up active/active.
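For what it's worth, a device stanza roughly like this in /etc/multipath.conf would give the single round-robin path group and queue_if_no_path feature shown above. It's only a sketch reconstructed from the output, not HP's official recommendation for the HSV340, so check the vendor docs before changing anything:

Code:
devices {
    device {
        vendor               "HP"
        product              "HSV340"
        path_grouping_policy multibus         # all paths in one active group
        path_selector        "round-robin 0"
        path_checker         tur
        no_path_retry        queue            # appears as 'queue_if_no_path' in multipath -ll
    }
}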
 
Looks good, I don't see any issues. Just throw LVM on top (don't create any LVs, just the VG), then present the VGs to the Proxmox hosts from the GUI.
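As a rough sketch of those steps (the VG and storage names here are just examples; DFAST1 is the multipath alias from the output above):

Code:
# create the PV and VG on the multipath device, no LVs
pvcreate /dev/mapper/DFAST1
vgcreate vg_fast /dev/mapper/DFAST1
# then add the VG as shared LVM storage from the GUI, or via pvesm:
pvesm add lvm fc-fast --vgname vg_fast --shared 1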
 
rickygm said:
I'll show you what I have in multipath and see if we can improve

If I read this right, you have 4 separate LUNs, each with 4 paths. That seems perfectly balanced. The only POTENTIAL for performance improvement is to split your 4 LUNs into 8, but that only helps for specific use cases.
 
Sorry, I didn't explain that clearly: I have two hosts, with two HBAs per host.
lspci | grep -i fibre
07:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
0a:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
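With two HBAs per host you can check that both ports are actually logged into the fabric through sysfs (the qla2xxx driver exposes the standard fc_host attributes):

Code:
# port state and link speed for each FC HBA port
cat /sys/class/fc_host/host*/port_state
cat /sys/class/fc_host/host*/speed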
 
I would merge the fast ones into one volume group and the slow ones into another, but besides that, it's OK.

e.g. like this

Code:
root@proxmox4 ~ > pvs
  PV                                       VG       Fmt  Attr PSize PFree
  /dev/mapper/EVA6400_PROXMOX_DATA_FAST_01 san-fast lvm2 a--  1,95t  66,00g
  /dev/mapper/EVA6400_PROXMOX_DATA_FAST_02 san-fast lvm2 a--  1,95t   1,88t
  /dev/mapper/EVA6400_PROXMOX_DATA_SLOW_01 san-slow lvm2 a--  1,95t 991,99g
  /dev/mapper/EVA6400_PROXMOX_DATA_SLOW_02 san-slow lvm2 a--  1,95t   1,20t
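As a sketch, a layout like that comes from creating one VG per tier across the matching multipath devices (names taken from the pvs output above):

Code:
pvcreate /dev/mapper/EVA6400_PROXMOX_DATA_FAST_0{1,2} /dev/mapper/EVA6400_PROXMOX_DATA_SLOW_0{1,2}
vgcreate san-fast /dev/mapper/EVA6400_PROXMOX_DATA_FAST_01 /dev/mapper/EVA6400_PROXMOX_DATA_FAST_02
vgcreate san-slow /dev/mapper/EVA6400_PROXMOX_DATA_SLOW_01 /dev/mapper/EVA6400_PROXMOX_DATA_SLOW_02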