Security is important. Perhaps iSCSI is the better solution? I would also like to connect some non-blade 1U servers I have to the shared storage... so although my blade enclosure probably has some unified storage connection, I think something networked may be better for me.
Let's say I have a budget of $3,000 for a shared storage solution... any contenders in that range? Say I need to scale up to 100 Linux VMs: some FreePBX/Asterisk, some OpenVPN servers, and some email servers. Nothing too high-traffic and nothing too disk-intensive.
Speed (latency) and redundancy/integrity of data are most important to me; I can't afford to lose custom configurations for my customers' VoIP systems, etc.
I think $3k is a good start for a single SAN server. Another consideration is the network interfaces on your Proxmox nodes and the networking equipment they're connected to. For example, we had a lot of issues with QoS: should SAN traffic be favored over SIP or RTP? What happens if you need to boot a VM while 80 other PBX VMs are handling live calls?
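One way to tackle that question is to tag the traffic classes with DSCP on the hypervisor so your switches can prioritize. A minimal sketch, assuming iSCSI on its default port (3260), SIP on 5060, and Asterisk's default RTP range (10000-20000); adjust for your actual ports, and note your switches have to be configured to trust DSCP marks for any of this to do anything:

```
# Mark storage traffic as CS4 and voice signaling/media as EF in the
# mangle table. Port numbers are common defaults -- check yours first.
iptables -t mangle -A POSTROUTING -p tcp --dport 3260 -j DSCP --set-dscp-class cs4
iptables -t mangle -A POSTROUTING -p udp --dport 5060 -j DSCP --set-dscp-class ef
iptables -t mangle -A POSTROUTING -p udp --dport 10000:20000 -j DSCP --set-dscp-class ef
```

In this sketch voice wins over storage; whether that's right for you depends on whether a delayed disk read hurts you more than choppy audio on a live call.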
What kind of network interfaces do your Proxmox nodes have? What is your networking vendor and backbone topology? You need to consider scalability and redundancy, as well as geo-redundancy. If you're willing to share more information about your setup, I can tell you how my firm solved our storage problem for MUCH less than $6k (assuming a redundant NAS setup @ $3k each) plus upkeep overhead.
So for 'speed', the real metric is your network throughput, QoS, and your Proxmox nodes' NIC bandwidth.
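If you want numbers instead of guesses, here's a quick baseline sketch, assuming iperf3 is installed on both ends (10.0.0.50 is a placeholder for your storage box's IP):

```
# On the storage box:
iperf3 -s

# From a Proxmox node: 4 parallel streams for 30 seconds
iperf3 -c 10.0.0.50 -t 30 -P 4

# While that runs, watch latency from another shell -- for VoIP VMs,
# RTT under load matters more than the raw Gbit/s figure:
ping -c 100 10.0.0.50
```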
For redundancy, there are open-source options like Ceph, ZFS, and pNFS (I'm assuming you intend to have multiple physical storage arrays for redundancy).
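If you go the ZFS route, a minimal sketch of a redundant pool (disk names are placeholders; use /dev/disk/by-id paths in production):

```
# Two mirrored vdevs, striped -- roughly RAID10. Each mirror can lose
# a disk without data loss.
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
zfs set compression=lz4 tank

# Periodic scrubs verify every block against its checksum and catch
# silent corruption early:
zpool scrub tank
zpool status tank
```

That scrub/checksum behavior also covers a good chunk of the integrity question, at least at the block level.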
As for integrity above the block level (ZFS checksumming aside), I don't know of any open-source Linux tools that can accomplish this automatically. One issue with Proxmox in a shared-storage environment is the possibility of a split-brain condition. This happens when corosync detects a node failure for whatever reason and initializes the HA VMs on a quorate node, but the "failed" node still has those VMs running and reading/writing to the storage system. Now you have two identical virtual identities reading and writing to the same file location, eventually destroying it.

Proxmox does have some fencing support, for example IPMI fencing. However, if the IPMI interface becomes unresponsive on the failed node, Pacemaker uses an outdated Open Cluster Framework (OCF) script that will hang forever waiting for an IPMI response, effectively making your HA cluster non-HA. So decide whether you'd rather risk split-brain for a single VM, or have an entire Proxmox node and however many VMs it was running down until you get angry calls from 120 customers all at once.
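Before trusting HA to do the right thing, it's worth verifying quorum and fencing state yourself. A couple of quick checks (note that newer Proxmox, 4.x and later, does watchdog-based fencing through ha-manager rather than the Pacemaker/IPMI path I described, so first figure out which world you're in):

```
# Quorum state as corosync sees it (run on any node):
pvecm status

# Is a hardware or software watchdog actually loaded? Proxmox's HA
# stack falls back to softdog if nothing else is configured:
lsmod | grep -E 'softdog|ipmi_watchdog'

# HA resource and fencing status on PVE 4.x+:
ha-manager status
```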