Sorry, I don't. As I said, we don't use LXC at work, and we only use Gluster for experimental lab stuff with KVM guests (a different use case from yours).
Q: What connectivity do your Proxmox nodes have? 1G, 10G, InfiniBand?
The reason I keep asking is as follows:
Whenever you use a SAN, Ceph or Gluster, you want to go with a separate storage network, or a single network that is properly sized and properly managed with QoS.
For Gluster specifically, this is because of the following:
http://blog.gluster.org/2010/06/video-how-gluster-automatic-file-replication-works/
Basically, whenever you write a file to Gluster, your outgoing bandwidth gets divided by the number of Gluster storage nodes (replicas), because the client writes to all of them in parallel.
So let's say you have a 1G pipe and 2 SANs: you're left with 0.5G, or 62.5 MB/s, of outgoing bandwidth from your Proxmox node when you write from it. That gets shared across 50 CTs. That's why I asked earlier if you have any metrics to share on the current usage of your storage subsystem.
It is also important the other way around. Let's say you have 2 Gluster nodes, each with a single dedicated 1G pipe, and 3 Proxmox nodes attached to them. When only one Proxmox node is reading a large amount of files, you statistically end up with 2G worth of bandwidth (or 250 MB/s for 50 CTs), but if all 3 Proxmox nodes are using the Gluster storage, you are looking at 2G/3 = 0.66G, or 83 MB/s, for 50 CTs, which by the way is about 1.6 MB/s per CT.
Not sure that will work for you, but that's why knowing your current metrics is important.
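If you want to play around with the numbers yourself, here is that back-of-the-envelope math as a quick Python snippet. The link speeds, replica count, client count and CT count are just the example figures from above, not your real numbers:

```python
# Back-of-the-envelope Gluster bandwidth estimate, using the example
# figures from the post above (not measurements).

GBIT_TO_MBYTE = 125  # 1 Gbit/s ~= 125 MB/s

def write_mb_per_ct(link_gbit, replicas, containers):
    # A client writes to every replica, so its outgoing pipe is split.
    return link_gbit * GBIT_TO_MBYTE / replicas / containers

def read_mb_per_ct(storage_nodes, storage_link_gbit, clients, containers):
    # Reads draw from the storage nodes' combined pipes, shared by all clients.
    return storage_nodes * storage_link_gbit * GBIT_TO_MBYTE / clients / containers

print(write_mb_per_ct(1, 2, 50))     # ~1.25 MB/s per CT when one node writes
print(read_mb_per_ct(2, 1, 1, 50))   # ~5 MB/s per CT, only one Proxmox node reading
print(read_mb_per_ct(2, 1, 3, 50))   # ~1.66 MB/s per CT, all 3 Proxmox nodes reading
```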
I'd self-build a node with Gluster before I'd go and buy a ready-made SAN (and maybe set up Gluster on top of it), or go with something like NetApp.
It's a lot cheaper, and not just in base cost but also in running cost.
This is because you can leave out all the redundancy features and spec it to exactly your needs: all you need is case + mainboard + CPU + RAM + PSU + disks/flash + NIC(s), sized exactly for your use case.
Then just set up your favourite Linux + ZFS + Gluster and you're done.
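Just to illustrate the shape of it (a rough, untested sketch, and the pool/volume/host names like tank, gv0, gluster1/gluster2 are made up), the whole thing is basically a ZFS pool per node plus a handful of gluster commands:

```python
# Rough, untested sketch of the ZFS + Gluster steps I mean.
# Pool/volume/host names (tank, gv0, gluster1, gluster2) and the disk list
# are placeholders; the gluster CLI may also ask for confirmation
# (e.g. the replica-2 split-brain warning), so run/verify each step yourself.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# On each storage node: ZFS pool and a dataset to hold the Gluster brick
run(["zpool", "create", "tank", "raidz", "/dev/sda", "/dev/sdb", "/dev/sdc"])
run(["zfs", "create", "tank/brick"])

# On the first node only: probe the second node, then create and start a replica-2 volume
run(["gluster", "peer", "probe", "gluster2"])
run(["gluster", "volume", "create", "gv0", "replica", "2",
     "gluster1:/tank/brick/gv0", "gluster2:/tank/brick/gv0"])
run(["gluster", "volume", "start", "gv0"])
```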
Need more redundancy? Just add another Gluster node.
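And "adding another Gluster node" really is just a peer probe plus an add-brick that bumps the replica count; same made-up names as above:

```python
# Hypothetical sketch: bring a third node (gluster3) into the pool and
# raise the replica count of the made-up volume gv0 from 2 to 3.
import subprocess

subprocess.run(["gluster", "peer", "probe", "gluster3"], check=True)
subprocess.run(["gluster", "volume", "add-brick", "gv0", "replica", "3",
                "gluster3:/tank/brick/gv0"], check=True)
```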