> Go for Intel DC S35xx (cheap) or Intel DC S37xx (more expensive but better performance and durability)

We use the DC S3510s in some of our servers. We did have one go bad after two weeks, but besides that they've been great.
We've used both FreeNAS and NAS4Free; both have been solid. I also helped a buddy deploy an iXsystems solution in a DC, and those guys were great to work with. I'm personally partial to FreeNAS, mainly because there is a commercial company behind it that pays a larger number of developers, it has the larger community (although with quite a few anti-social members), and it seems to have a healthier commit rate.
What's the reasoning behind going all-SSD? Is it just reliability, or does speed factor into this as well?
No. UPS is enough for me.
> That seems to be the common consensus. Obviously we'll build the NAS to be as reliable as possible, but it seems like the component that would be the single point of failure to the entire cluster should be redundant.

For redundancy check this out: http://www.znapzend.org/
> For redundancy check this out: http://www.znapzend.org/

Checked it out. Not a lot of documentation. And no one on the IRC channel. Has anyone ever been successful getting OpenVZ/LXC working on Gluster?
Sorry, I do not. As I said, we do not use LXC at work, and we only use Gluster for experimental lab stuff with KVM guests (a different use case from yours).
Q: What connectivity do your Proxmox nodes have? 1G, 10G, InfiniBand?
The reason I keep asking is as follows:
Whenever you use a SAN, Ceph, or Gluster, you want to go with a separate storage network, or a single network that is properly sized and properly managed with QoS.
For Gluster specifically, this is because of the following:
http://blog.gluster.org/2010/06/video-how-gluster-automatic-file-replication-works/
Basically, whenever you write a file to Gluster, your bandwidth gets divided by the number of "SANs with Gluster on top".
So let's say you have a 1G pipe and 2 SANs: you're left with 0.5G, or 62.5 MB/s, of outgoing bandwidth from your Proxmox node when you write from it. That is shared across 50 CTs. That's why I asked earlier if you have any metrics to share on your current usage of your storage subsystem.
It is also important the other way around. Let's say you have 2 Gluster nodes, each with a single dedicated 1G pipe, and you have 3 Proxmox nodes attached to them. When only one Proxmox node is reading a large amount of files, you statistically end up with 2G worth of bandwidth (or 250 MB/s for 50 CTs). But if all 3 Proxmox nodes are using the Gluster storage, you are looking at 2G/3 = 0.66G, or 83 MB/s for 50 CTs, which is, by the way, 1.6 MB/s per CT.
Not sure that will work, but that's why knowing your current metrics is important.
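If it helps, here is a minimal Python sketch of the same back-of-the-envelope model, so you can rerun the arithmetic with your own link speeds and CT counts. It assumes, as described above, that writes fan out to every Gluster node and that reads share the storage nodes' aggregate link speed evenly across the Proxmox nodes; the function and variable names are just for illustration, not from any tool.

```python
# Back-of-the-envelope Gluster bandwidth model, matching the figures above.
# Assumptions: writes are replicated to every Gluster node, so client write
# bandwidth is divided by the node count; reads can use the aggregate
# bandwidth of all Gluster nodes, shared evenly across reading Proxmox nodes.

MBPS_PER_GBIT = 125  # 1 Gbit/s ~= 125 MB/s (hence 62.5 MB/s for 0.5G above)

def write_mb_per_ct(client_link_gbit, gluster_nodes, containers):
    """MB/s per container when one Proxmox node writes to a replicated volume."""
    per_node = client_link_gbit * MBPS_PER_GBIT / gluster_nodes
    return per_node / containers

def read_mb_per_ct(gluster_link_gbit, gluster_nodes, proxmox_nodes, containers):
    """MB/s per container when all Proxmox nodes read at the same time."""
    aggregate = gluster_link_gbit * MBPS_PER_GBIT * gluster_nodes
    return aggregate / proxmox_nodes / containers

# 1G client pipe, 2 Gluster nodes, 50 CTs -> 62.5 MB/s shared, ~1.25 MB/s per CT
print(write_mb_per_ct(client_link_gbit=1, gluster_nodes=2, containers=50))

# 2 Gluster nodes with 1G each, 3 Proxmox nodes reading, 50 CTs
# -> ~1.6 MB/s per CT, the figure quoted above
print(read_mb_per_ct(gluster_link_gbit=1, gluster_nodes=2,
                     proxmox_nodes=3, containers=50))
```

Plugging your real numbers into something like this is a quick sanity check before committing to a network layout.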
I'd self-build a node with Gluster before I'd go and buy a ready-made SAN (and maybe set up Gluster on top of it), or go with something like NetApp.
A lot cheaper. The reason is not just base cost, but also running cost.
This is because you can leave out all the redundancy features and spec it to exactly your needs: all you need is case + mainboard + CPU + RAM + PSU + disks/flash + NIC(s). You size them exactly as you need them for your use case.
Then just set up your favourite Linux + ZFS + Gluster and you're done.
Need more redundancy? Just add another Gluster node.
> Hi Nils. Open-E looks interesting. Do you connect to it over iSCSI?

Yes, we use it as an iSCSI solution. I think it also supports FC.