Hi,
what kind of SAN do you use?
Ultimately storage will be tiered 7 ways, on & off-line.
This has only just this week reached a barely functional stage.
So far there are two PowerEdge servers: PVE on the 2950 and Debian Squeeze running nfs-kernel-server on the R210.
High-speed connectivity is via 2 SFP+ cables run directly between the two servers. Each is equipped with a dual-port Intel 82598 10Gb NIC bonded in balance-rr mode.
All these are Dell parts.
Low-speed access to the R210 is through a Netgear GS105 unmanaged Gb switch, connected via 2 Cat6 cables to a dual-port Broadcom NetXtreme II BCM5716 1Gb NIC bonded in balance-rr mode.
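For anyone curious, the bonding itself is nothing exotic. Here's roughly what the 10Gb bond stanza looks like in Debian's /etc/network/interfaces (interface names and the address are placeholders, not my real ones; needs the ifenslave package):

    # 10Gb bond between the two servers (eth2/eth3 are example slave names)
    auto bond0
    iface bond0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bond-slaves eth2 eth3
        bond-mode balance-rr
        bond-miimon 100

The 1Gb bond on the R210 is the same idea with the Broadcom ports as slaves.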
On the R210 there are 3 speeds available:
25GB on a Kingston SSDNow E-Series via SATA II
Dual 2TB Seagate Barracuda XTs in software RAID1 via SATA III (mdadm sketch just after this list)
Six 1TB virtual drives provided by a Drobo-S full of 2TB SATA II drives, via an eSATA (III) cable.
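The RAID1 pair mentioned above was put together with mdadm, roughly like this (device names are just examples):

    # mirror the two Barracuda XTs (example device names; destroys existing data)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    # format the mirror as ext4 so it can be converted to btrfs later
    mkfs.ext4 /dev/md0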
As I said, it's barely functional; nothing's working properly yet.
One VM's database is supposed to live on that SSD, but so far it only works if I install the entire server onto a raw image stored there and exposed through PVE's NFS storage.
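For context, the plumbing behind that is just nfs-kernel-server on the R210 plus an NFS storage entry on the PVE side, something like this (paths, names and addresses are placeholders):

    # /etc/exports on the R210 (example path and subnet)
    /srv/ssd  10.10.10.0/24(rw,async,no_subtree_check,no_root_squash)

    # /etc/pve/storage.cfg on the 2950 (example storage name)
    nfs: ssd-store
            path /mnt/pve/ssd-store
            server 10.10.10.2
            export /srv/ssd
            content images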
The connections are supposed to be brokered by Vyatta in a VM, but for now PVE is still using the 1Gb connection to reach the switch, which has the 2Gb bonded link to the server.
There's still a lot to sort out. That's partly because, among other things, I'm apparently slow to understand routing, and the learning curve's a hell of a thing.
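What I'm aiming for with Vyatta is nothing fancier than an interface on each segment so it can route between them, something like this from its CLI (addresses and interface names are made up for illustration):

    # inside the Vyatta VM: one leg on the 1Gb LAN, one on the 10Gb storage net
    configure
    set interfaces ethernet eth0 address 192.168.1.254/24
    set interfaces ethernet eth1 address 10.10.10.254/24
    commit
    save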
Plus the Drobo is awfully slow. I suspect a faulty cable; replacing it and monitoring the effect is on the to-do list.
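The plan for monitoring it is just raw before/after numbers with hdparm and dd, along these lines (device and mount point are examples):

    # buffered read speed straight off the Drobo's block device
    hdparm -t /dev/sdd
    # sequential write onto its filesystem, flushed to disk before dd reports
    dd if=/dev/zero of=/mnt/drobo/testfile bs=1M count=1024 conv=fdatasync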
Currently I'm only able to achieve 100 GB/hour during backups.
This in itself is an enormous improvement from how it once was, when I had NFS over a 1Gb SheevaPlug with an eSATA drive.
What once took almost 18 hours now completes in less than 6, and this is only the first draft.
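Rough arithmetic, for anyone wondering where the bottleneck sits: 100 GB/hour is about 100,000 MB / 3600 s ≈ 28 MB/s, which wouldn't even saturate a single 1Gb link (~125 MB/s theoretical), never mind the bonded 10Gb pair, so the network probably isn't the limiting factor yet.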
The project was discussed in this thread over the summer; I've learned a lot since.
I hope to have the 20Gb link sorted by Friday, but that's what I said last week too, lol.
iSCSI with a Myricom NIC, or Dolphin? What is your experience with that (throughput)?
Udo
I don't know what any of that is; it sounds like hardware?
What I do know is that I wasted some time developing a sane naming convention back when I was considering iSCSI.
The decision to go with NFS came about when I learned an ext4 volume can be converted to btrfs once that's ready.
In order to make use of its deduplication, it made more sense to have the server rather than the client manage the filesystem.
Therefore, with the exception of the Drobo, whose sparse provisioning only understands ext3, all the storage volumes are btrfs-ready ext4.
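When btrfs does get there, the conversion itself should just be btrfs-convert from btrfs-progs run against each unmounted volume, e.g. (example device):

    # in-place conversion; the volume must be unmounted first
    btrfs-convert /dev/md0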