My Situation: I've got 3 XenServer hosts in a datacenter that's overkill for my needs, and I'm tired of paying for a full rack. So I'm going to host out of my house - gigabit fiber and generator backup are already here, so it just makes sense. It'll be a step down as far as uptime goes, but for the money I'm saving I'm fine with it. (I think.)
Anyway, as part of the move I'm doing away with XenServer because, well, it's Citrix, and I don't particularly like the company, and I really don't like the direction they're taking XenServer. Proxmox is the platform I'll migrate to if I can make it work. So with that in mind, I've got the following coming:
- At least* one Dell T30 server. Nothing fancy - E3-1225, 64 GB max RAM.
- An LSI SAS card that I can flash to IT mode (a hedged flashing sketch follows this list).
- Some Toshiba 4 TB drives.
- A Samsung SM863a SSD for SLOG and maybe L2ARC use.
- 64 GB of RAM from Crucial.
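Since I mentioned the IT-mode flash: here's roughly what I expect that part to look like. Hedged sketch only; it assumes a SAS2008-class card (9211-8i or similar) and Broadcom's sas2flash utility, and the exact firmware image name depends on the card, so whatever the firmware README says wins over this.

```bash
# Hedged sketch: assumes a SAS2008-based card and IT firmware downloaded
# from Broadcom's support site. Usually run from a DOS or EFI boot stick.
# Note the card's SAS address (sticker on the card) before erasing; a full
# erase may require re-entering it with: sas2flash -o -sasadd <address>
sas2flash -listall          # confirm the card is visible
sas2flash -o -e 6           # erase the existing (IR) firmware
sas2flash -o -f 2118it.bin  # flash the IT-mode image (name varies by card)
sas2flash -listall          # verify the firmware now reports IT mode
```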
The plan:
- Install as RAIDZ1 using the onboard SATA controller, install a VM, and measure drive performance (a hedged benchmark sketch follows this list).
- Move the drives from the motherboard SATA ports to the SATA card and redo the test. I'd like to know whether the third-party card outperforms the onboard controller in any measurable way, as some on the Internet claim.
- Add the SSD as a SLOG and retest the same way (SLOG sketch below).
- Tweak kernel.sched_migration_cost and kernel.sched_autogroup_enabled, because why not? I probably won't see much change on a single VM, but it caught my eye today, and if the results are noticeable I'll post something about it (sysctl sketch below).
- I'll probably install Ceph, migrate the testing VM's storage to that, and test again (disk-move sketch below).
- Once I migrate my servers and vacate my rack, I'll install the QNAP rack-mount server in my house, add NFS and iSCSI stores from it, and test those as well (storage-definition sketch below). How does Ceph perform versus file- and block-based shares from a separate device? Can you even notice the difference over a 1 Gbit connection?
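For the drive measurements, my current plan is a RAIDZ1 pool plus a fixed set of fio runs repeated unchanged for every configuration, so the numbers stay comparable. A sketch with placeholder device paths and a made-up pool name ("tank"); adjust for however many Toshibas actually arrive:

```bash
# Hedged sketch: device paths and the pool name are placeholders.
# Build the RAIDZ1 pool from the spinners, 4K-aligned.
zpool create -o ashift=12 tank raidz1 \
    /dev/disk/by-id/ata-TOSHIBA_drive1 \
    /dev/disk/by-id/ata-TOSHIBA_drive2 \
    /dev/disk/by-id/ata-TOSHIBA_drive3

# The fio runs to repeat for each configuration, inside the test VM.
# Sync random writes are the case a SLOG actually helps.
fio --name=randwrite --ioengine=libaio --direct=1 --sync=1 \
    --rw=randwrite --bs=4k --size=4g --numjobs=4 --runtime=120 \
    --time_based --group_reporting
fio --name=seqread --ioengine=libaio --direct=1 \
    --rw=read --bs=1m --size=8g --runtime=120 --time_based --group_reporting
```

The numbers worth keeping from each run are the IOPS, bandwidth, and latency figures fio prints with --group_reporting.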
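Adding the SM863a afterward should just be a couple of zpool commands. Sketch below, again with placeholder partition paths; I'd partition the SSD so a small slice does SLOG duty and the rest can be L2ARC:

```bash
# Hedged sketch: partition paths are placeholders. A SLOG only needs a
# few GB (it holds at most a few seconds of writes); the rest can be cache.
zpool add tank log   /dev/disk/by-id/ata-SAMSUNG_SM863a-part1
zpool add tank cache /dev/disk/by-id/ata-SAMSUNG_SM863a-part2
zpool status tank    # verify the log and cache vdevs are attached
```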
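The scheduler knobs are one-liners. One correction to my own note: on current kernels the first one is exposed as kernel.sched_migration_cost_ns. The values below are just ones I've seen suggested around the web, not tested recommendations:

```bash
# Hedged sketch: values are commonly suggested starting points, not
# measured results. Defaults are 500000 (0.5 ms) and 1 respectively.
sysctl -w kernel.sched_migration_cost_ns=5000000
sysctl -w kernel.sched_autogroup_enabled=0

# To persist across reboots:
cat >> /etc/sysctl.d/90-sched-tuning.conf <<'EOF'
kernel.sched_migration_cost_ns = 5000000
kernel.sched_autogroup_enabled = 0
EOF
```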
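For the Ceph step, once the pool is defined as a Proxmox storage entry, moving the test VM's disk over should be a single qm call. Placeholder VMID, disk, and storage name below; also note the subcommand spelling differs between Proxmox releases:

```bash
# Hedged sketch: VMID 100, disk scsi0, and the "ceph-vm" storage name
# are placeholders for whatever the cluster ends up with.
qm move_disk 100 scsi0 ceph-vm --delete 1    # older PVE releases
# qm move-disk 100 scsi0 ceph-vm --delete 1  # newer PVE spelling
```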
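And the QNAP-backed stores should be pvesm one-liners. Placeholder address, export path, and target IQN below; the real values obviously come from the QNAP config:

```bash
# Hedged sketch: server address, export path, and IQN are placeholders.
pvesm add nfs qnap-nfs --server 192.168.1.50 \
    --export /share/proxmox --content images,backup
pvesm add iscsi qnap-iscsi --portal 192.168.1.50 \
    --target iqn.2004-04.com.qnap:example-target
pvesm status    # confirm both stores come up active
```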
So, my question: are there any particular numbers that would be most useful? Any other tests I should run while I'm at it?
* I say "at least" because the server I ordered last week never got confirmed, and Dell's site didn't have it listed. So I called Dell and got a nice Indian woman who told me there was no order by that number and that my card wasn't charged. So I ordered another one, and paid with Paypal. Paypal did get charged, but no order listing. With lots of cursing I ordered the same thing from an Amazon reseller. Now I've got something arriving via UPS tomorrow (the original arrival date) from a company that offers "the largest excess storage supply chain network across the globe." So, yeah. I hope I like the box because I probably just committed to a HA cluster of the things...