Thanks again, all. So yeah; old hardware is old. At least now that I have a better sense of what I'm looking at, I'll be able to "give it to them straight" when their lofty goals come up. As for keeping some lit up, I actually like the horse-and-buggy picture... in today's world, at most they might...
LnxBil, Andrew Hart: Yep. Ceph is a bad fit for this old hardware. Thanks. No SSDs. Two 1 GbE NICs. Little RAM. Horrible firmware/hardware support for... anything.
I will be learning more and floating some ceph tests with some more spendy orgs though. I'm gathering that ceph is flat-out...
Thanks for the replies!
I should have mentioned that it's unlikely I'll get to keep them together geographically. If future meetings on their needs go the way I anticipate, the boxes will end up split into groups of one to six nodes, on sites with external bandwidth too poor for any sort of...
I was allocated a stack of decommissioned HP servers that only ever saw extremely light usage. Fun! But they belong to a non-profit, and my budget for this side project is precisely $0. Almost all are single-chip, four-core Xeons. They have neither SSDs nor hardware RAID. I would still like to...
Hi, Alwin.
I frankly expected a petty response to my petty post :) Thanks for not just deleting the thread outright. I'm hopeful there's progress to be made.
To be fair to the documentation, it's pretty good! Everything I've seen there was accurate and up to date. My "badly outdated" comment...
Enthusiastic new Proxmox user here. My experience in the past two hours, discrete problems in bold...
Find some holes in the documentation. I get it. Not a huge deal, plus I see there's a wiki!
The two wiki articles I'm depending on cover the documentation gap, but are badly outdated.
I figure...