Should I Not?

You could watch other YouTubers. The channels I watch build clusters using proper refurbished rackable enterprise hardware that fits the Ceph/ZFS/cluster requirements ;)

For your $2,100 you could build two servers, each with 32 cores, 512 GB RAM, 4x 1 TB enterprise SSDs in RAID 10, and a 40 Gbit NIC. Not new, quite outdated, very power hungry, very loud, but tons of bang for the buck and well suited for server workloads.
 
Last edited:
  • Like
Reactions: J-dub
lol just what I need!

3 screaming full depth servers, gobs of redundant storage networking, slap in an AC unit for the new "server" room, and start chewing through $200 a month in electricity... But wow did the "Barbie" movie load fast eh?

I think I've traveled far enough down this rabbit hole :)
 
3 screaming full depth servers, gobs of redundant storage networking, slap in an AC unit for the new "server" room, and start chewing through $200 a month in electricity... But wow did the "Barbie" movie load fast eh?

I think I've traveled far enough down this rabbit hole
You could transplant the hardware into quiet 4U cases with 120mm fans. Or even into your default uATX RGB gaming towers. Then they are silent enough that I can sleep in the same room. And one of them is idling at 20W. Server hardware doesn't mean it has to be loud and power inefficient. My gaming PC is way louder and more power hungry than my biggest server.
 
Last edited:
You could watch other YouTubers. The channels I watch build clusters using proper refurbished rackable enterprise hardware that fits the Ceph/ZFS/cluster requirements ;)

For your $2,100 you could build two servers, each with 32 cores, 512 GB RAM, 4x 1 TB enterprise SSDs in RAID 10, and a 40 Gbit NIC. Not new, quite outdated, very power hungry, very loud, but tons of bang for the buck and well suited for server workloads.

I would just add that the "outdated" 32-core servers will also happen to be at about the level of a Coffee Lake i7 "consumer" chip that has a tenth of the TDP, and, like you wrote, they are vacuum cleaners. The only part that would probably be more economical is the RAM (especially since it's ECC). The electricity bill alone would pay for a Minisforum every few months. There's a reason the gear is sold off at those low prices.
 
I would just add that the "outdated" 32-core servers will also happen to be at about the level of a Coffee Lake i7 "consumer" chip that has a tenth of the TDP, and, like you wrote, they are vacuum cleaners. The only part that would probably be more economical is the RAM (especially since it's ECC). The electricity bill alone would pay for a Minisforum every few months. There's a reason the gear is sold off at those low prices.
It's not all about CPU performance. Most of the time my 32-thread Xeon is idling at below 2% load. What I need are the 40 PCIe lanes for the 7 PCIe slots, which are all populated, as well as lots of ECC RAM (all 8 DIMM sockets populated and I'm running out of space again...). The only things I keep running out of here are RAM, PCIe slots and 2.5/3.5" bays. All stuff a MiniPC can't offer, even if that new 4-core mobile CPU is faster than my 16-core Xeon.
 
Last edited:
I actually used to have 2 PowerEdge R710 servers and 1 PowerEdge R620 that I used in my old MSP business. I gave the rack away, full of the servers, HDDs, rails, etc. and the network stack, to some poor, tall, good looking kid (must be a hard life) on Facebook Marketplace. Only kept the UPS. Was the first thing I did. I hated those blades. So loud, super power hungry. Took up tons of space. Ugly too.

I went to the mini PCs for $, size, power usage, noise and surprising ability. I had full intentions of using thunderbolt-net and Ceph (at first). It just didn't work out the way I planned!
 
It's not all about CPU performance. Most of the time my 32-thread Xeon is idling at below 2% load. What I need are the 40 PCIe lanes for the 7 PCIe slots, which are all populated, as well as lots of ECC RAM (all 8 DIMM sockets populated and I'm running out of space again...). The only things I keep running out of here are RAM, PCIe slots and 2.5/3.5" bays. All stuff a MiniPC can't offer, even if that new 4-core mobile CPU is faster than my 16-core Xeon.

I understand the bays and RAM, but care to share what the PCIe slots hold (if we're still talking homelab)?

I've also noticed the YouTubers picking up all sorts of 2Us from 10+ years ago and then wondering about dead iLOs (Dell had issues like that too, I just forgot what it was called). One ends up with hardware with no warranty that cost about as much as regular, more modern (not necessarily even last year's) non-server kit which, sure, has no ECC RAM or all the bays and RAID controllers (that lots swap for an HBA anyway), but this is indeed offset by running a tiny cluster, even just 2 (+1 QDevice) nodes.

Don't get me wrong, if that works for you, none of my business, but even the YouTubers then go around giving tips on how to "turn off their extra capacity" at night because of the electricity bills. Even if I did not care about that, the noise .. and the hacks people end up doing for that - ouch.
 
I actually used to have 2 PowerEdge R710 servers and 1 PowerEdge R620 that I used in my old MSP business. I gave the rack away, full of the servers, HDDs, rails, etc. and the network stack, to some poor, tall, good looking kid (must be a hard life) on Facebook Marketplace. Only kept the UPS. Was the first thing I did. I hated those blades. So loud, super power hungry. Took up tons of space. Ugly too.

I went to the mini PCs for $, size, power usage, noise and surprising ability. I had full intentions of using thunderbolt-net and Ceph (at first). It just didn't work out the way I planned!

I mean, I understand the 7xx, but a 1U for home is always noisy, and it has zero resale value because the density can't match a modern 2U with 4 "blades" inside. If someone gets it for FREE because a company is cleaning up, sure, it's a fun experience, but at the end of the day it's also fun to run a multitude of Raspberry Pis with k8s. And many drive bays might be a thing, but the capacities of 3.5" spinning drives are decent nowadays. For SSDs, I understand the datacentre ones have high TBW ratings, but I always look at the price, then look at what I get when I match that price with a consumer part; often you realise you get a 4x capacity consumer SSD for it, and that one suddenly does not have a bad TBW anymore. It is of course more likely to fail (unlike the datacentre SSD), but then what do we have the redundancy for?

I actually like the consumer SSDs, especially when they keep failing (within spec): every 2 years a new SSD... The economies of scale have it that most people never reach the declared TBW on those consumer drives, so they never get RMA'd; if mine do, it's alright for everyone. Once I reach e.g. a high % used on a drive, I just "retire it" - make it some OS boot drive somewhere, where it will last forever (or age out).
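For what it's worth, that "retire it" threshold is easy to watch automatically, at least for NVMe drives where smartctl prints a "Percentage Used" line. A minimal sketch (the device list and the threshold are just placeholders):

```python
#!/usr/bin/env python3
# Sketch of a wear check for NVMe SSDs: flag drives past an arbitrary
# "Percentage Used" threshold so they can be demoted to boot-drive duty.
# Device names and the threshold are placeholders; SATA SSDs report wear
# through different SMART attributes, so this parse only fits NVMe output.
import re
import subprocess

RETIRE_AT = 90  # percent used

def percentage_used(dev):
    """Return the NVMe 'Percentage Used' value reported by smartctl, if any."""
    out = subprocess.check_output(["smartctl", "-A", dev], text=True)
    match = re.search(r"Percentage Used:\s+(\d+)%", out)
    return int(match.group(1)) if match else None

for dev in ("/dev/nvme0", "/dev/nvme1"):
    used = percentage_used(dev)
    if used is not None and used >= RETIRE_AT:
        print(f"{dev}: {used}% used - time to retire it to boot-drive duty")
```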
 
I buy those screaming 1U Supermicro pizza boxes people want to get rid of for cheap, put them in some big cases with big slow spinning fans, add a tower cooler and use a quiet ATX PSU. That won't help with proprietary blade servers, but if you look for the right servers (ASRock Rack, Asus, Supermicro) you get proper server mainboards/RAM/CPUs/chipsets in normal uATX/ATX/E-ATX form factors with normal standardised connectors, compatible with all that normal silent gaming stuff. If you really want you could even DIY some RGB ECC RAM (don't ask why I know that... ;) ).
I understand the bays and RAM, but care to share what the PCIe slots hold (if we're still talking homelab)?
10Gbit fibre NICs for faster backups and faster access to my virtualized NAS. Quad-port NICs for my virtualized routers, in addition to the onboard NICs I use for management/corosync. Multiple HBAs for passthrough to the NAS VMs, as you want lots of drives for proper redundancy while still not wasting too much capacity on it. An M.2 carrier card. One GPU per VM that needs hardware-accelerated video encoding (mediaserver) or decoding (Win VM for doing desktop stuff) or in case I want a physical display output (HTPC VM to TV). So right now it's 10 PCIe cards just for running homeserver stuff.
I could think of way more stuff I could do if I had more PCIe slots available, beyond adding more disks or being able to run more VMs at once (some of them share a GPU, so only one can run at a time). Maybe passing through a soundcard for music playback so I don't need to connect another machine to my audio receiver. Maybe an AI accelerator for offloading object detection of CCTVs. Passthrough of a USB controller card, as USB passthrough isn't that reliable nor fast. Or a TV tuner card for DVR. Or a capture card for taking footage of my retro consoles/PCs or digitizing old tapes. Maybe a GPU for remote gaming via Parsec or Moonlight.

Some of that stuff you could also achieve via USB/Thunderbolt. But once you add all those external RAID enclosures, external capture cards, external soundcards, Thunderbolt GPUs, ... the power efficiency or compactness of a MiniPC doesn't really matter anymore if the external stuff is consuming way more power and space than the MiniPC itself. Especially if you also count all the other infrastructure like the UPS, managed switches, multiple routers and WiFi APs and so on. My biggest server for example was idling at 30W before adding PCIe cards and disks. With that added it's idling at 110W. Add 40W for the UPS and network infrastructure. So the server itself is really only consuming 30W, plus 120W for the other stuff it is using, which would also be added to a MiniPC. The MiniPC then would maybe idle at 10W + 120W for all the periphery and network infrastructure. 130W vs 150W doesn't make a big difference, and I prefer the more reliable and better cooled internal hardware.
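Spelled out with the numbers above (the 10W MiniPC idle is just a guess):

```python
# Idle power comparison from the figures above; the 10W MiniPC idle is a guess.
shared = (110 - 30) + 40        # PCIe cards + disks (80W) plus UPS/network gear (40W)
big_server = 30 + shared        # 150W
mini_pc = 10 + shared           # 130W
print(big_server, mini_pc)      # 150 130
```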

Don't get me wrong, if that works for you, none of my business, but even the YouTubers then go around giving tips on how to "turn off their extra capacity" at night because of the electricity bills. Even if I did not care about that, the noise .. and the hacks people end up doing for that - ouch.
With the recent doubling of electricity costs I also changed my homelab a bit. I for example bought an even bigger server, as 1 big server is more power efficient than 2 smaller servers. And I moved some guests around and specialized my servers so not all of them need to run 24/7 anymore. Not running a homeserver when it isn't needed is still the best way to save power.
One of my servers is just for backup purposes and runs my TrueNAS replication target, PBS, Veeam and so on. It only needs to run for a few hours a week to pull the backups, so electricity cost isn't really a problem (average consumption while running is 74W). Then I got a weaker 4U server that is running all the stuff that really needs to run 24/7. This server has an average consumption of 35W. Then there is the powerful 4U server with an average consumption of 118W that is running all the stuff I don't need 24/7, so it only runs on demand when I really need it. When I'm not at home I don't need to access my media server, NAS and so on. And if I decide I want to stream some media or access some files on the go, I can power up and boot the server remotely.
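How the remote power-up happens depends on the machine: with a server board you can simply tell the BMC via ipmitool chassis power on, and otherwise a Wake-on-LAN magic packet does the same job. A minimal sketch of the WoL variant (the MAC address is just a placeholder):

```python
#!/usr/bin/env python3
# Minimal Wake-on-LAN sender; the MAC address below is a placeholder.
# With an IPMI BMC you would instead run something like:
#   ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> chassis power on
import socket

def wake(mac, broadcast="255.255.255.255", port=9):
    """Send a WoL magic packet: 6x 0xFF followed by the MAC repeated 16 times."""
    packet = bytes.fromhex("ff" * 6 + mac.replace(":", "").lower() * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

wake("aa:bb:cc:dd:ee:ff")   # placeholder MAC of the on-demand server
```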
 
Last edited:
I buy those screaming 1U Supermicro pizza boxes people want to get rid of for cheap, put them in some big cases with big slow spinning fans [...]

:oops: How does it work with auto fan-speed regulation and all that?

If you really want you could even DIY some RGB ECC RAM (don't ask why I know that... ;) ).

It's been a power saving measure, I am sure ...

Quad-port NICs for my virtualized routers

Fair enough on this one.

Multiple HBAs [...] M.2 carrier card [...] One GPU per VM [...] right now it's 10 PCIe cards [...] I could think of way more stuff I could [...]

A-ok. o_O

Some of that stuff you could also achieve via USB/Thunderbolt.

Yeah, no, I did not mean that. Except for the thunderbolt-net use case, I am not a fan there. I will just HUMBLY submit that the newer NUCs have an extra B-key M.2 slot. ;)

My biggest server for example was idling at 30W before adding PCIe cards and disks. With that added it's idling at 110W.

Yeah, it's a bit of a different end of the spectrum than the Raspberry Pi clusters ... :)

I prefer the more reliable and better cooled internal hardware.

Without the extras, if it's just about having CPU+RAM+SSD, I would just submit that a cluster is more reliable with ordinary hardware than any single piece of server gear, old or new.

One of my servers is just for backup purposes and runs my TrueNAS replication target, PBS, Veeam and so on. It only needs to run for a few hours a week [...]

I would push these off-site, but in your case it's probably cost prohibitive for the amount of data to store.

Then I got a weaker 4U server that is running all the stuff that really needs to run 24/7. This server has an average consumption of 35W.

Got you, finally! So in the end, the really necessary stuff does not need all the PCIe-s. :)

Thanks for the comprehensive reply, we just have very different use cases, but that's fine, OP has his own mind anyhow.
 
:oops: How does it work with auto fan-speed regulation and all that?
I wrote a fan controller script that uses ipmitool to read the various onboard temperatures and dynamically adjusts the fan speeds via IPMI. So it's now temperature controlled between 30 and 100% PWM for the CPU and 40-80% PWM for the case fans, even if the BMC's firmware only lets you set the fans to a fixed 100%, 75% or 50% PWM.
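Stripped down, the idea looks roughly like this (not my actual script; the sensor name and the raw zone/duty bytes are Supermicro-specific assumptions, so check them against your own board before reusing anything):

```python
#!/usr/bin/env python3
# Rough sketch of a temperature-driven fan controller using ipmitool.
# The raw 0x30 0x70 0x66 duty-cycle command and the 0x30 0x45 fan-mode command
# are Supermicro-specific assumptions; other BMCs use different bytes.
import subprocess
import time

CPU_ZONE, CASE_ZONE = 0x00, 0x01        # fan zone IDs, board-dependent

def read_temp(sensor="CPU Temp"):
    """Read one temperature sensor via ipmitool (output like 'CPU Temp | 54')."""
    out = subprocess.check_output(["ipmitool", "sensor", "reading", sensor], text=True)
    return float(out.split("|")[1].strip())

def set_duty(zone, percent):
    """Set the duty cycle (0-100%) of one fan zone via a raw IPMI command."""
    subprocess.run(["ipmitool", "raw", "0x30", "0x70", "0x66", "0x01",
                    hex(zone), hex(percent)], check=True)

def scale(temp, t_low, t_high, pwm_low, pwm_high):
    """Map a temperature linearly onto a PWM range and clamp it."""
    frac = (temp - t_low) / (t_high - t_low)
    return round(pwm_low + min(max(frac, 0.0), 1.0) * (pwm_high - pwm_low))

# Put the BMC into "Full" fan mode first so it stops overriding manual duty cycles.
subprocess.run(["ipmitool", "raw", "0x30", "0x45", "0x01", "0x01"], check=True)

while True:
    cpu = read_temp("CPU Temp")
    set_duty(CPU_ZONE, scale(cpu, 40, 80, 30, 100))   # 30-100% PWM for the CPU fans
    set_duty(CASE_ZONE, scale(cpu, 40, 80, 40, 80))   # 40-80% PWM for the case fans
    time.sleep(10)
```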
It's been a power saving measure, I am sure ...
If I turn off the RAM's RGB, yes. That saves a few watts :D
But of course I only bought them because of the additional cooling from the heatspreaders... ;)

Fair enough on this one.
Yes, but now there are 4 and 8-port NIC MiniPCs that are quite tempting. But I personally won't buy anything without ECC anymore, after having non-ECC RAM slowly fail on me three times, corrupting hundreds of GBs of data, which I only ever recognized weeks later when the system got unstable...
I want to know right away when my machine is screwing up all my data, not weeks later when it's too late. Luckily I keep backups for months without overwriting them. But it's still annoying if you lose weeks of data, as you won't know which file is healthy and which is corrupted; even ZFS checksumming won't help if the data got corrupted in RAM before writing it to disk... I had to overwrite/restore every file that got modified in the last weeks...

Without the extras, if it's just about having CPU+RAM+SSD, I would just submit that a cluster is more reliable with ordinary hardware than any single piece of server gear, old or new.
Yes, I wouldn't run a single server. All the important stuff is redundant across different servers: OPNsense with HA via pfsync, Piholes via keepalived and gravity-sync, TrueNAS with replication of all data (this is where it gets expensive and power inefficient if you need to buy 18 disks to only get the usable capacity of 3 disks...), PBS with sync jobs, ...
But as downtime isn't that important (it's still a homelab where it doesn't really matter if it's offline for some hours), I could always remotely spin up one of the powered-down servers and restore some daily backups. The server I use just for backups, which is powered down most of the time, is for example quite overkill for what it is doing. The idea was that in case one of the other servers fails I could simply restore a bunch of important VMs there. And its hardware is also compatible with the big on-demand server, so I could always move some PCIe cards, CPU, disks and RAM to the backup server if I need to.

I would push these off-site, but in your case it's probably cost prohibitive for the amount of data to store.
Yes. I would also prefer that, but it's not affordable right now. I've got a third PBS and USB disks offsite for some important data and also a fourth PVE server with a third TrueNAS VM in the basement. Not offsite, and it won't help much in case of lightning, fire or water damage, but at least it's a bit of protection against theft and cats or drunken visitors knocking over the rack ;)

Got you, finally! So in the end, the really necessary stuff does not need all the PCIe-s.
Exactly. That one is stripped down as much as possible to save power. It doesn't need a GPU, it's not a problem if file transfers/backups take a bit longer with only Gbit instead of 10Gbit NICs, and it has two mirrored SSDs for the virtual disks instead of dozens of SSDs and HDDs for cold data, backups, ... that I don't need to access all the time anyway. With fewer disks I also don't need the additional HBAs (which consume 10W each doing nothing), and HDDs that don't run 24/7 will also last longer.
The most important stuff here are webservers, a reverse proxy, Nextcloud for syncing contacts/calendars/bookmarks/todo lists and a password safe, WireGuard for VPN, OPNsense for routing, DokuWiki, Zabbix for monitoring, Graylog for central log collection, Pihole with Unbound as DNS, Home Assistant for smart home, Guacamole + Semaphore + a custom orchestrator VM for management stuff, Firefly for finance management, paperless-ngx for digital document management, Wazuh for SIEM. All stuff that isn't that demanding on a "small" homelab scale, as long as you've got some durable and fast SSDs for the endless sync writes to the DBs.


Thanks for the comprehensive reply, we just have very different use cases, but that's fine, OP has his own mind anyhow.
Yes. There is a big spectrum of homelabbers. For some, it's a hobby: the more they learn and self-host, the bigger the homelab grows and the higher the standards and expectations become. On the other side of the spectrum, some people don't really care and it's just the annoying thing you have to do to get your smart home working. You run some one-liners to install turnkey appliances and forget it for years until PVE stops working because the disk got filled up or the hardware fails.
That is one of the great things about the Proxmox products. You can run proper production clusters that companies rely on, but it is also a great entry point for beginners who simply want to tinker around a bit without spending much.

Not that I wouldn't love to run a Ceph cluster at home... it's just not economically reasonable if you want to seriously make use of it instead of just tinkering around for learning purposes or a proof of concept. And that's from a person who hoarded a rack of servers and is still complaining that he is running out of resources ;) . Then I would need to run 3 or better 5 nodes 24/7. I would need to buy dozens of additional disks, as Ceph wants 3 copies of everything on multiple OSDs per host, with always enough spare capacity to compensate for a failing node. I would want to upgrade my NICs from 10 to 40Gbit, or at least have 10Gbit NICs for every node. I would want to get rid of every single point of failure, so stacked switches, every NIC redundant, ...
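To put rough numbers on that overhead (made-up disk counts, size=3 replication, Ceph's default 85% nearfull warning, and enough headroom to lose one whole node):

```python
# Back-of-the-envelope Ceph capacity estimate with made-up numbers:
# 3 nodes, 4x 1 TB OSDs each, 3-way replication, 85% nearfull threshold,
# and the requirement that everything still fits after one node dies.
nodes, osds_per_node, tb_per_osd = 3, 4, 1.0
size, nearfull = 3, 0.85

raw = nodes * osds_per_node * tb_per_osd
raw_surviving = (nodes - 1) * osds_per_node * tb_per_osd   # capacity left after a node failure
usable = raw_surviving * nearfull / size

print(f"{raw:.0f} TB raw -> roughly {usable:.1f} TB of usable, safe-to-fill space")
# 12 TB raw -> roughly 2.3 TB of usable, safe-to-fill space
```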

Not sure where on this spectrum the OP is, but planning to run a reliable Ceph cluster with usable performance for daily use is aiming at the top end of that scale.

You need weak hardware without a bunch of power hungry enterprise disks, NICs and switches to make it reasonable to run multiple Ceph nodes 24/7 without needing to sell some organs to pay the electricity bills. But on the other hand you really need all that enterprise stuff to get something that won't bottleneck your VMs with slow IO, that you can rely on, and that won't catastrophically fail if you do something stupid or some hardware fails. I personally still don't see how this should work if you want more than something just for learning.
 
Last edited:
  • Like
Reactions: esi_y and UdoB
It's not all about CPU performance. Most of the time my 32-thread Xeon is idling at below 2% load. What I need are the 40 PCIe lanes for the 7 PCIe slots, which are all populated, as well as lots of ECC RAM (all 8 DIMM sockets populated and I'm running out of space again...). The only things I keep running out of here are RAM, PCIe slots and 2.5/3.5" bays. All stuff a MiniPC can't offer, even if that new 4-core mobile CPU is faster than my 16-core Xeon.
I am running just that in a 4U with quiet Noctua fans. I know the hardware is old but it gets the work done; not the most energy efficient, but it works.
 

Attachments: Screenshot 2024-01-02 at 6.32.51 PM.png
