Hardware buying advice needed

ProfessionalN00b

New Member · May 21, 2025 · Estonia
Hello

I've run a single tower-ish server (built via PCPartPicker) for many years and I want to get rid of it. It's sitting on top of my rack in a closet, and I don't like it. I live in an apartment.
I want two 1U servers, or capable mini PCs that fit into 1U. Used is fine.

So far I've looked here: https://www.bargainhardware.co.uk (they sell refurbished equipment).
And I also briefly looked at the Minisforum MS-A2.
Briefly, because a single MS-A2 with 96GB of memory (the max) and NVMe sticks would cost a few hundred euros more than an actual used server with better specs from the other site.

The reason I made this post is that the chance is all too real that I'll go way overboard with this project.

With my initial new setup I thought I'd have just the OS disk and run VMs on a Synology iSCSI "share" (one per host); now I think I'd have 2x2TB local SSDs/NVMe drives instead and back up the VMs/containers to the Synology.
I just finished watching a Ceph tutorial, and it looks like local disks are the correct way to go: local disks in a Ceph pool, and backups to the NAS.
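From the docs, the Proxmox side of that looks roughly like this; the subnet, disk path, and pool name are just examples:

Code:
pveceph install                        # install the Ceph packages (run on each node)
pveceph init --network 10.10.10.0/24   # dedicated Ceph network
pveceph mon create                     # a monitor per node
pveceph osd create /dev/nvme1n1        # turn a local NVMe into an OSD
pveceph pool create vmpool             # pool the VM disks will live on

Backups would then just be scheduled vzdump jobs pointing at the NAS storage.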

Keep in mind that I started with Proxmox about two weeks ago, I think. So it's all new to me.
I like "real" 1U servers, but I fear they will be too loud and consume too much power.

Key things I'm looking for:
  • Plenty of overhead for expansion
  • Relatively quiet (my Synology RS3618xs makes ~50dB according to its specs, and that's too loud)
  • Energy-efficient (running the Syno NAS 24h a day for a whole month vs. shutting it down at night makes a 40-50€ difference on the bill)
  • TPM
  • 10+Gb network support, either built-in or via an expansion card, plus one extra NIC for normal usage. I would want a high-speed connection between the hosts and would connect to the hosts themselves via a Gb NIC, if that makes sense; if not, I'm willing to make changes. (Rough sketch below.)
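For the host-to-host link I'm picturing a direct DAC connection with static addresses, something like this in /etc/network/interfaces on the first node (interface name and subnet are placeholders):

Code:
auto enp1s0f0
iface enp1s0f0 inet static
        address 10.10.10.1/24
        # the second node gets 10.10.10.2/24; no gateway needed on a point-to-point link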
My goal is to create a two-node cluster with a QDevice for quorum, so I can easily migrate compute resources between hosts whenever I need to reboot one for maintenance.

I currently have 3 Windows VMs and a few Linux VMs. I'm going to add a ton of containers, and the setup should be able to easily handle a few more Windows VMs.
 
Go for something like the Minisforum; the power consumption of older high-end hardware will soon outweigh the lower purchase price.
As for the NVMe drives: do not use consumer-grade ones to save money: https://forum.proxmox.com/posts/775082/
10 GbE networking takes ~2 watts per port, i.e. one link will take 4 watts. If the machines are not too far apart, go for SFP+ and DAC cables; they use almost nothing. Also, 10 GbE transceivers get really hot when you connect them to a switch's SFP+ slots.
 
How is Proxmox's support for DACs?
I wouldn't mind using them at all. The machines would be next to each other: either stacked, if full 1U servers, or side by side on a 1U shelf, if I go with mini PCs.

I read somewhere on these forums that disabling swap would significantly reduce the wear on the boot device? Someone was asking if they could use a USB stick as boot media for Proxmox.
Enterprise disks cost a lot more, too much more. And where I live, I mostly see consumer SSDs with DRAM caches and only a few NVMe models for sale.
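(For what it's worth, what I read about seems to be just this, if I understood it right:)

Code:
swapoff -a    # turn off all active swap immediately
# then remove or comment out the swap entry in /etc/fstab so it stays off after a reboot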
 
DACs usually have better compatibility than any transceiver, and if there is an issue at all, it is a hardware/driver issue, not a Proxmox one.
And do yourself a favor and refrain from consumer SSDs, especially QLC ones; they wear out at an amazing rate. You will want to use Ceph or ZFS, which are copy-on-write file systems, so swap will be the least of your problems.
If you follow my link, you will get an idea of what to buy (e.g. a Samsung 980 Pro, not a 990 Pro).
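You can check the wear yourself; for an NVMe drive, something like this shows the vendor's own wear estimate (device name is an example):

Code:
smartctl -a /dev/nvme0     # from the smartmontools package
# look for "Percentage Used" in the NVMe health section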
 
I don't think I understand. In the other thread, you say
The older Samsung 980 Pro had a RAM cache and was by far the fastest drive; the newer 990 Pro is slower.
But the 990 Pro also has a cache, at least according to Samsung's specs. I'm looking at the 2TB 980 Pro and 990 Pro specs, and feature-wise at least, they look identical.
 
The actual measured results show their claims to be wrong, or at least misleading. The 980 Pro is faster, period. That may be different when you compare linear speed at huge block sizes, because of the higher PCIe standard, but for this specific test the results are clear.

There is a reason the older 980 Pro model is still available and costs more than the 990 Pro. The same holds across different generations of WD Black drives.
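The kind of benchmark that exposes this is a small-block sync-write fio run; the following is only an example along those lines, not the exact test from the linked thread:

Code:
fio --name=synctest --filename=/mnt/test/fio.dat --size=4G \
    --rw=write --bs=4k --sync=1 --direct=1 \
    --iodepth=1 --numjobs=1 --runtime=60 --time_based
# consumer drives typically collapse to very low IOPS on sync writes like these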
 
Anyway... here are two builds I've come up with: one 1U server and one Minisforum setup.

Here's the build of the 1U server. It includes memory, two enterprise SSDs, and a 10Gb SFP+ NIC I've heard only good things about. There are Host Bus Adaptors and Host Channel Adaptors; since I can't tell the difference, I sent an email asking for clarification. As I understand it, a DAC would be preferable to an SFP+ transceiver.
I chose a CPU with 10+ cores and the lowest TDP.
The build doesn't include an OS disk, but that doesn't matter for the comparison.
(Multiply the cost of this setup by 2. The Minisforum table already includes two units plus the additional hardware for each.)
https://www.bargainhardware.co.uk/r...system-sr630-7x02-configure-to-order?c_b=9336

And here's the parts list for the Minisforum build:
Amount | Item                                        | €/piece | Total
2      | Minisforum MS-A2                            | 689€    | 1366€
2      | Crucial DDR5 96GB (2x48GB) 5600 CL46 SODIMM | 196,5€  | 393€
4      | Samsung 980 PRO 2TB M.2 (VMs, containers)   | 180€    | 720€
2      | Intel 600p 256GB M.2 2280 (OS, images)      | 156,14€ | 312,28€
       |                                             |         | 2791,28€
This one costs almost 1k more, so I'm inclined to go with the Lenovo.

I know I'm comparing used hardware to new, but it's not like I have an abundance of options to choose from :p.

Edit:
Just noticed that the Minisforum PC doesn't have TPM.
 
You actually do not need the OS disks. Also, do you really need a cluster? If you really need high availability, you would need at least 3 nodes for quorum, and there usually is no really smooth failover anyway. Given the higher risk of messing up an HA setup, I never opted for one.

If you have specific needs for a setup that you want to build, OK, but for just a nice homelab I would save half of that cost.
 
The cluster can be done with 2 nodes and a QDevice, which is what I'm going to use; a third full node is too much for me.
I want a two-node setup because with the single-host Hyper-V solution I had before, all of my infrastructure went down whenever I had to install updates on the host and restart it.
I know the reboot requirement on Linux is nowhere near as severe, but I still don't want my stuff to go down, nor do I want to waste time gracefully shutting everything down.

And like pretty much everything else in my setup, I use it to learn new things, including clustering and networking in this instance.

So, ideally, with a two-node setup with shared storage, I should be able to easily push VMs from one node to the other and do a staggered restart of the hosts.
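If I understand the docs correctly, the QDevice part is only a few commands once the two-node cluster exists (the IP is a placeholder):

Code:
apt install corosync-qnetd         # on the external QDevice host
apt install corosync-qdevice       # on both cluster nodes
pvecm qdevice setup 192.168.1.50   # run once on one node, pointing at the QDevice host
pvecm status                       # should now show 3 expected votes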
 
There are two routes you can take here:

1. If you strictly have a "no downtime" requirement, then sure, do it like that. That being said, Proxmox updates are usually applied live, and a reboot is only needed when a kernel update is due, and even then only if you really need it for security reasons. Until then, you can install any update without rebooting; the running kernel simply stays the same until the next reboot.
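A quick way to see whether a reboot is actually pending is to compare the running kernel with the newest one on disk, e.g.:

Code:
uname -r            # kernel you are currently running
ls /boot/vmlinuz-*  # kernels installed on disk
# if a newer version is installed than what uname reports, the next reboot will switch to it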

2. If you purely want the learning effect of HA and clustering, I would recommend buying 4 TB SSDs and putting the Proxmox nodes on them as VMs.
You can do all the learning you want with a virtualized Proxmox cluster, which works just the same as a hardware cluster, at a fraction of the cost.
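For that to work well, nested virtualization must be enabled on the physical host and the virtual nodes should use the host CPU type; roughly (Intel shown, kvm_amd is analogous, VM ID 100 is a placeholder):

Code:
cat /sys/module/kvm_intel/parameters/nested   # should print Y (or 1)
qm set 100 --cpu host                         # give the nested node the full host CPU features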

Been there, done that.

If you really decide you need that cluster physically later on, you can still buy a second system.

And as for the used hardware: just calculate the break-even point at your local electricity rate. BTW: running only one physical machine also halves that cost...
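A back-of-the-envelope version with made-up numbers, say 60 W of extra idle draw for the used server and 0.25 €/kWh:

Code:
0.060 kW x 24 h x 30 days         = 43.2 kWh/month
43.2 kWh x 0.25 €/kWh             ≈ 10.80 €/month extra
1000 € price difference / 10.80 € ≈ 92 months (~7.7 years) to break even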

What a fine example of the applicability of Occam's razor.
 
Stay away from Minisforum, biggest pile of hype out there.

From the sound of things you are in the UK, so energy costs are indeed a factor, especially considering the UK keeps raising its rates, like over here in Europe.

Also, it doesn't sound like you are after a number-crunching machine for AI/ML work, so that makes things a little easier to price comfortably.

I would recommend grabbing yourself a couple of Xeon E-2236 based computers: used Dell/HP workstations with ECC RAM, cheap and cheerfully quiet, with plenty of room inside the towers for upgrades.

Keep it simple.

* Don't go micro-tiny, because upgrading kicks you in the arse; you can't upgrade beyond what fits in those tiny things.
* Don't go old-hat V3/V4-gen tech; it sucks juice. The V4s can be run with less power demand, but still not as well as the E-21xx/E-22xx variety.

Also, the beauty of those workstations is that they actually sell very quickly, so an upgrade is as easy as selling a tower and buying a newer model when the time comes.

If noise isn't an issue, then indeed you can go rack, but prices start rising rapidly for anything newer than first-gen Scalable or EPYC Rome, etc. Then again, noise can also be managed with thought and extra work, but that means a greater investment in sourcing quieter fans and tweaking till you go green in the arse.

My 2 pence: save yourself the hassle, buy 2 workstation towers dirt-cheap, hide them away under the stairs or behind the couch, job done.

Good luck!

Oh, just a note: a friend has 4 towers behind his couch in his lounge. Come winter, the room is nicely warm without any extra heating bills; on the flip side, in the summer I'm guessing that lounge is a wonderful sauna, lol.
 
1U rack units will be loud, because you have a dozen or more 1" fans that have to remove potentially 2-3 kW worth of heat. There are loads of Intel NUCs and similar-sized computers that work perfectly well; print a rack mount if you need it. Even a 6th-gen i7 is more than enough, and people are offloading them because of the Windows 11 requirements. Use some decent SSDs; SATA would suffice for most home-office workloads. Don't use Synology: bad hardware, bad company, noisy garbage. Just share an SSD over the network; you can probably get a sufficient amount of storage in one SSD these days, or build a little NAS from an external USB or SAS/SATA enclosure.
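"Share an SSD over the network" can be as simple as an NFS export that Proxmox then mounts as storage; a minimal sketch, with made-up paths and addresses:

Code:
# on the machine holding the SSD (with nfs-kernel-server installed):
echo '/srv/ssd 192.168.1.0/24(rw,no_subtree_check)' >> /etc/exports
exportfs -ra
# on the Proxmox node:
pvesm add nfs ssd-share --path /mnt/pve/ssd-share --server 192.168.1.10 --export /srv/ssd --content images,backup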
 
I know; I bought a Minisforum unit a few years ago, the whole experience was riddled with problems, and the support sucked. TL;DR: I returned it. But the MS-A2 got so much hype on YouTube that I thought maybe I'd give them another chance; although, seeing the price of the complete system, I don't think so.

I'm from the EU; this UK shop is the closest one I know of that offers enterprise hardware at pretty nice prices, it seems.

My workload is a bunch of Windows VMs with Microsoft services and a buttload of Linux-based containers, and maybe VMs, for daily services and stuff I want to learn.
Due to the recent hostile decisions Synology has made toward home users, I'm going to move my services off it and into containers that I maintain. So the whole thing doesn't have to be super fast, but it can't be a potato either. I've grown to have a very low tolerance for slow stuff :p.
I know I have to make compromises somewhere.

Noise is absolutely an issue, more so than power draw. My Synology RS3618xs is in a closet in my apartment, and I can hear it in other rooms. Due to the RAID level I have chosen, it doesn't let me turn the fan speed down from Full-Speed Mode.
I'm waiting for Unifi to add multi-pool and iSCSI support to their NAS; I'm considering buying one and keeping the loud Synology box as cold storage that wakes up occasionally for backup archival purposes.

I don't like towers; I'm looking to get rid of mine. I have a rack, and I want the new servers in it, not on or around it.
 
If noise is an issue, and you have no plans to upgrade (as in, you'll just shove in a single M.2 and a SATA drive) and are happy with 1G, or even 2.5G via USB3, for networking, then the HP ProDesk/EliteDesk 8th-gen micros will work just fine. They'll set you back about a hundred-odd euros/pounds each, and you can upgrade each to 64GB of RAM for more guests/containers. Rack-mounting-wise, you can buy one of those dual micro mounting brackets from eBay that holds 2 micros side by side, and the job is done.

If you plan to upgrade with more directly attached drives and/or want 10G-plus networking, and you want it all in a rack and quiet, then you're going to have to custom-build. For example, you can buy a Supermicro 2U chassis; if you stick with Supermicro motherboards, then using IPMI scripts you can control the fans based on system temps read via sensors, all wrapped up in a task running under crontab. The Supermicro chassis you'd want supports ATX motherboards, so you can shop around for bargains on consumer boards/CPU/RAM and build a couple up yourself if you don't want an enterprise motherboard. But controlling fans from Linux isn't easy with consumer boards, because of the various non-standard methods consumer board vendors use for fan control. Indeed, it can all be done fairly cheaply, but time is a killer: tweaking time, making the damn things as silent as you need yet ventilated enough that you don't overheat something. Just make sure you get the chassis with 'SQ'-designated power supplies (super-quiet, I believe it stands for).
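The IPMI bit boils down to Supermicro's community-documented raw fan commands; they differ between board generations (these are the well-known X9/X10/X11 ones), so treat this purely as a sketch and test carefully:

Code:
ipmitool raw 0x30 0x45 0x01 0x01             # fan mode = Full, a prerequisite for manual duty control
ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x32   # zone 0 duty to ~50% (0x32 = 50 decimal)
# a crontab task can read temps via `sensors` and pick the duty value accordingly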

Now, I'm not sure about all the different workstation towers, but some of them can actually be laid flat, so in theory, if you really need peace and quiet and the least amount of headaches, you could source workstation towers that fit on a rack shelf, typically 4U in height. Just throwing that out there as an idea, because these vendor-made enterprise workstations really are the bee's knees in power efficiency and noise management; they are specifically designed with those goals in mind.

Whatever you decide, I'm sure it will be exactly what you need. But when it comes to noise, that's a major thorn, and I, for example, count 'man-hours tweaking' as part of the overall cost of a new build. Personally, I would take the simplest, least-headache route and just make it 'fit'.

If noise is a priority, stay away from enterprise vendor servers; they will never idle below 45 dB unless you live in the Arctic.
 