New server build: hardware compatibility

debruijnsteven

New Member
Sep 22, 2023
Hi All,

In our office we have a big, heavy Win10 machine running, which we call our server. There are a few VMs running inside VirtualBox, plus some file sharing, etc.
The hardware is already a bit old, and Win10 has already shown a few blue screens, so before the whole office goes down I would like to replace everything.

My plan is to build a completely new system running on Proxmox.
I've googled a lot, but I find it difficult to figure out whether my selected hardware is compatible with Proxmox.

The setup I selected is as follows:

- Intel Core i7-14700K processor
- ASRock Z790 Pro RS/D4 motherboard
- 4x Kingston 32GB DDR4-3200 KF32C16BBK4/128
- Corsair RM850x power supply
- 2x WD Black SN770 M.2 SSD 500GB (ZFS mirrored, for the Proxmox installation only)
- 2x WD Red SA500 2.5" SSD 2TB (ZFS mirrored, for VM and LXC storage; pool creation sketched below)
- 2x WD Red Plus 3.5" HDD 6TB (ZFS mirrored, for data storage, SMB)
- 2x 4TB HDDs (for old backups; will be taken from the existing server once this one is up and running)
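For completeness, here is roughly how I'd expect to create the two mirrored data pools. Just a sketch: the device names below are placeholders (the real ones live under /dev/disk/by-id/), and the boot mirror on the SN770s would be set up by the Proxmox installer itself rather than by hand.

    # Placeholder device names; list the real ones with: ls -l /dev/disk/by-id/
    zpool create -o ashift=12 vmpool mirror \
        /dev/disk/by-id/ata-WDC_SA500_SERIAL1 /dev/disk/by-id/ata-WDC_SA500_SERIAL2
    zpool create -o ashift=12 datapool mirror \
        /dev/disk/by-id/ata-WDC_RED_PLUS_SERIAL1 /dev/disk/by-id/ata-WDC_RED_PLUS_SERIAL2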

Can you guys help me with this?
Is this hardware compatible? For example the motherboard: I found a list of compatible motherboards, but I believe that list is very outdated?

If any of you have recommendations on this hardware, please let me know!

Thank you in advance!
 
- ASRock Z790 Pro RS/D4 motherboard
- 4x Kingston 32GB DDR4-3200 KF32C16BBK4/128

For a business (i.e. you have some budget), you really should be buying ECC RAM, and a motherboard that'll use it properly.

That RAM is not ECC RAM, and the motherboard itself (see its specifications page) doesn't support ECC anyway.

You'll need to go find something better than those two. I'm sure other people here will have suggestions. :)

- 2x WD Black SN770 M.2 SSD 500GB (ZFS mirrored, for the Proxmox installation only)
- 2x WD Red SA500 2.5" SSD 2TB (ZFS mirrored, for VM and LXC storage)
The SA500 might technically work, but it's very far from optimal. The SN770 just plain isn't suitable for use with ZFS (many problem reports).

If you want to use SATA SSDs with Proxmox, look at the enterprise lines, as they have much higher-endurance flash in them and specialised circuitry to cope with power outages.
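As a quick aside, smartctl (from the smartmontools package) is an easy way to gauge how worn a drive is and what endurance data a given model reports. A rough sketch; the exact attribute names vary by vendor:

    # On Proxmox/Debian: apt install smartmontools
    smartctl -a /dev/sda      # full SMART report for a SATA drive
    smartctl -a /dev/nvme0    # same for an NVMe drive
    # For SATA SSDs look at wear attributes such as Wear_Leveling_Count;
    # for NVMe check the "Percentage Used" field in the health section.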
 
Hello,

Just to add to the previous comment, we highly recommend [1] SSDs with power-loss protection for ZFS (and Ceph). The difference in performance is quite noticeable in virtualization workflows. The WD Red disks are marketed as Enterprise, but some of them do not have power-loss protection, so please double-check.

[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_system_requirements
 
For a business (i.e. you have some budget), you really should be buying ECC RAM, and a motherboard that'll use it properly.

To be honest, until now I did not know what ECC means. I just googled it for a few minutes, and I definitely want to spend some more money to have ECC RAM ;)
So I will need to select another motherboard and memory. I found these:
- ASUS Pro WS W680-ACE motherboard
- Kingston Fury 4x 32GB KF556R36RBK4-128

As far as I can see, this motherboard should be compatible with Proxmox, right?

Hello,

Just to add to the previous comment, we highly recommend [1] SSDs with power-loss protection for ZFS (and Ceph). The difference in performance is quite noticeable in virtualization workflows. The WD Red disks are marketed as Enterprise, but some of them do not have power-loss protection, so please double-check.

[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_system_requirements
So you advise not to have 2 SSDs mirrored?
I wanted this because two years ago we had an SSD that failed, which forced us to reinstall everything. The server was down for 3 full days...
So I figured to install two in a mirror, so that if one fails, the server keeps running.
But perhaps our broken SSD was bad quality, not the correct one for the job? I don't know...
If I select a single decent, suitable M.2 SSD, what are the chances that it fails? Or is the quality these days so good that I don't have to worry about that?

About your advice: is that only for the M.2 SSDs with the Proxmox installation, or also for the 2.5" SSDs which contain the VMs and containers?

Thank you very much for your advice, guys!
 
I definitely want to spend some more money to have ECC RAM
Good move. :)

So you advise not to have 2 SSDs mirrored?
Nah, that's just a simple misunderstanding of the syntax @Maximiliano used.

Where @Maximiliano said this:
we highly recommend [1] SSDs
... the [1] there just means "please refer to link 1 at the bottom of my post". It's a fairly common way that people with academic (and maybe other?) backgrounds write stuff. :)

So, @Maximiliano is really saying "we highly recommend SSDs, please read (this link) for more details". Nothing at all about the quantity of SSDs you should be using. Mirrored is fine. :)
 
So you advise not to have 2 SSDs mirrored?
This is actually a setup that I would recommend: two SSDs mirrored with ZFS.

Enterprise disks have better performance and lower chances of failing. Eventually they will stop working, like any storage solution; the main question is when, and hence we recommend redundancy on multiple levels.

About your advice: is that only for the M.2 SSDs with the Proxmox installation, or also for the 2.5" SSDs which contain the VMs and containers?
I would say both. Note that the OS itself does not require much space; 256GB could be more than enough depending on your needs. I would recommend one (preferably two) ~256GB enterprise SSD over one (or two) consumer-grade 1TB NVMe drives for the OS.
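To illustrate the redundancy point: with a ZFS mirror you can check pool health and swap out a failed disk while everything keeps running. A rough sketch with placeholder pool and disk names:

    zpool status rpool                      # shows both mirror members and any errors
    # If one disk dies, the pool keeps running in a DEGRADED state; swap it with:
    zpool replace rpool <old-disk> <new-disk>
    # For Proxmox boot disks the boot partitions also need restoring on the new
    # disk, e.g. with proxmox-boot-tool; see the admin guide for the exact steps.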
 
ASUS Pro WS W680-ACE motherboard
I tend to steer people away from ASUS products these days, as over the years they've become pretty hit or miss quality-wise + their support became fraudsters. :( :( :(

There was a big media incident about it recently (#1, #2, #3). Although they've promised to make improvements, I reckon people should take a "wait and see" approach to see what happens.

Maybe take a look at the ASRock Rack stuff and see if it'd fit your budget?
They even have barebone systems (i.e. chassis + motherboard + power supply all in one), which could help simplify things.
Keep the noise factor in mind though, as it sounds like your existing system might be a tower PC of some sort that's been called on for "server" duties.

Rack-mount stuff (like the "Barebone" link above) tends to operate in "jet engine" mode most of the time, and isn't suitable for placing in the same room as people who need to concentrate. ;)



A different potential direction you could go is something like the Dell Tower Servers:

https://www.dell.com/en-us/shop/dell-emc-poweredge-tower-servers/sr/servers/poweredge-tower-servers

HPE, Lenovo, etc all have equivalent offerings too, if you have a different preference. :)



If that's all a bit overwhelming, then this is what I'm running as my personal development system (which runs Proxmox):
  • Motherboard ASRock B550M Pro4 (specs)
  • Ryzen 5950X. 16 cores, 32 threads. (specs)
  • 64GB ECC RAM <-- you'd definitely want 128GB rather than 64GB though
    • This motherboard uses unbuffered DDR4 ECC RAM. Stuff like this (that's my regular PC shop) would be fine.
It'd be completely fine as a 24/7 server, and it'll probably become one when I eventually move to a new development system. :)
 
In our office we have a big, heavy Win10 machine running, which we call our server.
I keep meaning to ask... is this a big company with its own network department, or is it more a small business whose network has sort of just grown organically?

Asking because you might want to take this opportunity to look at how the server is connected to your network, so it's not bottlenecked by a 1GbE interface to the switch. Totally optional of course. :)
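If you want to put a number on it, iperf3 between the server and a workstation gives a quick answer. A minimal sketch; the hostname is a placeholder:

    # On the server:
    iperf3 -s
    # On a client machine connected to the same switch:
    iperf3 -c server.example.lan
    # Roughly 940 Mbit/s means the 1GbE link itself is the bottleneck.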
 
Ah okay, so that [1] really did put me on the wrong path then ;)

So if I understand correctly: the way I want to set it up with the mirrored drives is fine, only I need to select enterprise drives with power-loss protection.

About this power-loss protection: our server cabinet is completely powered by a large UPS. Do we still need power-loss protection in that case?

Anyway, I'm already in the mood for spending money, so why not ;) I've selected these:
- 2x Kingston DC1000B 240GB NVMe drive (ZFS mirrored, for the Proxmox OS)
- 2x Kingston DC600M 1.92TB 2.5" SSD (ZFS mirrored, for VM and LXC storage)

Two questions on my mind:
  • I would like to keep the Proxmox installation on separate disks, just to prevent the Proxmox OS from suffering from, for example, any mistake we make in our VMs or containers. Is that even necessary? Or is that just a waste of money, and can we just install the Proxmox OS on the 2.5" SSDs and create separate storage on the same SSDs for the VMs and containers?
  • I could not find any enterprise NVMe drives with power-loss protection on PCI Express 4.0 x4. The DC1000B is PCI Express 3.0 x4. Just to make sure, that is compatible, right?
 
I would like to keep the Proxmox installation on separate disks. [...] Is that even necessary? Or is that just a waste of money
Using a separate set of mirrored drives for the OS is how people used to do things "back in the day", but it doesn't really serve a useful purpose when you're using Proxmox with ZFS for the boot drives. ;)

The way Proxmox does things with ZFS boot drives is that it installs a copy of the Proxmox OS at the start of the chosen drives, then automatically sets up the remaining space for use by VMs and containers (and other stuff too).

So, personally speaking, I'd probably just look for two super-sized NVMe drives (the more storage the better) - definitely with power-loss protection - and forget about the 2.5" ones. That's kinda challenging for those Kingston drives though, as they only go up to 480GB. :(
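If you're curious what that installer layout actually looks like afterwards, a few read-only commands will show it. A sketch; the pool and dataset names below are the installer defaults, so double-check on your own system:

    zpool list     # the installer creates a single pool, named rpool by default
    zfs list       # e.g. rpool/ROOT/pve-1 holds the OS, rpool/data holds guests
    lsblk          # shows the small boot/EFI partitions at the start of each disk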

The DC1000B is PCI Express 3.0 x4. Just to make sure, that is compatible, right?
Yep, PCIe is backwards and forwards compatible, so plugging PCIe 3.0 stuff into a PCIe 4.0 slot will work fine. It'll just run at PCIe 3.0 speed, which is likely plenty fast enough. :)
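If you ever want to verify the negotiated link after installation, lspci can show it. A sketch; the 01:00.0 device address is a placeholder for wherever your NVMe controller shows up:

    # Find the NVMe controller's PCI address:
    lspci | grep -i nvme
    # Compare what the device supports vs what was negotiated:
    lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
    # LnkCap lists the supported speed (8GT/s = PCIe 3.0, 16GT/s = PCIe 4.0);
    # LnkSta shows the speed the link is actually running at.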
 
About this power-loss protection: our server cabinet is completely powered by a large UPS. Do we still need power-loss protection in that case?

For a disk to have power-loss protection (PLP), it needs some very chunky capacitors (among other things). These not only provide PLP but also higher and more stable performance: consumer SSDs can reach similar speeds in short write/read bursts, but then become slower than an HDD once their cache is filled. So yes, you want enterprise disks with PLP even if an actual power loss is not a concern.
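You can see that cache-exhaustion effect yourself with a sustained sequential write in fio. A sketch; the test file path is a placeholder, so point it at scratch space you don't mind filling (and note that direct I/O may be ignored or unsupported on ZFS depending on version, so testing before pool creation is cleanest):

    fio --name=sustained --filename=/mnt/scratch/testfile \
        --rw=write --bs=1M --size=32G --direct=1 \
        --ioengine=libaio --iodepth=8
    # Watch the bandwidth over the run: consumer SSDs typically start fast and
    # then collapse once their cache is exhausted; PLP enterprise drives stay flat.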

I would like to keep the Proxmox installation on separate disks, just to prevent the Proxmox OS from suffering from, for example, any mistake we make in our VMs or containers. Is that even necessary? Or is that just a waste of money, and can we just install the Proxmox OS on the 2.5" SSDs and create separate storage on the same SSDs for the VMs and containers?

Both setups are fine, but I personally find it a bit easier to manage if they are on different disks.
 
Okay, about the hard drives, I think that is totally clear now. Thanks for the advice!


I tend to steer people away from ASUS products these days, as over the years they've become pretty hit or miss quality-wise + their support became fraudsters. :( :( :(

There was a big media incident about it recently (#1, #2, #3). Although they've promised to make improvements, I reckon people should take a "wait and see" approach to see what happens.
Thank you for pointing this out. No ASUS for us, I guess.

The ASRock Rack stuff you showed looks awesome, but I think it's a bit overkill for us ;)
I found this one as an alternative from Supermicro:
https://www.supermicro.com/en/products/motherboard/x13sae-f

Supermicro states here that it supports max 4400 MT/s... the ones I selected are 5600 MT/s... will that be a problem? I couldn't find any DDR5 ECC RAM at 4400 MT/s. What am I missing here?

I keep meaning to ask... is this a big company with its own network department, or is it more a small business whose network has sort of just grown organically?

Asking because you might want to take this opportunity to look at how the server is connected to your network, so it's not bottlenecked by a 1GbE interface to the switch. Totally optional of course. :)
About me and my company: we started 15 years ago with two people and a simple desktop with WinXP as the server. We have grown a bit since then; there are now 12 of us, and we have a semi-pro 19" rack which houses some managed switches, a UPS and our server. We built that server 5 years ago and used Win10 as the base. The only reason for that is that we knew how to do the things we wanted in Windows, and had never used Proxmox or even Linux systems before.
Four years ago I discovered Linux for some business cases, and started to automate my house with a Raspberry Pi. Two years ago I ditched the Raspberry Pi and installed Proxmox on a small PC, with Home Assistant, Frigate, and some other stuff.
I'm still amazed by how stable this works. So now that our server needs replacement, I wanted Proxmox as the base!

So to answer your question: it's a small, growing business, and so is our network. Our existing server has a single 1GbE interface to our main switch. This Supermicro has three network ports, including a 2.5GbE one. So I think that should be fine for us!
 
Supermicro states here that it supports max 4400 MT/s... the ones I selected are 5600 MT/s... will that be a problem?
Faster RAM (i.e. 5600 MT/s) will work fine; it'll just run at the slower 4400 MT/s speed.
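Once the box is built, you can confirm what speed the memory actually trained at; dmidecode reports both the rated and the configured speed. A sketch (needs root, and the field names vary slightly between dmidecode versions):

    dmidecode -t memory | grep -i 'speed'
    # "Speed" is what each module is rated for (e.g. 5600 MT/s);
    # "Configured Memory Speed" is what it is actually running at (e.g. 4400 MT/s).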

Cool, good choice. Supermicro tends to make good stuff, and that particular one seems quite decent. :)

It sounds like you've got a pretty good handle on things now. Hopefully it all works out properly and provides a good foundation for everyone to work on things from. :D
 
Just a remark on the memory, in case someone wants to order this configuration:

The memory I selected above did not fit the motherboard, so I ended up with €500 of memory I couldn't use. Luckily the supplier was willing to take it back.

Anyway, the problem was the ECC type.
The ones I selected were registered ECC, and the motherboard only supports unregistered ECC; basically the difference between RDIMM and UDIMM.

I ended up ordering 4 of these:
Kingston 32GB KSM48E40BD8KM-32HM
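For anyone wanting to double-check what's actually installed in a machine, dmidecode also reports the module type and whether ECC is active. A sketch (needs root):

    dmidecode -t memory | grep -E 'Type Detail|Error Correction'
    # "Type Detail: Synchronous Unbuffered (Unregistered)" = UDIMM;
    # "... Registered (Buffered)" = RDIMM.
    # "Error Correction Type" (from the Physical Memory Array section) shows
    # whether ECC is in use, e.g. "Multi-bit ECC" vs "None".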
 
Ahhh. Unbuffered ECC is what my Ryzen desktop/workstation uses as well. Yeah, that's different to Registered ECC.

That's an unfortunate learning experience ("oh shit!" (etc)). Not a mistake you'll probably ever repeat though. ;)
(I have my own collection of likewise "oh shit!" moments too, you're not alone in the slightest.)

Good thing the supplier was willing to take them back. Sounds like one of the better suppliers. :)
 
