Supermicro SuperServer Components

SoCalPilot

New Member
Sep 13, 2022
Hello, first post on the forum, but I'm looking forward to reading and learning as much as possible. I'm looking to invest around $10,000 - $14,000 in my first home server, and I wanted to go a bit overboard on the specs to future-proof it for the next 5-10 years if possible. This is not going to be a mission-critical server used in a business environment, but we really want something badass that's still going to be badass in a few years. Our server usage will be media storage (100TB+), streaming, transcoding (4K Plex), and running various scripts and dev test environments in Docker, and of course this continues to expand each month.

I've been looking into the Supermicro SuperServer 620P-TRT based on some YouTube videos I've watched about Proxmox, and I've been playing around with some pricing configurators such as ITCreations and ServerSimply. I've come up with a configuration that I thought I'd share with the forum first to ask for some feedback or guidance in case I've made some beginner mistakes.

Processors: 2 x Intel® Xeon® Silver 4316 Scalable Processor, 20 Core, 2.30GHz, 30MB L3 Cache, TDP 150W
RAM Memory: 2 x 64GB PC4-25600 DDR4-3200 Registered ECC Memory Module
M.2 SSD Storage: 1.00TB M.2 NVMe SSD Read 7000 MB/s Write 5000 MB/s Client PCIe 4.0 x4 ( 2280 ) Samsung 980 PRO
Optical Drive: Super Multi DVD+/-RW with M-DISC (SATA) LG 24 x
Hot Swap HD: 8 x 20TB Seagate Enterprise Exos X20 Helium 7.2k RPM 3.5 inch 512e SATA Hard Drive
VROC Module: Intel VROC Standard Hardware Key supports NVMe RAID 0/1/10
Chipset: SATA3 ( 6Gbps) RAID 0/1/10/5 (Intel RSTe), NVMe (16GT/s) RAID 0/1/5/10 ( Intel C621A )
LAN: 2 x 10GbE port(s) via Intel® X550 Ethernet Controller + 1 x 1GbE dedicated BMC LAN port
OS: Ubuntu Linux 21.10 LTS Server Edition (No Media) (Community Support) (64-bit)
Power Supply: 2 x Redundant 80 Plus Platinum Power Supplies with PMBus 1200W
Mounting Rails: Rail set included (MCP-290-00053-0N)
Warranty: 5 Year NBD Warranty Parts Replacement

The thought here was to run Proxmox on the 1TB NVMe with the blazing fast speeds (7000 MB/s read / 5000 MB/s write) and then run RAID 5 for the 8 x 20TB Exos drives; however, it sounds like that NVMe drive would be my single point of failure. Also, for best performance should I be adding another 1-2 SSDs in RAID 1 for caching? Or should I consider something like 2 x Intel SSD D3-S4610 1.92TB in RAID 1 for the main OS, then use the 1TB Samsung 980 PRO as a caching drive and RAID 5 for the 8 x 20TB Exos storage drives? I just want to ensure I'm not making a major mistake on the component selection and drive planning for optimal performance & reliability.

Looking forward to hearing your input...
 
Some thoughts:

RAM Memory: 2 x 64GB PC4-25600 DDR4-3200 Registered ECC Memory Module

Single channel is never good. The CPUs have 8 memory channels each, so 16 memory modules overall would be best. With only 2, you have the worst case.
Also, the CPUs only support a memory speed of max. 2666 MHz per specification. You want to check if the mainboard lets you set it to 3200 MHz, or what settings the BIOS/UEFI provides regarding the memory. Otherwise it can become a pain in the butt and you might end up with memory running at e.g. only 2133 MHz.
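
A quick way to check this later, once the system is running (rough sketch, assuming a Linux host with root; the exact field names depend on the dmidecode version, and the speeds shown are just example values):

  # Rated speed of the DIMMs vs. the speed the BIOS actually configured them to
  dmidecode -t memory | grep -i speed
  #   Speed: 3200 MT/s                     <- what the module is rated for
  #   Configured Memory Speed: 2666 MT/s   <- what it actually runs at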

M.2 SSD Storage: 1.00TB M.2 NVMe SSD Read 7000 MB/s Write 5000 MB/s Client PCIe 4.0 x4 ( 2280 ) Samsung 980 PRO

Forget those consumer SSDs! Not for VM storage, and not for caching. I would not go with them as system disks on such a system either. Go for enterprise SSDs with PLP (power-loss protection).
And of course, definitely go with at least a RAID 1!
Whether you need caching SSDs depends on the use case/workload and what kind of RAID you plan to use (see the questions below), I would say.

Hot Swap HD: 8 x 20TB Seagate Enterprise Exos X20 Helium 7.2k RPM 3.5 inch 512e SATA Hard Drive

With that amount of capacity, it is generally recommended to go with a RAID 6.

Sidenote:
1TB NVMe with the blazing fast speeds (7000 MB/s read / 5000 MB/s write)

Blazing fast can become slow as a snail with ZFS on consumer SSDs, for example. :p

  • How do you want to realize the different RAIDs?
  • How have you planned to make your data, especially on the HDD RAID, accessible to the different services/VMs?
 
Thanks Neobin for the insightful response, which really helps a beginner like me make smarter choices as I continue to learn. Taking your advice into consideration, I've made some tweaks to my proposed server build spec sheet that should hopefully address all the concerns you pointed out.
Single channel is never good. The CPUs have 8 memory channels each, so 16 memory modules overall would be best. With only 2, you have the worst case.
Also, the CPUs only support a memory speed of max. 2666 MHz per specification. You want to check if the mainboard lets you set it to 3200 MHz, or what settings the BIOS/UEFI provides regarding the memory. Otherwise it can become a pain in the butt and you might end up with memory running at e.g. only 2133 MHz.
You're definitely right on this point; it looks like the Xeon Gold 6326 is the cheapest Xeon that works with 3200 MHz RAM. I also went with 8 x 8GB RAM modules instead of the 2 x 64GB originally specified.

Go for enterprise SSDs with PLP (power-loss protection).
And of course, definitely go with at least a RAID 1!
I looked into this further, and I think I've picked out a good combination for my boot drive that has PLP: two of the 1.6TB Micron 7400 MAX Series PCIe 4.0 SSDs, which I would configure in the BIOS as RAID 1.

With that amount of capacity, it is generally recommended to go with a RAID 6.
I agree and adjusted accordingly.

Blazing fast can become slow as a snail with ZFS on consumer SSDs, for example. :p
I understand (somewhat) what you mean by this. I've taken the Samsung 980 PRO out of my spec sheet and tried to come up with some better alternatives.


So this is where I'm currently at with my desired build specs. The only downside is that it's quoting me $17,800, which is a wee bit more than I wanted to spend, but if this is really going to be a kickass server that's going to last 5+ years, then that price point is not fully "off the table".

Supermicro SuperServer 620P-TRT
Intel Xeon Gold 6326 Processor 16-Core 2.9GHz 24MB Cache
(Qty 16) 8GB PC4-25600 3200MHz DDR4 ECC RDIMM
(Qty 2) 1.6TB Micron 7400 MAX Series PCIe 4.0 SSD (RAID 1 - Boot Drive)
MegaRAID 9580-8i8e RAID Controller - 8GB Cache
CacheVault Flash Cache Protection Module for 9580
Qty 8 -- 20TB Seagate Exos X20 Series SAS 3.0 12Gb/s Drives (RAID 6 - Volume 2)
Dual 10-Gigabit Ethernet
1200W 1+1 Redundant
Supermicro Trusted Platform Module - TPM 2.0
5 Year Advanced Parts Replacement Warranty and NBD Onsite Service
ThinkMate Quote: $17,278

Another almost identical quote (different boot drive + 2nd RAID controller)
Supermicro SuperServer 620P-TRT
2 x Intel Xeon Gold 6326 Processor
16 x 8GB DDR4-3200 ECC REG
BCM 3408 NVMe RAID Controller
2 x 1TB Intel SSD DC P4511
BCM 3908 SAS 12Gb/s RAID Controller
CacheVault Kit
8 x 20TB Seagate Exos X20
Intel 10Gb/s ETH X550 (2x RJ45)
Vertical TPM 2.0
Cable management arm
5 year Express Warranty
ServerSimply Quote: $13,876

So for drive planning, I was thinking the two Micron 7400 MAX SSDs would be set in the BIOS to RAID 1, and I would create two volumes on them: one solely for Proxmox and another to run all the VMs. Then I would use the MegaRAID controller to set up RAID 6 on the eight Seagate Exos 20TB SAS drives and create 2-3 different partitions there for various high-capacity media storage needs. From there, when I create a new VM in Proxmox, I would be able to create the VM on the SSD and then map one of the huge volumes from the RAID 6 pool into that VM for easy access? Again, I wholly admit that I'm a newbie to this part and maybe I'm missing something really important to consider.


  • How do you want to realize the different RAIDs?
  • How have you planned to make your data, especially on the HDD RAID, accessible to the different services/VMs?
The shitty part to admit is that I don't fully know how to answer this, and that sorta scares me. I hope/think/pray that the drive planning paragraph above sorta answered these two questions and that I was close to the correct, or shall I say proper, methodology. However, I'm totally open to advice from someone with experience; I don't want to spend $15-20K on something that ends up running like crap.
 
Wouldn't it be better to just get some old second-hand server for testing? You can get complete, working servers with, for example, 24 cores + 128GB RAM for $300-400. That's probably fast/big enough for everything you want to do and great for testing stuff and getting experience. You can always buy a new server later, and by then you may really know what you need and want. Your first server setup will suck anyway, because you always do so much stuff wrong that you will have to set everything up from scratch again and again, getting better each time.
So it wouldn't hurt to get some knowledge with cheap old hardware first.
 
Wouldn't it be better to just get some old second-hand server for testing? You can get complete, working servers with, for example, 24 cores + 128GB RAM for $300-400. That's probably fast/big enough for everything you want to do and great for testing stuff and getting experience. You can always buy a new server later, and by then you may really know what you need and want. Your first server setup will suck anyway, because you always do so much stuff wrong that you will have to set everything up from scratch again and again, getting better each time.
So it wouldn't hurt to get some knowledge with cheap old hardware first.

I definitely understand that it's best to start on a cheaper used server, learn the ropes, and see what really matters the most before spending an ungodly high amount on a brand new "home server". Right now I'm coming from a QNAP TVS-671 running RAID 5 with 50TB and a bunch of Docker containers; I'm at 85% usage and the i3 CPU is really struggling, so the biggest thing that sent me down this journey was the need for more storage. I figured 100TB+ was my next step up. Looking at a few preconfigured solutions like 45Drives or even higher-end QNAPs, they were all in the $25-60K price range and seemed to have much lower specs than some of the specs listed above for $13-17K. My fear is spending say $2-3K on a used system that doesn't have enough storage, and while yeah, I'd probably learn some valuable skills with server admin and Linux, I'd still be sorta running overextended and out of storage and thus still need that upgrade. I guess maybe I need to see if eBay has some large storage servers that I can possibly pick up for a deal; that's worth checking out and I didn't think of that before (I've only looked at used barebone setups).
 
Since this is such a big topic with so many factors and ways you can do it, I will just throw in my loose thoughts and opinions on how I personally would possibly do it. So, some brainstorming. ;)

  • Do/Will you ever really need two CPUs?
  • I would not go with such small memory sticks. If you go with 16 modules, like in your configuration overview, it is not expandable without (partly?) replacing the existing ones. And 128 GB, especially on such a system, is nothing!
  • I would not go with hardware RAID nowadays. I would go with ZFS instead, but it loves memory, needs an HBA (in IT mode) instead of a RAID controller, and you currently can not expand an existing RAIDZ2 (RAID 6 equivalent), although that is being worked on. With ZFS you can also easily add mirrored SSDs as a metadata (and small-files) cache, a so-called special device, to speed up your HDD pool (see the pool sketch after this list). But be aware: if the special device is gone, all your data on the HDD pool is also gone/useless!
  • That leads to the next question: Do you really need SSD caching for your Linux-ISO collection on the HDD pool? (Mostly cold data? Big files, sequential reads/writes.)
  • I am missing a GPU in your listing for transcoding. Be aware: from what I can see, only low-profile cards fit in your chosen server.
  • You can not simply map your storage from the host into a VM. The only thing that makes sense here is to use network shares. If the storage is on the host, you will have to install the Samba/NFS server on the host (or in an LXC and use bind-mounts, but that is even more annoying) and manage it all over the CLI (see the share sketch after this list). Not that elegant.
  • What about backups???
  • What about a sufficiently strong UPS??
  • What about energy costs? Not a problem for you, or in your country?
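
To make the ZFS bullet a bit more concrete, a rough sketch of such a pool (pool name, special_small_blocks value and device names are only examples; in practice you would use /dev/disk/by-id paths, not sdX):

  # 8 x 20TB in a RAIDZ2 (any two drives may fail) plus a mirrored special
  # vdev on two enterprise SSDs for metadata and small files.
  # If the special mirror dies, the whole pool is gone!
  zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh \
      special mirror nvme0n1 nvme1n1
  zfs set special_small_blocks=64K tank   # blocks <= 64K also land on the SSDs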

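And for the "how do the services/VMs get at the HDD storage" question, the two usual routes as a sketch (container ID, dataset and paths are only placeholders):

  # Option A: bind-mount a host dataset into an LXC container (here CT 101)
  # and run the Samba/NFS server inside that container.
  pct set 101 -mp0 /tank/media,mp=/mnt/media
  # (an unprivileged container additionally needs a UID/GID mapping,
  #  otherwise it can not write to the files)

  # Option B: run the Samba/NFS server directly on the PVE host and mount the
  # share inside the VMs like any other network share.
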
Let's assume you want to go with the dual-socket system; here is what I personally would do/go for:
  • Go with at least 8 memory modules and at least 256 GB overall. At least 32 GB modules. (Depending on expansion needs/wishes in the future.)
  • Go with ZFS for all drives.
  • Go with at least 2 decent and not-too-small enterprise NVMes in a ZFS RAID 1 (mirror) for PVE and as VM storage. (Easily expandable later on.)
  • Go with a decent 8-port HBA with your HDDs on it and PCIe-passthrough it to a TrueNAS VM with a good amount of memory, and use the drives in a ZFS RAIDZ2 (see the passthrough sketch below).
  • Go with a decent GPU that fits in the server and PCIe-passthrough it to a Plex/Emby/Jellyfin/whatever VM (also in the sketch below).
  • Maybe get reasonable scratch disks for Linux-ISO downloading/torrenting. (Depends on whether you need them, and if so, you have to think about the interface type, in which system/VM to mount them, and therefore possibly how to pass them through to a VM...)
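
As a sketch of what the two passthrough bullets look like on the PVE side (VM IDs and PCI addresses are made up; IOMMU/VT-d has to be enabled in the BIOS and on the kernel command line first, and pcie=1 needs the q35 machine type):

  # Find the PCI addresses of the HBA and the GPU
  lspci -nn | grep -Ei 'sas|vga|nvidia'

  # Pass the HBA to the TrueNAS VM (ID 100) and the GPU to the media VM (ID 110)
  qm set 100 -hostpci0 0000:41:00.0
  qm set 110 -hostpci0 0000:81:00.0,pcie=1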

Disclaimer: I have not intensively checked/verified that all of the above is easily possible with your chosen server!

All the questions above are mostly meant for you to ask yourself! ;)

I am sure I have forgotten something...

Keep in mind: Such a setup is absolutely not comparable to your current QNAP NAS; not performance-wise (obviously :cool:), but also not maintenance-wise! There will be no easy click-through setup assistant where you then forget the whole thing for the next half year until you log in again, perhaps click the update button, wait 10 minutes, and forget it again for the next half year. :D I mean, it could be (partly) like that if you never update it and never touch it once it works; but who does not want to play with it, right? :cool:

Your first server setup will suck anyway, because you always do so much stuff wrong that you will have to set everything up from scratch again and again, getting better each time.

Absolutely, definitely, so true! :eek:
I myself spent several thousand euros, only to realize after a short time that "half of it" is not how I would like to have it now. o_O :confused:
 
