Hardware Feedback - Homelab single node

war_horse

New Member
Aug 10, 2024
I’ve been successfully running Proxmox on a Dell Optiplex 9020 paired with a Raspberry Pi as an NFS server for over three years. Now, I’m at a point where I want to upgrade my current setup to improve performance.

To provide more context, I’ve outlined my primary goals for this new build, hoping this will assist those who kindly offer feedback:

1. Energy Efficiency and Low Noise: Since this is a homelab, the server will be housed in a rack located in my studio.
2. Plex 4K Transcoding: I currently need support for no more than a couple of streams simultaneously.
3. Managing About 20 LXC Containers: I plan to install applications I use daily, primarily the ARR stack, TrueNAS for data management, and Paperless-ngx.

I’d prefer to keep the components I already own to minimize the total cost, but I’m open to making changes if necessary. Below is a list of the hardware:

To Purchase:

• CPU: Intel Core i5-12400 6 Core 2.5GHz
• MOBO: ASUS TUF Gaming B660M-PLUS WIFI D4
• CPU Cooler: NH-U9S
• 2x WD_BLACK SN770 1TB M.2 2280 (OS + VMs)
• 3x 4TB 3.5 HDD 7200 NAS (likely WD RED) for TrueNAS RAIDZ1

Owned:

• Crucial Pro RAM DDR4 64GB Kit (2x32GB) 3200MHz, Intel XMP 2.0
• Corsair AX860 Platinum PSU
• Samsung SSD 850 EVO 500GB (possibly to add to RAIDZ1 for caching)
• 4U Rackmount Case

Despite my experience with Proxmox on the Optiplex, I still consider myself a newbie, especially when it comes to choosing the right hardware. I’d greatly appreciate any feedback on my planned build or hardware choices.

Thanks a lot!
 
I’d prefer to keep the components I already own to minimize the total cost, but I’m open to making changes if necessary.

Enterprise SSDs with PLP give much better IOPS and fsync/s for VMs (and wear much slower): https://forum.proxmox.com/search/7556950/?q=PLP&o=date

This is a questionable piece of advice for the deployment and workload the OP describes. PLP SSDs will typically offer lower (not higher) IOPS in a setup like this. The statement only holds for constant load in a datacentre, and it makes sense for U.2/U.3 SSDs, not SATA. The OP is going for NVMe M.2 SSDs, where the choice of PLP drives is very limited anyway (Micron 7400 and 7450, and the newly released Kingston DC2000B, all of which top out at 1TB in the 2280 form factor the OP is likely limited to).

Compared with their non-PLP cousins, they are more expensive, offer lower (not higher) TBW at the same price (because you can buy a double-capacity non-PLP drive for that money) and, if not under constant load, actually have lower throughput on bursty workloads. They also rely on the high-velocity fans of a server chassis, since they come with smaller heatsinks.

A full-text search for PLP on this forum comes back with a handful of people suggesting the same thing over and over again, with no tangible evidence other than:

1) ZFS is a bad idea on NVMe drives, due to write amplification.
2) PVE writes sub-optimally into the backend database that holds the on-disk copy of its in-memory cluster filesystem, especially due to the HA stack, which can be turned off (see the sketch right after this list). This is not as much of a concern in a non-cluster setup.
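
If the HA stack is not needed on a single node, it can be switched off by disabling its two services; a minimal sketch, assuming a standalone PVE host (re-enable them if you ever build an HA cluster):

# Stop and disable the HA services on a standalone node to cut down on pmxcfs/database writes
systemctl disable --now pve-ha-lrm.service pve-ha-crm.service
# Confirm they are inactive
systemctl status pve-ha-lrm.service pve-ha-crm.service --no-pager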

Suggesting a PLP drive for a homelab is like suggesting ECC RAM. The latter makes more sense, but it cannot be fitted here anyway.

Some WD Red drives use SMR which is terrible with ZFS; make sure you get CMR (or Red Plus): https://forum.proxmox.com/search/7556955/?q=SMR&o=date

There is absolutely nothing special about "NAS" drives other than being overpriced thanks to the marketing effort and the target demographic. It is true that SMR drives are a bad idea for a RAID deployment. It is also true that any cheap hardware will do; all drives suffer failures, so the redundancy needs to be built into the setup. WD Blues or Seagates (not Barracudas - they are typically all SMR, which is why they are so cheap) will do just fine. Exos is typically cheaper than Ironwolf and is a better choice. I don't think the OP cares, but at certain capacities helium drives (currently ~12TB+) are quieter when idle.

WARNING: WD Blues need tweaking through "idle3" so they stop parking their heads a million times a year in Linux machines; otherwise they are cheap and silent.
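
For example, the timer can be inspected and disabled with idle3ctl from the idle3-tools package; a sketch where /dev/sdX is a placeholder for the actual WD drive (the drive needs a full power cycle afterwards for the change to apply):

apt install idle3-tools
# Show the current idle3 (head parking) timer of the WD drive
idle3ctl -g /dev/sdX
# Disable the idle3 timer entirely
idle3ctl -d /dev/sdX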

Finally, RAIDZ-anything is always a trickier choice than a mirror. If possible, I would go with e.g. 2 mirrored drives instead of 3+ in RAIDZ. For large capacities, striped mirrors work well; at these capacities, deploying 3 relatively small drives makes no sense to me. What would make sense is e.g. 2 drives for a mirror and a third (double-capacity) one for backups. Later on you can buy another one, retire the original 2, rinse and repeat.
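
To illustrate, a two-way mirror is a one-liner and can later be grown into striped mirrors; a sketch with a hypothetical pool name ("tank") and placeholder disk IDs:

# Create a simple two-way ZFS mirror (hypothetical pool name and placeholder disk IDs)
zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
# Grow it later into striped mirrors by adding a second pair
zpool add tank mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
zpool status tank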

NOTE: I know some members here do not like my opinions on things they disagree with. I respect their opinions, but they do not work for me for home deployments. This forum is somewhat biased on some topics (PLP being one). The OP should do a wide web search, not just rely on this forum.
 
• 2x WD_BLACK SN770 1TB M.2 2280 (OS + VMs)

This will be another unpopular opinion, but mirroring the OS pool with PVE does two things:

1) It forces you to use ZFS, which is a bad idea for NVMe drives.
2) It offers little real resiliency, since if they are exactly the same drives, purchased from the same lot, they will likely both fail at around the same time anyway.

Instead, frequent backups are a better idea; I suppose that with the suggested hardware, availability is not an issue. If availability were a concern, redundant drives are always worse than redundant everything (i.e. a cluster) - I would rather put the second OS drive into a second node.

Consider that if you have faulty RAM, you will be mirroring corrupt data onto 2 NVMe drives, unnoticed for a while.
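
As a minimal sketch of the frequent-backup approach for the host itself (placeholder path /mnt/backup, adjust to your own storage):

# Regular host configuration backup; /etc/pve is a FUSE mount but reads fine with tar
tar czf /mnt/backup/pve-host-etc-$(date +%F).tar.gz /etc
ls -lh /mnt/backup/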
 
Thank you for your valuable feedback. It has given me a much clearer perspective on the next steps, particularly in terms of setting goals and configuring the setup.

Revised OS + VM Storage Setup:
Based on your suggestions, I've decided to shift my approach to a single SSD for the OS. I'll avoid ZFS mirroring for the OS pool and instead use a single WD_BLACK SN770 1TB NVMe drive for my Proxmox VE installation and VMs. Rather than relying on a mirrored setup, I'll focus on frequent backups of both the OS and VM data.
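
For the VM side of those frequent backups, a scheduled vzdump job should cover it; a minimal sketch, assuming a backup-capable storage named "backup" (placeholder) is already defined:

# One-off snapshot-mode backup of all guests to the storage named "backup" (placeholder)
vzdump --all --storage backup --mode snapshot --compress zstd
# Recurring jobs are normally configured under Datacenter -> Backup in the web UI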

Revised Data Storage Setup:
For the main storage, I’ve opted for two 4TB HDD drives in a RAID 1 configuration. This setup provides redundancy without the added complexity of RAIDZ, making it more manageable in a homelab environment. To stay within budget, I plan to go with 2 x Seagate Ironwolf 4TB 5,900 RPM drives.

Larger HDD for Regular Backups:
I’ll also be adding a third, larger HDD for periodic backups of the mirrored drives. This drive will be used to store snapshots, archives, and other important data. Given its role, I’ve chosen the Seagate Exos 8TB 7,200 RPM for its reliability.

Samsung SSD 850 EVO 500GB (Owned):
Instead of using this SSD in a RAIDZ1 setup, I’m considering it for additional storage for non-critical data or VM backups. It might also serve as a cache or for storing less frequently accessed data.

Thanks again for taking the time to provide such great advice. I really appreciate it!
 
Samsung SSD 850 EVO 500GB (Owned):
Instead of using this SSD in a RAIDZ1 setup, I’m considering it for additional storage for non-critical data or VM backups. It might also serve as a cache or for storing less frequently accessed data.

A drive like this can be a good candidate for a ZFS cache vdev, i.e. L2ARC (I suppose you mean a ZFS mirror when you say RAID 1). That gives you the option of using even a spinning-drive ZFS pool as the backend for VMs, since in practice they would read most of what they need off the SSD. Alternatively, you can also consider other caching options (bcache or dm-cache). Note that some people do not like L2ARC because it increases your RAM consumption at the same time; if possible, adding RAM is preferred.
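
Attaching the SSD as L2ARC is a single command; a sketch with a hypothetical pool name ("tank") and a placeholder disk ID:

# Add the SSD as a cache (L2ARC) vdev to the spinning-disk pool
zpool add tank cache /dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB-PLACEHOLDER
# A cache vdev can be removed again at any time without affecting the data
zpool remove tank /dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB-PLACEHOLDER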

One thing I would not choose this drive for is backups. That said, when these drives go, they usually become read-only first, which is the preferable failure mode.

PS Do not forget to have automated testing of those single-drive backups - you do not want to find out they are failing when you need them. Some people even go for a 3-2-1 backup scheme, so one backup is not enough.
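
One way to automate that check, assuming the single backup drive carries its own ZFS pool (hypothetically named "backup"), is a scheduled scrub plus a status check:

# /etc/cron.d/backup-scrub (placeholder schedule and pool name)
# Scrub the backup pool on the 1st of each month, report problems the day after
0 3 1 * * root /usr/sbin/zpool scrub backup
0 3 2 * * root /usr/sbin/zpool status -x backup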

NB I am a bit surprised Ironwolf HDDs are (at any capacity) cheaper than Exos in your marketplace.
 
Oh and one more thing ...

Revised Data Storage Setup:
For the main storage, I’ve opted for two 4TB HDD drives in a RAID 1 configuration. This setup provides redundancy without the added complexity of RAIDZ, making it more manageable in a homelab environment. To stay within budget, I plan to go with 2 x Seagate Ironwolf 4TB 5,900 RPM drives.

Let me be the devil's advocate here. If you have backups and losing data from e.g. one hour ago is not an issue, you can even just do a stripe (or no RAID at all - a single drive of double capacity is cheaper). If your filesystem does checksums, you will know you have a problem early on, and then you revert to a backup. If this is a NAS and you mostly access the data there to read it (it changes infrequently), then an hourly ZFS send/receive gives you quite good disaster recovery options. All that with slightly better performance (on the striped setup) and, well, lower cost.
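
A minimal sketch of that hourly replication, with hypothetical dataset names (tank/nas replicated to a pool called backup); in practice a tool like sanoid/syncoid or zfs-auto-snapshot handles the snapshot bookkeeping:

# Take an hourly snapshot and send only the delta since the last replicated one
NOW=$(date +%Y%m%d-%H)
zfs snapshot tank/nas@hourly-$NOW
# hourly-PREVIOUS is a placeholder for the most recent snapshot already on the backup pool
zfs send -i tank/nas@hourly-PREVIOUS tank/nas@hourly-$NOW | zfs receive -F backup/nas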

BTW the WD Blues are very, very quiet, but some are SMR and others are not, so you need to check:
https://documents.westerndigital.co...duct-brief-western-digital-wd-blue-pc-hdd.pdf

That is something you would not have to worry about with Exos, but then those are noisier when the heads are working.
 
A drive like this can be a good candidate for a ZFS cache vdev, i.e. L2ARC (I suppose you mean a ZFS mirror when you say RAID 1). That gives you the option of using even a spinning-drive ZFS pool as the backend for VMs, since in practice they would read most of what they need off the SSD. Alternatively, you can also consider other caching options (bcache or dm-cache). Note that some people do not like L2ARC because it increases your RAM consumption at the same time; if possible, adding RAM is preferred.
Correct, I mean the ZFS mirror. Once I have all the necessary components, I'll explore which caching option to implement, keeping in mind your advice about RAM.
PS Do not forget to have automated testing of those single-drive backups - you do not want to find out they are failing when you need them. Some people even go for a 3-2-1 backup scheme, so one backup is not enough.
I plan to implement the 3-2-1 backup strategy. I’m considering converting the Dell Optiplex into a Proxmox Backup Server. Additionally, I use an external HDD for backups and Duplicati for encrypted cloud backups.
NB I am a bit surprised Ironwolf HDDs are (at any capacity) cheaper than Exos in your marketplace.
It was a pleasant surprise for me as well. They offer the best balance between noise, reliability and cost.
 
