Optimal Storage Configuration for a Low-Power Server

hwole

New Member
Apr 4, 2023
Hi guys,

I'm planning to build a low-power server with one of those Erying motherboards. Since I live in Germany and energy is expensive here, I'm somewhat constrained in the storage setup. The server is only used for an SMB share connected via a 10 Gig SFP+ card; I'll add a list of the available drives at the bottom.

My plan would be to add 1-3 HDDs plus a ZIL and a ZFS special device on one NVMe (2 partitions, 512 GB NVMe). Since the list of drives I could use is very long, my plan is to spin the HDDs down. For example, with 3x 1 TB in RaidZ1 + the 512 GB NVMe as ZIL and special device: I write to the cache only, and once a day the cache is moved onto the drives, so they spin up once a day or so, or when the cache is full, or whenever they'd have to spin up anyway. That way the drives wouldn't wear as much, and my IOPS would be a lot higher.

What occurred to me now: wouldn't it be much more practical to just write to the NVMe and have a replication task to the drive pool? E.g. SMB share data gets written to the NVMe and moved nightly to the RaidZ1 array, then deleted from the NVMe. That would limit me to 512 GB, but that should be okay. The problem: what do I do when I want to access old data that's already on the RaidZ1 array? In that case the drives would have to spin up again, right?

What setup would you use? My plan would be to spin the drives down so they only spin up 3-5 times a day, since they consume 3-8 W each.

I have the following Drives:
256 GB NVMe (Used for Boot and VM Storage)
3x 1 TB HDD (I could use 1-3 of those in a RaidZ1)
512 GB NVMe (Used for ZIL and ZFS Special Device)
2 TB HDD (1st Backup, I have 2 more Backups)
500 GB SATA SSD (Could be used)
2x 500 GB SATA HDDs (I could use those, but I don't want to add too many drives)
 
For example, with 3x 1 TB in RaidZ1 + the 512 GB NVMe as ZIL and special device: I write to the cache only, and once a day the cache is moved onto the drives

This is not how any ZFS cache and/or special device works.

Wouldn't it be much more practical to just write to the NVMe and have a replication task to the drive pool? E.g. SMB share data gets written to the NVMe and moved nightly to the RaidZ1 array, then deleted from the NVMe.

You would need to configure and maintain all this, including writing scripts, on your own on the CLI.
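As a rough illustration of what such a DIY script could look like, here is a hedged sketch of the nightly "flush NVMe to HDD pool" job described above, meant to run from cron. The dataset mount points (`/fastpool/share`, `/tank/share`) are made-up examples, not anything from this thread:

```shell
#!/bin/sh
# Hedged sketch of the nightly "flush NVMe to HDD pool" job described above.
# The mount points /fastpool/share and /tank/share are made-up examples.
set -eu

SRC=/fastpool/share    # NVMe-backed dataset the SMB share writes to
DST=/tank/share        # RaidZ1 HDD-pool dataset

# Copy everything over, then delete the source copies so the NVMe stays small.
rsync -a --remove-source-files "$SRC"/ "$DST"/

# Remove the now-empty directories left behind on the NVMe side.
find "$SRC" -mindepth 1 -type d -empty -delete
```

Note this only solves the write path; reads of old data would still have to spin the HDDs up, exactly as the OP suspects.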

This all is not really a Proxmox-thing/-topic.

I'd suggest having a look at Unraid. It can do what you want out of the box through the GUI: spin-down of the HDD array, tiered storage (in the sense of writing to the SSD(s) first and moving the data to the HDDs only once a day), and network shares. It also offers more, like mixed disk sizes in one pool/array, VMs, Docker and so on.
 
I've now decided to just leave the drives spinning (since they're laptop drives, that's okay) and add a mirrored ZIL (one partition on the NVMe boot drive and one on an extra NVMe). That's just the optimal setup for me. Hardware-wise that would be:
  • An undervolted 11900H “Erying”-type motherboard with 32 GB of RAM, consumes ~12 W at idle
  • 2 NVMe SSDs, one for boot and one for VM storage and cache, ~2x 2 W at idle
  • 3x 1 TB 2.5" HDDs that consume about 3 W each at ZFS idle, ~3x 3 W
  • A Cisco VIC 1225 10 Gig card with 2 passive DACs connected, ~5 W
  • 3 fans that shouldn't consume more than a watt each
I'm planning for around 30 W of idle power; do you think that's realistic? What could be optimised in that setup?
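As a quick sanity check, the component estimates listed above can simply be summed (all numbers are the thread's own estimates, not measurements):

```shell
# Rough idle-power budget from the parts listed above (all numbers are the
# thread's own estimates, not measurements).
board=12         # 11900H board + 32 GB RAM
nvme=$((2 * 2))  # 2x NVMe SSD
hdd=$((3 * 3))   # 3x 2.5" HDD at ZFS idle
nic=5            # Cisco VIC 1225 + 2 DACs
fans=$((3 * 1))  # 3x fan
total=$((board + nvme + hdd + nic + fans))
echo "estimated idle total: ${total} W"   # 33 W
```

That puts the component-level total at 33 W, so ~30 W at the wall is optimistic but in the right ballpark, before accounting for PSU conversion losses.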
 
My plan would be to add 1-3 HDDs plus a ZIL and a ZFS special device on one NVMe (2 partitions, 512 GB NVMe).
Lose the special device and all data on the HDDs is lost. I use a Raid1 for everything, including the special device SSDs... even on the low-power nodes. Not using mirroring is saving on the wrong end.

And I wouldn't put a ZIL on the SSD that you use as a special device. Sync writes cause most of the SSD wear, and that is all the ZIL is doing, so with a ZIL on your special device SSDs, those SSDs will fail way earlier.
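For reference, a layout following that advice could look roughly like this. The pool name and device paths are placeholders, and per the advice above the SLOG sits on different physical SSDs than the special vdev:

```shell
# Hedged example: pool name and device paths are placeholders. Per the advice
# above, the SLOG lives on different physical SSDs than the special vdev.
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

# Mirrored special device (metadata/small blocks), so one SSD can fail:
zpool add tank special mirror /dev/nvme0n1p1 /dev/nvme1n1p1

# Separate mirrored SLOG for sync writes (a few GB is enough):
zpool add tank log mirror /dev/nvme2n1p1 /dev/nvme3n1p1
```

With whole-disk redundancy like this, losing any single SSD leaves both the special vdev and the log intact.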

I'm planning for around 30 W of idle power; do you think that's realistic? What could be optimised in that setup?
Don't forget all the other devices. My minimal 24/7 setup consumes around 70 W, of which the PVE node only uses 22-25 W. The other 45 W are the router, Wi-Fi AP, managed switch, UPS and so on.
Every server (even a low-power home server) should be connected to a UPS, and even an online UPS idles at 10+ W.

What really helps here is scripting. All devices are connected to smart plugs I can turn on/off via API from scripts, and machines like backup servers or the NAS only run as long as needed: I boot them using IPMI and shut them down using the PVE/PBS/TrueNAS API. You can also dynamically start and stop VMs when not needed to reduce idle power consumption.
The most power-efficient hardware is hardware that only runs as long as needed.
You only do backups once a day? Get a dedicated server for that and let it run only for the few minutes per day your daily backups take.
You only watch a single movie per day? Get a dedicated server that stores your movie collection and runs a Plex/Emby/Jellyfin VM with a GPU for transcoding. If you want to watch a movie, boot that server when you turn on the TV and shut it down when you've finished watching. That way the HDDs for your movies, the GPU for transcoding and so on aren't adding to your 24/7 idle power consumption.
That reduced the average idle power consumption of my whole homelab from ~300 W to ~90 W.
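The boot-run-shutdown pattern described above could be sketched like this. The MAC address, hostname and backup script are made-up placeholders:

```shell
#!/bin/sh
# Hedged sketch of the boot-on-demand pattern: wake the backup box, run the
# job, power it off. MAC, hostname and backup script are made-up placeholders.
set -eu

MAC="aa:bb:cc:dd:ee:ff"
HOST="backup.lan"

wakeonlan "$MAC"   # or via IPMI: ipmitool -I lanplus -H <bmc-ip> -U admin chassis power on

# Wait until the host answers ping, then run the backup and shut it down again.
until ping -c1 -W1 "$HOST" >/dev/null 2>&1; do sleep 5; done
ssh root@"$HOST" '/root/run-backup.sh && poweroff'
```

The same skeleton works for the movie-server case: trigger the wake from whatever turns the TV on, and the shutdown from an idle timer.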
 
Don't forget all the other devices. My minimal 24/7 setup consumes around 70 W, of which the PVE node only uses 22-25 W. The other 45 W are the router, Wi-Fi AP, managed switch, UPS and so on.
Sadly yes, and I totally agree with everything you say. I'm battling this too... my new plan is to add a power station, charge it during the day via solar, and use the stored power at night.
 
Sorry for replying that late, but I've just seen the reply now.
Lose the special device and all data on the HDDs is lost. I use a Raid1 for everything, including the special device SSDs... even on the low-power nodes. Not using mirroring is saving on the wrong end. And I wouldn't put a ZIL on the SSD that you use as a special device. Sync writes cause most of the SSD wear, and that is all the ZIL is doing, so with a ZIL on your special device SSDs, those SSDs will fail way earlier.
I thought about it again and decided to just use two 50 GB partitions on the two NVMe drives in a mirror as a write cache. That means there is no special device, since the pool is just going to be used as an SMB share. Maybe I'll also add another 1 TB drive, just for pool size.

Don't forget all the other devices. My minimal 24/7 setup consumes around 70 W, of which the PVE node only uses 22-25 W. The other 45 W are the router, Wi-Fi AP, managed switch, UPS and so on.
Every server (even a low-power home server) should be connected to a UPS, and even an online UPS idles at 10+ W.
Yes, I've already measured all the other devices, but since I live in my parents' house and the switch and APs are my dad's, I don't have to pay for power. Still, I definitely want to keep power consumption as low as possible. For me it'll just be a playing-around server for a couple of VMs and containers. But since it has around 3x the performance of the other server, I'll offer my dad to migrate some services over to it. There's also another server in the house that's running 24/7; once I've bought the hardware (motherboard and RAM), I'll set up Wake-on-LAN for that server and use scheduled shutdown.

Future upgrades will maybe be higher-capacity drives or something like that, but a GPU isn't planned for now, since I can use the 11th-gen iGPU for that.

Thanks for all the tips
Ole
 
I thought about it again and decided to just use two 50 GB partitions on the two NVMe drives in a mirror as a write cache. That means there is no special device, since the pool is just going to be used as an SMB share. Maybe I'll also add another 1 TB drive, just for pool size.
There is no write cache in ZFS. If you mean a SLOG, you won't need 50 GB for that; more like 5 GB. You WILL want a special device: it is the single most performance-relevant vdev addition you can make to a hard-disk pool.
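The "more like 5 GB" figure follows from how the SLOG works: it only has to hold sync writes that haven't yet been committed in a transaction group (the OpenZFS default `zfs_txg_timeout` is 5 seconds). A rough worst-case check, assuming the 10 GbE link is the bottleneck and roughly two txgs can be outstanding (both assumptions):

```shell
# Rule-of-thumb SLOG sizing: the log only holds sync writes not yet committed
# to the pool, i.e. at most a couple of transaction groups' worth of data.
link_gbit=10        # 10 GbE SFP+ link from the thread
txg_timeout=5       # OpenZFS default zfs_txg_timeout (seconds)
txgs=2              # assumption: ~2 txgs outstanding
slog_gb=$(awk -v g="$link_gbit" -v t="$txg_timeout" -v n="$txgs" \
    'BEGIN { printf "%.1f", g / 8 * t * n }')
echo "worst-case SLOG use: ${slog_gb} GB"   # 12.5 GB
```

Even with the NIC fully saturated with sync writes, that is ~12.5 GB, nowhere near 50 GB; for realistic workloads a few GB is plenty.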
 
