First Proxmox homelab build - sanity check on hardware, service layout, and storage

halfdan1988

New Member
Apr 6, 2026
Hi everyone,
TL;DR: Planning my first Proxmox home lab build and looking for a sanity check on the hardware and service layout before I commit.

I've been researching a lot, and I'm a bit torn between official documentation - which I assume mostly targets professional setups - and what homelabbers actually do. I'm not looking to cut corners for the sake of it: I want to get as close to a "proper" setup as I reasonably can. At the same time, I don't have the resources to roll out the full enterprise recommendation list, and I'm not sure if that would even be the right call for a home lab. So I'd appreciate input on where it makes sense to invest and where consumer-grade is fine.

Proxmox Host:
  • CPU: Intel i3-14100
  • RAM: G.Skill Ripjaws V 32GB (2×16GB) DDR4 3200 MHz (non-ECC)
  • HDDs: 2× Seagate IronWolf Pro 16TB

PBS (Proxmox Backup Server):
  • Model: Dell Optiplex 7060 Micro
  • CPU: Intel i3-8100T
  • RAM: 2×4GB (planning to upgrade to 2×8GB, or 2×16GB if it makes sense)

Spare SSDs on hand:
  • Crucial BX500 240GB (new, originally planned as host OS disk - now considering it for PBS OS instead)
  • Corsair Force MP510 480GB (NVMe, from an old build)
  • Samsung 860 EVO 1TB (SATA, from an old build)

My plan for services looks roughly like this:

VMs:

arr-vm (docker compose)
  • Sonarr
  • Radarr
  • Jellyseerr
  • Prowlarr
  • Bazarr
  • SABnzbd

media-vm (docker compose, iGPU passthrough)
  • Jellyfin
  • Immich

apps-vm (docker compose)
  • OwnTracks Recorder
  • OwnTracks Frontend
  • Linkding

nextcloud-vm
  • Nextcloud

LXCs:
  • Samba
  • Caddy
  • AdGuard Home
  • Beszel
  • Uptime Kuma
  • Scrutiny
  • Mosquitto
  • Vaultwarden
  • Kavita

Questions:

Service layout
  • Is the general split between VMs and LXCs reasonable, or would you run some of these differently?

Hardware
  • What should I change or supplement?
  • Is a mirror for the host OS only useful for availability, or are there other reasons to do it? (I can live with some downtime.)
  • Where should I source enterprise-grade SSDs (for OS and/or VM storage)? Any specific models you'd recommend?
  • Can I reuse any of the old disks I listed, or would you discourage that?

Storage / ZFS config
  • Is a special vdev (mirrored) recommended for the HDD pool and/or PBS?
  • For PBS: assuming it's used for backups/snapshots of the VMs and LXCs above, how much storage should I plan for? Any rule of thumb for calculating this based on my setup?

Power
  • Do I need a UPS for this build? If so, what should I look for, and how should I wire up graceful shutdown for the host and PBS?

Thanks in advance!
 
Hi halfdan1988,

just a few incomplete thoughts on the setup:

The service distribution across VMs/LXCs seems reasonable.
Just save yourself the pain and run Docker (and equivalents) in VMs instead of LXCs.
Currently most of my workload is imported from OCI images as LXCs - that saves overhead and I can use prebuilt formats for most software - but this may heavily depend on the software.
E.g. for file sharing I gave up on keeping Nextcloud client and server versions in sync, so we switched to SFTPGo in an LXC with WebDAV + sshfs support. Works very smoothly so far.

With ZFS you should at least be aware of the ECC-RAM topic. BTRFS with RAID1 could be an alternative. With BTRFS you might also like to look into a bcache setup, which also enables hot/cold data tiering.

Depending on your downtime and restore requirements, you could basically install PBS and PVE on the same host and use external disks as removable datastores for offsite copies.
You should use PBS: with its dedup feature your backups will save a certain amount of disk space over time, especially with Immich and data sharing. The initial dedup rate will depend on the data structure.
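For the external-disk idea, a datastore on a mounted disk can be sketched roughly like this from the PBS shell. The mount point, label and datastore name ("offsite") are placeholders, not anything from your setup:

```shell
# Assumption: the external disk is already formatted and labelled "usb-backup".
mount /dev/disk/by-label/usb-backup /mnt/usb-backup

# Create a PBS datastore on the mounted disk
proxmox-backup-manager datastore create offsite /mnt/usb-backup/pbs-store

# Before unplugging: remove the datastore *configuration* -
# this does not delete the backup data on the disk itself
proxmox-backup-manager datastore remove offsite
```

Newer PBS versions also have first-class removable-datastore support in the GUI, which is worth checking before scripting this by hand.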

I would recommend a (refurbished) enterprise server board (maybe passively cooled) over the Alibaba ones, for peace of mind. They usually ship with enough SATA, network and PCIe ports, too. The power consumption of board and CPU should be suitable for the environment.

I am using a simple UPS, which has saved me at least once. There are different models with different functions: on the one hand they are supposed to deliver power during a power loss, on the other hand some can buffer against grid frequency deviations. Depending on the local grid, that might be useful.
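On the graceful-shutdown part of the question: the usual tool for this on PVE/PBS is NUT (Network UPS Tools). A minimal sketch, assuming a USB-connected UPS on the PVE host and the PBS box shutting down over the network; the UPS name, user and password are placeholders:

```shell
# --- On the PVE host (UPS attached via USB) ---
# /etc/nut/nut.conf:    MODE=netserver
# /etc/nut/ups.conf:
#     [myups]
#         driver = usbhid-ups
#         port = auto
# /etc/nut/upsd.conf:   LISTEN 0.0.0.0 3493
# /etc/nut/upsd.users:
#     [monuser]
#         password = changeme
#         upsmon master
# /etc/nut/upsmon.conf:
#     MONITOR myups@localhost 1 monuser changeme master

# --- On the PBS box (no direct UPS connection) ---
# /etc/nut/nut.conf:    MODE=netclient
# /etc/nut/upsmon.conf:
#     MONITOR myups@pve-host 1 monuser changeme slave
```

With this layout, upsmon on both machines triggers a clean shutdown when the UPS reports low battery; the PBS box learns about it over port 3493.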

The family considers the home server very useful; we also run the home automation on it.
For proper segregation and configuration, the ISP router basically got reduced to an uplink modem; internal routing is done via a VyOS (formerly OpenWrt) VM linked to the WiFi access points, which enables granular VPN configuration. This also saved several headaches in discussions with the ISP during uplink outages.

Concerning the disk hardware, monitoring might be of interest. Gotify ships with a CLI that can be used in notification scripts for alerting, e.g. on the SMART values of SSDs. Reliable disk recommendations are difficult in times of product piracy. I am using regular WD Blues, which are working fine so far.
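A rough sketch of such a notification script, assuming `smartctl` and the Gotify CLI are installed and the CLI is already configured with a server/token. The device path is a placeholder and the attribute parsing is deliberately simplistic; adjust per drive:

```shell
#!/bin/sh
# Push a Gotify alert when a disk reports reallocated sectors.
DISK=/dev/sda   # placeholder - use your actual device

# Extract the raw value of SMART attribute 5 (Reallocated_Sector_Ct)
REALLOC=$(smartctl -A "$DISK" | awk '$1 == 5 { print $NF }')

if [ -n "$REALLOC" ] && [ "$REALLOC" -gt 0 ]; then
    gotify push --title "SMART warning: $DISK" \
        "Reallocated sectors: $REALLOC"
fi
```

Run it from cron or a systemd timer; Scrutiny (already on the LXC list) covers similar ground with a web UI, so this is more of a lightweight complement.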

BR, Lucas
 
Hey Lucas, thanks for taking the time to reply!
Good to hear the VM/LXC split and the "Docker in VMs" approach sound reasonable. The OCI-as-LXC workflow and the SFTPGo tip are both new to me; I'll park them for now but keep them in mind once I've got the basics running. The removable-disk idea for offsite PBS copies is something I'll definitely work in.

Noted on the ZFS/ECC topic — though I've already got the board and non-ECC RAM sitting here, so ECC isn't really on the table for this build anyway. I'll have a look at the BTRFS+bcache alternative before committing.

Appreciate the input!
 
Noted on the ZFS/ECC topic — though I've already got the board and non-ECC RAM sitting here, so ECC isn't really on the table for this build anyway. I'll have a look at the BTRFS+bcache alternative before committing.

Please note that the need for ECC RAM with ZFS is quite overblown:
https://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/
This blog post is quite long (but still good and highly recommended!) reading; the most important point imho is that it references a comment by ZFS developer Matthew Ahrens:
There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.
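For reference, that (unsupported, debug-only) flag can be set on Linux like this; mainly illustrative, not a recommendation:

```shell
# Set at runtime (lost on reboot):
echo 0x10 > /sys/module/zfs/parameters/zfs_flags

# Or persist it via modprobe options and rebuild the initramfs:
echo "options zfs zfs_flags=0x10" > /etc/modprobe.d/zfs-debug.conf
update-initramfs -u
```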

I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.

https://arstechnica.com/civis/threa...esystem-on-linux.1235679/page-4#post-26303271

In other words: ECC RAM is always a good idea, but if you don't have it you can still use ZFS, especially in a homelab.
With ZFS you profit from its features ( see https://forum.proxmox.com/threads/f...y-a-few-disks-should-i-use-zfs-at-all.160037/ ) and its good integration in Proxmox VE; BTRFS is still a technology preview and known to lose data in RAID5/6 setups. A BTRFS mirror or striped mirror should be fine though. Please also note that ZFS RAIDZ (the ZFS equivalent of RAID5/6) is ok for a bulk data store but not a good fit for hosting VMs (see here: https://forum.proxmox.com/threads/fabu-can-i-use-zfs-raidz-for-my-vms.159923/ )
At the moment this shouldn't concern you with your existing hardware, but it is something to keep in mind for future upgrades.

Personally I would prefer to have a mirror of my backups on the PBS, but with a mini PC this might be difficult to implement. So for now I would go with one of your SSDs as the OS disk for the PBS and buy another HDD for its datastore. The performance won't be great though once your amount of data grows beyond roughly 1 TB.

For the ProxmoxVE I see two issues:
- Your HDDs won't give you great performance for VMs, and 16 TB is way too large for an OS install.
- None of your SSDs has power-loss protection. Since Proxmox VE writes a lot of logging and configuration data, their lifetime will be greatly reduced if you use them for the OS install and for hosting VMs.

In theory you could combine the HDDs as a mirror and then add two SSDs as a special device mirror. But the special device will only get the capacity of the smallest SSD, and if the special device is lost, everything on the HDDs is lost too.
So for this reason I would recommend that you get at least one more 1 TB SSD or (better) two used 1 TB SSDs with power-loss protection. Install the Proxmox VE OS on a ZFS mirror of the two 1 TB SSDs; ZFS will allow you to use the remainder of the SSDs for VM data.
Alternatively you could install the Proxmox VE OS to a mirror of the two HDDs, then add two 1 TB SSDs as a special device mirror and use the setup described by @LnxBil to use the resulting ZFS pool as combined OS/bulk data/VM storage.
Or, third possibility (and the one I would recommend): get a used 1 TB SSD with power-loss protection for VM storage and one or two of the cheapest used enterprise SSDs (something like 16 or 32 GB would be enough; 240 GB or 480 GB would match your existing SSDs though). Then build three pools:
- First mirror out of your smallest SSDs for the Proxmox VE OS
- Second mirror out of the 1 TB SSDs for hosting VM data
- Third mirror on the HDDs for bulk data
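The three-pool layout could be sketched like this. Device names and pool names are placeholders (in practice use the real /dev/disk/by-id paths), and the OS pool is normally created by the Proxmox VE installer rather than by hand:

```shell
# OS pool: select "ZFS (RAID1)" on the two small SSDs in the PVE
# installer - it creates "rpool" for you, no manual command needed.

# VM pool: mirror of the two 1 TB SSDs
zpool create -o ashift=12 vmpool mirror \
    /dev/disk/by-id/ssd-1tb-A /dev/disk/by-id/ssd-1tb-B

# Bulk pool: mirror of the two 16 TB IronWolf Pro HDDs
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/hdd-16tb-A /dev/disk/by-id/hdd-16tb-B
```

ashift=12 pins 4K sectors, which is the safe choice for both the SSDs and these HDDs; after creation, add the pools as ZFS storage in the PVE GUI under Datacenter → Storage.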

I hope I haven't lost you in my ramblings ;)