First Proxmox homelab build - sanity check on hardware, service layout, and storage

halfdan1988

New Member
Apr 6, 2026
Hi everyone,
TL;DR: Planning my first Proxmox home lab build and looking for a sanity check on the hardware and service layout before I commit.

I've been researching a lot, and I'm a bit torn between official documentation - which I assume mostly targets professional setups - and what homelabbers actually do. I'm not looking to cut corners for the sake of it: I want to get as close to a "proper" setup as I reasonably can. At the same time, I don't have the resources to roll out the full enterprise recommendation list, and I'm not sure if that would even be the right call for a home lab. So I'd appreciate input on where it makes sense to invest and where consumer-grade is fine.

Proxmox Host:
  • CPU: Intel i3-14100
  • RAM: G.Skill Ripjaws V 32GB (2×16GB) DDR4 3200 MHz (non-ECC)
  • HDDs: 2× Seagate IronWolf Pro 16TB

PBS (Proxmox Backup Server):
  • Model: Dell Optiplex 7060 Micro
  • CPU: Intel i3-8100T
  • RAM: 2×4GB (planning to upgrade to 2×8GB, or 2×16GB if it makes sense)

Spare SSDs on hand:
  • Crucial BX500 240GB (new, originally planned as host OS disk - now considering it for PBS OS instead)
  • Corsair Force MP510 480GB (NVMe, from an old build)
  • Samsung 860 EVO 1TB (SATA, from an old build)

My plan for the services looks roughly like this:

VMs:

arr-vm (docker compose)
  • Sonarr
  • Radarr
  • Jellyseerr
  • Prowlarr
  • Bazarr
  • SABnzbd

media-vm (docker compose, iGPU passthrough)
  • Jellyfin
  • Immich

apps-vm (docker compose)
  • OwnTracks Recorder
  • OwnTracks Frontend
  • Linkding

nextcloud-vm
  • Nextcloud

LXCs:
  • Samba
  • Caddy
  • AdGuard Home
  • Beszel
  • Uptime Kuma
  • Scrutiny
  • Mosquitto
  • Vaultwarden
  • Kavita

Questions:

Service layout
  • Is the general split between VMs and LXCs reasonable, or would you run some of these differently?

Hardware
  • What should I change or supplement?
  • Is a mirror for the host OS only useful for availability, or are there other reasons to do it? (I can live with some downtime.)
  • Where should I source enterprise-grade SSDs (for OS and/or VM storage)? Any specific models you'd recommend?
  • Can I reuse any of the old disks I listed, or would you discourage that?

Storage / ZFS config
  • Is a special vdev (mirrored) recommended for the HDD pool and/or PBS?
  • For PBS: assuming it's used for backups/snapshots of the VMs and LXCs above, how much storage should I plan for? Any rule of thumb for calculating this based on my setup?

Power
  • Do I need a UPS for this build? If so, what should I look for, and how should I wire up graceful shutdown for the host and PBS?

Thanks in advance!
 
Hi halfdan1988,

just a few incomplete thoughts on the setup:

The service distribution between VMs and LXCs seems reasonable.
Just save yourself the pain and run Docker (and equivalents) in VMs instead of LXCs.
Currently most of my workload is imported from OCI images as LXCs - that saves overhead and I can use prebuilt formats for most software - but this may heavily depend on the software.
E.g. for file sharing I gave up on keeping Nextcloud client and server versions in sync, so we switched to SFTPGo in an LXC with WebDAV + sshfs support. It works very smoothly so far.

With ZFS you should at least be aware of the ECC RAM topic. BTRFS with RAID1 could be an alternative. With BTRFS you might also like to look into a bcache setup, which also enables hot and cold data tiering.

Depending on your downtime and restore requirements, you could basically install PBS and PVE on the same host and use external disks as removable datastores for offsite copies.
You should use PBS: with the dedup feature your backups will save a certain amount of disk space over time, especially used with Immich and data sharing. The initial dedup rate will depend on the data structure.

I would recommend a (refurbished) enterprise server board (maybe passively cooled) over the Alibaba ones, for peace of mind. They usually ship with enough SATA, network and PCIe ports, too. The power consumption of board and CPU should be suitable for the environment.

I am using a simple UPS, which has saved me at least once. There are different models with different functions: on the one hand they are supposed to deliver power during a power loss, on the other hand they can help stabilize the grid frequency if it deviates. Depending on the local grid, that might be useful.
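For the graceful-shutdown part, a common approach is NUT (Network UPS Tools): the machine with the USB cable to the UPS runs the NUT server, the second machine attaches as a network client, and both shut down on low battery. A minimal sketch; UPS name, IP and password are placeholders, and upsd.conf/upsd.users need matching entries:

```
# /etc/nut/ups.conf on the PVE host (USB-connected NUT server)
[homeups]
    driver = usbhid-ups
    port = auto

# /etc/nut/upsmon.conf on the PVE host
MONITOR homeups@localhost 1 upsmon <password> master

# /etc/nut/upsmon.conf on the PBS machine (network client)
MONITOR homeups@<pve-ip> 1 upsmon <password> slave
```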

The family considers the home server very useful; we also run the home automation on it.
For proper segregation and configuration, the ISP router was basically reduced to an uplink modem, and routing for the internal network is done via a VyOS (formerly OpenWrt) VM linked to WiFi access points, which enables granular VPN configuration. This also saved several headaches in discussions with the ISP during uplink outages.

Concerning the disk hardware, monitoring might be of interest. Gotify ships with a CLI that can be used in notification scripts for alerting, e.g. on the SMART values of SSDs. Reliable recommendations for disks are difficult in times of product piracy. I am using regular WD Blue drives, which work fine so far.
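A notification script along those lines could look like this; it is only a sketch, assuming the Gotify CLI is installed and configured, and the SMART attribute name varies by drive model:

```shell
#!/bin/sh
# Sketch: push a Gotify alert when a disk reports reallocated sectors.
# Device path and attribute are examples; adjust for your drives.
DEV=/dev/sda
REALLOC=$(smartctl -A "$DEV" | awk '/Reallocated_Sector_Ct/ {print $10}')
if [ -n "$REALLOC" ] && [ "$REALLOC" -gt 0 ]; then
    gotify push "SMART warning on $DEV: $REALLOC reallocated sectors"
fi
```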

BR, Lucas
 
Hey Lucas, thanks for taking the time to reply!
Good to hear the VM/LXC split and the "Docker in VMs" approach sound reasonable. The OCI-as-LXC workflow and the SFTPGo tip are both new to me; I'll park them for now but keep them in mind once I've got the basics running. The removable-disk idea for offsite PBS copies is something I'll definitely work in.

Noted on the ZFS/ECC topic — though I've already got the board and non-ECC RAM sitting here, so ECC isn't really on the table for this build anyway. I'll have a look at the BTRFS+bcache alternative before committing.

Appreciate the input!
 
Noted on the ZFS/ECC topic — though I've already got the board and non-ECC RAM sitting here, so ECC isn't really on the table for this build anyway. I'll have a look at the BTRFS+bcache alternative before committing.

Please note that the need for ECC RAM with ZFS is quite overblown:
https://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/
This blog post is quite long (but still good and highly recommended!) reading; the most important point imho is that it references a comment by ZFS developer Matthew Ahrens:
There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.

I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.

https://arstechnica.com/civis/threa...esystem-on-linux.1235679/page-4#post-26303271
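For completeness, the unsupported flag Ahrens mentions is a ZFS module parameter; it can be set at runtime or persistently. It is debug-oriented, so treat this as a sketch rather than a recommendation:

```
# runtime, until reboot
echo 0x10 > /sys/module/zfs/parameters/zfs_flags

# persistent, via /etc/modprobe.d/zfs.conf
options zfs zfs_flags=0x10
```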

In other words: ECC RAM is always a good idea, but if you don't have it you can still use ZFS, especially in a homelab.
With ZFS you profit from its features (see https://forum.proxmox.com/threads/f...y-a-few-disks-should-i-use-zfs-at-all.160037/ ) and its good integration into Proxmox VE. BTRFS is still a technology preview and known to lose data in RAID5/6 setups; a BTRFS mirror or striped mirror should be fine though. Please also note that ZFS RAIDZ (the ZFS equivalent to RAID5/6) is ok for a bulk data store but not a good fit for hosting VMs (see here: https://forum.proxmox.com/threads/fabu-can-i-use-zfs-raidz-for-my-vms.159923/ ).
At the moment this shouldn't concern you with your existing hardware, but it is something to keep in mind for future upgrades.

Personally I would prefer to have a mirror of my backups on the PBS, but with a mini-PC this might be difficult to implement. So for now I would go with one of your SSDs as OS disk for the PBS and buy another HDD for your PBS. The performance won't be great though if your amount of data exceeds a threshold of around 1 TB.

For the ProxmoxVE I see two issues:
- Your HDDs won't give you great performance for VMs, and 16 TB is way too large for an OS install.
- None of your SSDs has power-loss protection. Since Proxmox VE writes a lot of logging and configuration data, their lifetime will be greatly reduced if you use them for the OS install and for hosting VMs.

In theory you could combine the HDDs as a mirror and then add two SSDs as a special device mirror. But the special device will only get the capacity of the smallest SSD. And if the special device gets lost, everything on the HDDs is lost too.
So for this reason I would recommend that you get at least one more 1 TB SSD or (better) two used 1 TB SSDs with power-loss protection. Install the Proxmox VE OS on a ZFS mirror out of the two 1 TB SSDs; ZFS will allow you to use the remainder of the SSDs for VM data.
Alternatively you could install the Proxmox VE OS on a mirror out of the two HDDs, then add two 1 TB SSDs as a special device mirror and use the setup described by @LnxBil to use the resulting ZFS pool as combined OS/bulk data/VM storage.
Or a third possibility (and the one I would recommend): get a used 1 TB SSD with power-loss protection for VM storage plus one or two of the cheapest used enterprise SSDs (something like 16 or 32 GB would be enough; 240 or 480 GB would match your existing SSDs though). Then build three pools:
- First mirror: your smallest SSDs for the Proxmox VE OS
- Second mirror: the 1 TB SSDs for hosting VM data
- Third mirror: the HDDs for bulk data
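As a rough sketch of how the two data pools would be created (device names are placeholders; the OS mirror would normally be set up by the Proxmox installer instead):

```shell
# VM pool: mirror of the two 1 TB SSDs (placeholder by-id names)
zpool create -o ashift=12 vmpool mirror \
    /dev/disk/by-id/ata-SSD1TB-A /dev/disk/by-id/ata-SSD1TB-B

# bulk pool: mirror of the two 16 TB IronWolf Pro HDDs
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-IRONWOLF-A /dev/disk/by-id/ata-IRONWOLF-B
```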

I hope I haven't lost you in my ramblings ;)
 
Hi Johannes,
thanks a lot for the detailed write-up - that actually clears up a lot for me.

Good to know the ECC/ZFS thing isn't as dramatic as it's sometimes made out to be. I'll definitely keep the RAIDZ-for-VMs caveat in mind for later upgrades. That article was quite interesting!
Good to hear the PLP concern confirmed, I was already leaning towards enterprise SSDs for the VM storage, but I hadn't planned to extend that logic to the OS disk. That makes sense to reconsider.
Based on your option 3, here's what I'm currently leaning towards:

OS mirror: 2× used enterprise SSDs (~120–240 GB) with PLP — retiring the BX500 for this role
VM storage mirror: 2× used enterprise SSDs (~1–2 TB) with PLP
Bulk data mirror: the 2× 16 TB IronWolf Pros as planned
Optional: a mirrored special vdev on the HDD pool with 2 smaller enterprise SSDs - is that worth it in your view for a homelab, or overkill at this scale?

For PBS I'll go with one of my existing SSDs as the OS disk and add a dedicated internal HDD for the PBS backup target, as you suggested. Separately, I already have an 8 TB external USB HDD that I'll use for regular file-level backups of the bulk data pool. I'll also bump the Optiplex to 2×8 GB.

One follow-up: do you (or anyone else on the forum) have concrete recommendations for which used enterprise SSD models to look out for, and a rough price range I should expect in Europe? I've seen names like Intel S4510/S4610, Samsung PM883/SM883 and Micron 5200/5300 thrown around, but I'm not sure which are actually worth hunting for used and what's a fair price vs. getting ripped off. Any preferred sources (eBay, specific shops) would also be appreciated.
Thanks again!
 
You are welcome:
I noticed that I forgot to link the thread where lnxbil described his way of using special_small_blocks, so I edited my post to add the link:


Good to know the ECC/ZFS thing isn't as dramatic as it's sometimes made out to be. I'll definitely keep the RAIDZ-for-VMs caveat in mind for later upgrades. That article was quite interesting!
Good to hear the PLP concern confirmed, I was already leaning towards enterprise SSDs for the VM storage, but I hadn't planned to extend that logic to the OS disk. That makes sense to reconsider.
Based on your option 3, here's what I'm currently leaning towards:

OS mirror: 2× used enterprise SSDs (~120–240 GB) with PLP — retiring the BX500 for this role

Please note that a dedicated OS mirror is helpful but not always needed. It can also be a valid strategy to have everything on a combined OS+VM mirror. You won't have much storage for VMs on a 120/240 GB mirror though ;) While it's nice to separate OS and VMs (since it might save you some time on restore), it's not needed if you back up your host configuration too and have recent VM/LXC backups to restore from. Obviously you should test said restores from time to time. If you want to save some money, I also don't see any problem in continuing to use one of your existing SSDs in combined mirrors with enterprise SSDs. Obviously you won't get the same durability, IOPS and performance as with PLP-only mirrors, but it will save you some money for the moment ;) And you can always replace a failed SSD at some later point.

VM storage mirror: 2× used enterprise SSDs (~1–2 TB) with PLP
Bulk data mirror: the 2× 16 TB IronWolf Pros as planned
Optional: a mirrored special vdev on the HDD pool with 2 smaller enterprise SSDs - is that worth it in your view for a homelab, or overkill at this scale?

Depends on the workload. If your workloads don't actually write a lot to the files but need to access their metadata a lot (e.g. last modification date), they are definitely worth it. You can also configure them to act as storage for small files by configuring a fitting block size. You need to be careful though: if you do something wrong, your special device might run out of space, with the result that all new data (including small files and metadata) gets written to the HDDs, rendering your special device useless.
The rule of thumb for a special device is that it should have around 0.02% to 2% of the HDD capacity, depending on how you configure the small_blocks parameter, so depending on your desired capacity this will add to your costs. You can also add a special vdev at some later point; you will need to rewrite the old data to profit from it though.
One example: the Backup Server garbage collection job mainly reads and updates the metadata of the files, so it will profit a lot from a special device (yes, I know you don't plan to use a special vdev for your Optiplex, and it doesn't have room for it anyhow; it's just an example ;) ). A verify job on the other hand needs to actually read all data (including the data on the HDDs), so it will still profit a little, but not much. See the following thread for more information: https://forum.proxmox.com/threads/speed-up-backupserver-by-adding-zfs-special-device.85668/
The same is true for similar workloads where a lot of small files and their metadata are accessed frequently. In recent ZFS versions the developers also introduced a feature whereby the special device also acts as a log device if you didn't configure a dedicated one. A log device speeds up sync writes like the ones used in databases; in most use cases a log device won't do much though.
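To put the rule of thumb into numbers for your 2×16 TB mirror (roughly 16 TiB usable), a quick back-of-the-envelope calculation:

```shell
# Rule of thumb: special vdev of ~0.02 % (metadata only) up to
# ~2 % (metadata plus small blocks) of the usable HDD capacity.
pool_gib=$((16 * 1024))            # ~16 TiB usable, in GiB
low=$((pool_gib * 2 / 10000))      # 0.02 % -> about 3 GiB
high=$((pool_gib * 200 / 10000))   # 2 %    -> about 327 GiB
echo "special vdev: ${low} to ${high} GiB"
```

So even at the generous end, a pair of small enterprise SSDs would be plenty.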

For PBS I'll go with one of my existing SSDs as the OS disk and add a dedicated internal HDD for the PBS backup target, as you suggested. Separately, I already have an 8 TB external USB HDD that I'll use for regular file-level backups of the bulk data pool. I'll also bump the Optiplex to 2×8 GB.

If you are on a budget, I would invest the money for the 2×8 GB in something else. The RAM of the PBS will mainly be used as cache to speed up file access. This is great if you plan to have the PBS running 24/7. But if you only want to start it up for backups to save on energy costs, it won't do much (since the RAM and thus the cache is lost with every shutdown/reboot). You mentioned that you have 2×4 GB of RAM in your Optiplex, so a total of 8 GB. This is more than enough for your needs, as you can see in the system requirements:
Memory: minimum 4 GiB for the OS, filesystem cache and Proxmox Backup Server daemons. Add at least another GiB per TiB storage space
https://pbs.proxmox.com/docs/system-requirements.html

The GiB-per-TiB rule is more a rule of thumb than a fixed recommendation, and you won't need it right from the start. Since at the beginning you won't use all of your storage space, 2×4 GB will be plenty for PBS. If you want to invest in RAM, do it on the PVE host, since overprovisioning RAM doesn't really work (even with kernel samepage merging or ballooning). At least in my homelab I run out of RAM on the PVE hosts before anything else ;)
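Plugged into your numbers (say the PBS datastore eventually grows to 4 TiB, an assumption for illustration), the rule works out like this:

```shell
# PBS rule of thumb: 4 GiB base plus 1 GiB per TiB of datastore.
base_gib=4
datastore_tib=4                       # assumed datastore size in TiB
needed_gib=$((base_gib + datastore_tib))
echo "recommended RAM: ${needed_gib} GiB"
```

So the existing 2×4 GB lands exactly on the recommendation for a 4 TiB store.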

One follow-up: do you (or anyone else on the forum) have concrete recommendations for which used enterprise SSD models to look out for, and a rough price range I should expect in Europe?


Basically any enterprise SSD with power-loss protection will do. Mixed-use or write-intensive models have a higher endurance than read-intensive ones, but even read-intensive models will be a lot better than regular SSDs. I bought my last SSDs before the current AI hype, so I'm not quite sure whether those prices are still valid. I just looked up some shops for used server hardware and eBay; used server SSDs with PLP go for around 60 Euro for the smaller models and 100-200 for models with 1 TB or more. But I didn't invest much time just to answer your question, better look for yourself ;) These prices match what I paid (I might have paid a little less, but not much). eBay also tends to be cheaper than dedicated web shops for used enterprise hardware. On the other hand, I would expect those shops can be trusted more not to sell total junk.
If this is a bit much investment for your current budget, it's an absolutely valid approach to create a mirror out of one of your existing SSDs and an enterprise SSD. In fact this is what I did myself in my mini-PCs, since they can only host one M.2 2280 NVMe SSD and one SATA disk. Used M.2 NVMe SSDs are hard to find, and I couldn't afford the prices for a new one (often enough they also don't fit the 2280 form factor, especially at larger capacities). Now my two Proxmox VE mini-PCs each have a ZFS mirror (one consumer NVMe, one SATA server SSD with PLP) which hosts the VMs AND the OS. The wearout level on the NVMes is of course higher (at the moment around 20-40%), but that's a tradeoff I can live with. If one of them fails, I will have enough time to replace it with a new one.
For example you could just buy one used 240 or 480 GB SSD with PLP (depending on which of your disks you want to use as OS disk for your PBS) to combine with your existing SSD of the same size for the OS, plus another 1 TB/960 GB SSD with PLP (yes, server SSDs usually have less capacity than their consumer equivalents) as VM storage. The remaining disk space on the OS disks could then be used as scratch space for disk images, VM/LXC templates and temporary data (e.g. if one of your VMs needs a virtual disk which doesn't need to be backed up since it only holds stuff like web proxy caches or other temporary files).

HTH ;)
 
Proxmox Host:
  • CPU: Intel i3-14100
  • RAM: G.Skill Ripjaws V 32GB (2×16GB) DDR4 3200 MHz (non-ECC)
  • HDDs: 2× Seagate IronWolf Pro 16TB
You are asking a lot from a relatively modest host. It's doable, but you'll need to temper your "performance" expectations. On that subject-

Just save yourself the pain and run Docker (and equivalents) in VMs instead of LXCs.
That will multiply the "performance expectations issue" substantially. In addition, doing a passthrough of an iGPU to a VM is not "saving pain"; it is orders of magnitude more difficult than passing it to an LXC. VMs make the most sense if you'll be deploying a cluster with live failover, which wouldn't work for the one VM that has the iGPU passthrough anyway.

In truth, your best results would probably come from not bothering with Proxmox at all and just running Docker containers natively on a Debian server, as your entire load can be Docker-driven.

I also noticed that your payload storage (2×16 TB) is larger than your PBS store; I am guessing you don't intend to back up the entire payload?
 
Please note that a dedicated OS mirror is helpful but not always needed. It can also be a valid strategy to have everything on a combined OS+VM mirror. You won't have much storage for VMs on a 120/240 GB mirror though ;) While it's nice to separate OS and VMs (since it might save you some time on restore), it's not needed if you back up your host configuration too and have recent VM/LXC backups to restore from. Obviously you should test said restores from time to time. If you want to save some money, I also don't see any problem in continuing to use one of your existing SSDs in combined mirrors with enterprise SSDs. Obviously you won't get the same durability, IOPS and performance as with PLP-only mirrors, but it will save you some money for the moment ;) And you can always replace a failed SSD at some later point.
Fair, but I think I will stick with a dedicated OS mirror anyway. Not for availability, I can live with downtime, but because it keeps reinstalls cheap and isolates PVE log churn from VM I/O. The cost delta for two small used enterprise SSDs is low enough that I would rather pay it.

Depends on the workload. If your workloads don't actually write a lot to the files but need to access their metadata a lot (e.g. last modification date), they are definitely worth it.
Makes sense. Since the HDD mirror will be pure bulk data (media, Samba shares, archives) and not VM storage, I will skip the special vdev for now. Metadata latency on a Jellyfin library is not something I will notice, and the silent fill-up-and-spill-back failure mode sounds like exactly what I do not want to debug later. Easy to add afterwards if the workload changes.

But if you only want to start it up for backups to save on energy costs, it won't do much (since the RAM and thus the cache is lost with every shutdown/reboot).
Good point, I had not thought about the cold-cache angle.

I just looked up some shops for used server hardware and eBay; used server SSDs with PLP go for around 60 Euro for the smaller models and 100-200 for models with 1 TB or more. But I didn't invest much time just to answer your question, better look for yourself ;) These prices match what I paid (I might have paid a little less, but not much).
Matches what I am seeing. My shortlist ended up being S4510/PM883 for the OS mirror (read intensive is fine, mostly logs) and S4610/SM883/Micron 5300 MAX for the VM mirror (mixed use, around 3 DWPD, which is where ZFS write amplification actually bites).

Thanks again for the detailed input.

You are asking a lot from a relatively modest host. It's doable, but you'll need to temper your "performance" expectations. On that subject-
Fair point on paper, but thinking through the actual usage pattern it's pretty light. It's me, my girlfriend and her mother on Immich, and the expensive ML jobs only really run when new photos come in, idle the rest of the time. Jellyfin is mostly me, at home, sometimes my GF, so direct play with no transcoding; the iGPU is really just there for the occasional remote stream. Nextcloud sees maybe a handful of files a week. SABnzbd is spiky but only when something's actually downloading, and the arr stack is basically cron jobs hitting APIs. The concurrent worst case, everyone importing photos while I'm remote-transcoding while SAB is repairing a big release - just isn't going to happen in a 3-person household. At least that's my assessment. Am I wrong?

So the plan is: watch RAM pressure via monitoring once it's running, and if Immich or anything else actually starts complaining, grab another 32 GB kit or replace it with 2x32 Dimms. Hardware's already here apart from the enterprise SSDs I'm still sourcing, so CPU/RAM is what it is at this point - does that match your experience, or do you see a specific bottleneck I'm underestimating?
That will multiply the "performance expectations issue" substantially. In addition, doing a passthrough of an iGPU to a VM is not "saving pain"; it is orders of magnitude more difficult than passing it to an LXC. VMs make the most sense if you'll be deploying a cluster with live failover, which wouldn't work for the one VM that has the iGPU passthrough anyway.

In truth, your best results would probably come from not bothering with Proxmox at all and just running Docker containers natively on a Debian server, as your entire load can be Docker-driven.
This is the part I'm most torn on. The iGPU-to-LXC argument I fully buy - bind-mounting /dev/dri is obviously simpler than fiddling with VM passthrough on a 14th gen iGPU. But the mainstream advice I keep seeing is "Docker in VMs" for isolation, clean snapshots, and avoiding the rough edges with Docker's networking and cgroups inside LXC. Could you expand a bit on how you'd actually lay it out?
  • Would you put the media stack (Jellyfin + Immich, needing the iGPU) in an LXC with Docker and keep the others as VMs, or push everything to LXC?
  • Any long-term gotchas running Docker inside LXC (upgrades, networking, storage drivers)?
  • On the "just run Debian on the metal" suggestion: wouldn't I lose the ability to do PBS-level snapshot backups of individual workloads, which was a big reason I wanted Proxmox in the first place?

I also noticed that your payload storage (2×16 TB) is larger than your PBS store; I am guessing you don't intend to back up the entire payload?
Correct, not backing up the full 16 TB. The precious stuff is maybe 5 TB (Nextcloud, Immich originals, Paperless, Perforce depots, personal files), plus the VM/LXC root disks which are small and live on the SSD mirror anyway. Media is out of scope, ZFS mirror is its redundancy, and I have an 8 TB external USB HDD for a file-level copy of the subset I'd actually miss. Everything Sonarr/Radarr can re-fetch doesn't get backed up at all.

But your reply made me question whether I even need PBS for this. If I'm running Docker in LXCs (or on bare Debian like you suggested), the main things PBS buys me are dirty-bitmap incremental VM backups and granular file restore from VM images. With ~4 small VMs at most, neither feels like it's worth running a second machine 24/7 for.

So I'm now considering dropping PBS entirely and going with:
  • vzdump from the PVE host for VMs/LXCs, written to a local dataset
  • Restic/borg on the PVE host backing up /tank/{precious datasets} plus the vzdump output to the USB HDD
  • Optiplex repurposed, maybe for offsite copies
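In script form, the first two bullets would look roughly like this (storage ID, dataset names and repo path are placeholders):

```shell
# Sketch: dump all guests to a local vzdump storage, then push the
# precious datasets plus the dumps to the USB disk with restic.
vzdump --all 1 --mode snapshot --compress zstd --storage local-dumps

restic -r /mnt/usb-backup/restic backup \
    /tank/nextcloud /tank/immich /var/lib/vz/dump
```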

Questions:
  1. Does this match how you'd actually run it, or do you still see value in PBS at this scale?
  2. If I skip PBS, is there anything vzdump does badly that I should know about before committing? I've seen complaints about it reading full VM disks every run, but on a small SSD mirror with ~150 GB of VM disks total, that seems fine.
  3. Any preference between Restic, Borg and Kopia for the file-level side? I don't have strong feelings, just want something boring and reliable.
Thanks again.
 
  1. Does this match how you'd actually run it, or do you still see value in PBS at this scale?
  2. If I skip PBS, is there anything vzdump does badly that I should know about before committing? I've seen complaints about it reading full VM disks every run, but on a small SSD mirror with ~150 GB of VM disks total, that seems fine.
  3. Any preference between Restic, Borg and Kopia for the file-level side? I don't have strong feelings, just want something boring and reliable.
Before answering the questions (it's really just the one), I encourage you to really consider the implications of an all-Docker workload. Docker images are immutable and live in a repo; the only things you MIGHT want to keep a copy of are the docker-compose and container config, and those should really live in a repo as well. Everything else is ephemeral. There is really no point in backing up containers, only payload.

Depending on your host filesystem, how you're differentiating between for-backup and not-for-backup datasets, and the destination filesystem, ZFS snapshot/send is likely the absolute quickest and most efficient method. For basic file-based backups, probably any of the tools you mentioned would work just fine, as well as a host of others, or just rsync; that's what I mostly use anyway.
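A minimal sketch of the snapshot/send approach (pool, dataset and snapshot names are placeholders; the USB disk would carry its own ZFS pool):

```shell
# snapshot the dataset, then replicate incrementally to the USB pool
zfs snapshot tank/precious@$(date +%F)
zfs send -i tank/precious@last-run tank/precious@$(date +%F) \
    | zfs recv usbpool/precious
```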
 
This is the part I'm most torn on. The iGPU-to-LXC argument I fully buy - bind-mounting /dev/dri is obviously simpler than fiddling with VM passthrough on a 14th gen iGPU. But the mainstream advice I keep seeing is "Docker in VMs" for isolation, clean snapshots, and avoiding the rough edges with Docker's networking and cgroups inside LXC. Could you expand a bit on how you'd actually lay it out?
When you say "mainstream advice", it's good to note the background, reasoning, and authority of whoever you are quoting. In an enterprise environment with security policies, or on a dirty (shared) hypervisor, the advice certainly applies, but this is a private homelab; I wholeheartedly recommend you read and understand the security and performance implications, and then decide which trumps for you. I simply cut to the chase ;)

With regards to snapshots, they wouldn't really have anything to do with Docker; see the above post. You'd snapshot the dataset, not the container.

As for networking rough edges, I have an answer and a question in return. The question: what rough edges? And the answer: so don't use it. Docker works fine with network: host.

As for idmap management (that's what I assume you mean; if not, explain what you mean by "cgroups inside lxc"), it's a bit of a learning curve, but if you have basic reading comprehension it makes sense and isn't particularly challenging. I'll be honest though, I would not want to do this for an officeful of UIDs and would use a VM instead. You and your girlfriend is very manageable, assuming you even care to have separate users; but coming back full circle, you wouldn't have any of this admin overhead running Docker on metal.
 
Depending on your host filesystem, how you're differentiating between for-backup and not-for-backup datasets, and the destination filesystem, ZFS snapshot/send is likely the absolute quickest and most efficient method. For basic file-based backups, probably any of the tools you mentioned would work just fine, as well as a host of others, or just rsync; that's what I mostly use anyway.

That is what can be done with different virtual disks on different mount points, marking the non-relevant partition as not included in the backup. The value depends on how you set up the environment: by hand vs. (container) registries and git-versioned install scripts.

For a backup solution, I still recommend the Proxmox Backup Server. It can verify the integrity of your backup chunks.

As mentioned before, you can install it alongside PVE on the same host (https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-server-on-proxmox-ve) as long as you have enough disks for a first and a second copy.
The third copy can then be made from the second via sync to the external USB device.
From my experience in environments with 2 PB of backup data per month, media and copies will fail from time to time. Therefore it is useful to have multiple archive versions available. With most homelab solutions, additional technologies and configuration are required to achieve that.
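On the CLI side, the first and second copy boil down to two datastores (names and paths are placeholders); the sync job between them can then be created in the GUI or with `proxmox-backup-manager sync-job create`, whose flags vary by version:

```shell
# main datastore plus a second one on the removable disk
proxmox-backup-manager datastore create main /backup/main
proxmox-backup-manager datastore create usb-copy /mnt/usb/pbs-store
```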

@Johannes S: Maybe I was a little unspecific on the ZFS storage technology, sorry. :)
It wasn't strict advice against using ZFS without ECC for labbing; it was more to point in the direction of being aware that there are differing opinions before investing into one of the technologies, similar to RAID5/6 vs. RAID0/1 with BTRFS. Thank you for specifying the details.

@halfdan1988
In case you use dedicated disks with a parallel PVE+PBS installation, you should maybe consider using a simple FS like ext4 or xfs underneath. While talking about the ssh/liblzma vulnerability, a colleague made me aware of technology- and configuration-based risks.
In terms of storage technology this means: with a misconfiguration or a broken update, there are good chances of losing both data copies to the same bug. Very unlikely with stable technologies, but not zero. A PBS with two syncing datastores based on a storage technology different from the main one should basically work as well as using the main storage technology with RAID1.

Docker in LXC should work if you disable AppArmor. It is not a setup that is tested within the regular product spec, nor respected during further development; with the network: host directive it might at least be halfway safe against issues with routing + firewalls.
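If you go down that road anyway, the usual knobs live in the container's config file on the PVE host; a sketch, where the VMID and the decision to run unconfined are assumptions you should weigh yourself:

```
# /etc/pve/lxc/101.conf (VMID 101 is hypothetical)
# 'nesting' and 'keyctl' are commonly needed for Docker inside LXC
features: keyctl=1,nesting=1
# Disabling AppArmor confinement, as discussed above - a security trade-off
lxc.apparmor.profile: unconfined
```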

Therefore you need to find a sweet spot for the amount of effort you want to maintain in the long term.
One thing I like most about PVE: it is mostly a full-blown Debian with the Ubuntu kernel, so the possibilities for customisation are very broad and easy to implement; useful for the case that you end up in a situation where you need to maintain things yourself.

For this reason I also like the OCI image import feature a lot. It is an officially maintained feature, so I don't expect to have to maintain it on my own in the long run.
The only catch with applications like Immich is that the instructions point to Docker Compose, which would require building the configuration manually. If you use the OCI images, putting config + data on dedicated virtual disks is very recommendable to keep the easy upgrade path of containerd-based (docker/podman/nerdctl) solutions.
If you want to dig a little deeper into the compatibility layer, you might read through the blog post I wrote before the official support was added to PVE. The technology underneath didn't change :) (https://www.cloudandheat.com/nutzun...mages-als-linux-containern-lxc-in-proxmox-ve/)

Concerning PVE vs. Docker: from my perspective, PVE provides the ability to scale the technology with modest effort once it becomes necessary, e.g. by running software in VMs. Of course you can also run KVM processes in Docker containers on bare metal, but that is not documented as well as PVE-based solutions. The second point is that the infrastructure will very likely become essential for your family once it is set up and providing convenience. So uptime is important, and you should use a technology you feel familiar with.
KISS and flexibility in design will save you pain. ... And with an upcoming tax declaration deadline, suddenly fixing the printer/scanner Paperless-ng backend will become very urgent. :D

Concerning enterprise hardware: I would prefer enterprise-grade hardware.
If you are on a tight budget, a UPS can provide a mechanism similar to a disk's power-loss protection feature.
It won't protect you from a faulty power supply,
but it might protect you from high voltage, e.g. from lightning strikes (I lost a tower to one of those).
Standalone surge protection is available for ~20€.

Tested enterprise hardware is usually fine. The semiconductors usually perform well for years, as long as they are not constantly run overheated. From colleagues who worked on and designed water-cooled systems, I learned that the usual temperature specs of servers are much higher than the temperatures many companies run their datacenters at.
Still, cooling as well as noise might become an issue depending on where you run your system and how much performance it has.
So you should be aware of that, depending on the abilities of your home.
For that reason I only run an integrated Atom CPU on a Supermicro server board.

BR, Lucas
 
That is what can be done with different virtual disks on different mount points and marking the non relevant partition as not included in the backup.
In the OP's case, many if not all of the containers are accessing the same data, which would be passed as a mount point. It wouldn't really make sense to do that, since there's nothing to be gained by backing up the container itself.
 
:) I think we have the same thing in mind. I will try to explain my intention in more detail.

Backing up VMs and LXCs running Docker, or LXCs created from OCI images,
have in common that the applications can be recreated with little effort from the images,
as long as the specific configuration and data are available.
In a file-based backup system that accesses files from within the guest (e.g. proxmox-backup-client, rsync, Borg, etc.), config files and data directories can be selected directly for inclusion in the backup.
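As an illustration of that file-based approach, with made-up repository, datastore, and path names:

```shell
# Back up only the config and data directories of an app stack
# ('homelab' datastore on host 192.168.1.10, paths are placeholders)
proxmox-backup-client backup \
    config.pxar:/opt/appstack/config \
    data.pxar:/opt/appstack/data \
    --repository root@pam@192.168.1.10:homelab
```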

PBS instead accesses the PVE at the hypervisor level, by backing up the storage areas that represent the whole disk of a guest.
So to include only the specific config and data directories required, they can be placed on dedicated virtual disks, mounted on the paths containing application data and config. That way, not all disks of a VM need to be pulled into a backup.
On the one hand this can save some space; on the other hand, with deduplication and manually set-up guests, the benefits might be questionable.
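In PVE terms, that layout boils down to per-disk backup flags; a sketch with a hypothetical VMID 101 and made-up storage/disk names:

```shell
# Add a small dedicated 32G disk (on storage 'local-zfs') for app config + data
qm set 101 --scsi1 local-zfs:32

# Exclude the bulk-media disk from vzdump/PBS backups via backup=0
qm set 101 --scsi2 local-zfs:vm-101-disk-2,backup=0
```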

BR, Lucas
 
In a file-based backup system that accesses files from within the guest (e.g. proxmox-backup-client, rsync, Borg, etc.), config files and data directories can be selected directly for inclusion in the backup.
This is true, but it neglects that we're living in 2026.

```shell
git clone my_container
cd my_container
docker compose up -d
```
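That workflow assumes the whole service definition lives in git; a minimal sketch of such a compose file (image, port, and paths are assumptions, using linkding from the OP's list as an example):

```yaml
# docker-compose.yml tracked in git; persistent data sits in a bind mount
# that is backed up separately (zfs send, rsync, ...)
services:
  linkding:
    image: sissbruecker/linkding:latest
    ports:
      - "9090:9090"
    volumes:
      - ./data:/etc/linkding/data
    restart: unless-stopped
```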
PBS instead accesses the PVE at the hypervisor level, by backing up the storage areas that represent the whole disk of a guest.
In context, the guest disks contain no relevant data and are pointless to back up. When you have multiple guests accessing the same data, it doesn't really make sense to use a container/VM to master it when the host is perfectly capable without any overhead. Yes, you can use PBS to back up a dataset on the hypervisor, but I don't know what utility that has over zfs send, especially given that the OP's use case primarily revolves around incompressible media files.

Docker obviates a lot of what makes hypervisors necessary in the first place. In a homelab environment, hypervisors serve no real utility for an all-Docker workload.
 