Architectural advice for a self-hosted setup with 3 mini PCs and a NAS

Sebuturn
Mar 2, 2025
Hello,

I'm a bit of a noob in the Proxmox world; I've dived into it quite recently and I'm trying to wrap my head around the optimal architecture for my self-hosted home setup (while having a hard time with everything being new).
My (foolish?) objective would be to have a Proxmox cluster that would allow me to run some HA VMs (for Docker (or k8s/k3s, but I haven't dived into that yet), databases, and several other VMs).

The architecture I had in my head was to have the local drives of the mini PCs handle the VMs and keep my persistent data on the NAS. But I started searching for the best way to get HA for the VMs and ran into a lot of different views on how to do it (ZFS, Ceph, LVM, a shared NAS, etc.), and now I have to admit I'm completely lost. A bit of help would be much appreciated :)

So, the hardware I have:

- 3 mini PCs, each with:
  - AMD Ryzen 7 6800H
  - 32 GB DDR5 RAM
  - 512 GB M.2 NVMe SSD
  - 1 × 2.5 Gb NIC (and a switch with three 2.5 Gb ports)
  - 1 × 1 Gb NIC

- Another old mini PC running Linux on bare metal that I use for my DMZ reverse proxy (and which I'm considering adding to the cluster, moving my network segmentation in there as well)

- A Synology NAS DS923+ with 3 × 6 TB HDDs in RAID 5, connected via a bonded 2 × 1 Gb link.

From what I gathered, the easiest setup would be to have all the data on my NAS, but then I wouldn't be using the 3 × 512 GB NVMe disks at all (and I'm saddened by that), and I wonder if there is something to gain performance-wise on that side (currently they are simply set up as three local LVMs).


Thank you for your guidance :)
 
From what I gathered, the easiest setup would be to have all the data on my NAS,
Yeah. But a remote RAID 5 on rotating rust is a very bad choice for running VMs from it. Use the NAS for what it is meant to be used for: storing "normal" data, not actively running VMs.

Go for ZFS instead. It will get you much, much better performance than NFS via that NAS. And "ZFS Replication" will allow High Availability; see the sketch below.

With that single NVMe you can't establish redundancy, like ZFS mirrors. That's not good - and personally I would not want to go there. But ZFS on a single drive is still better than having none of ZFS's important features.

Some more details: https://forum.proxmox.com/threads/f...y-a-few-disks-should-i-use-zfs-at-all.160037/
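
For a concrete picture, the moving parts look roughly like this (just a sketch: the pool name "tank", the disk path, the target node "pve2", and VM ID 100 are placeholders, not your actual values):

Code:
# on each node: create a single-disk pool (no redundancy, as noted above)
zpool create -o ashift=12 tank /dev/disk/by-id/<your-nvme-id>
# register it as storage; the storage ID must be identical on all nodes
pvesm add zfspool local-zfs --pool tank --sparse 1
# replicate VM 100's disks to node pve2 every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule '*/15'
# let the HA stack restart the VM on another node if this one dies
ha-manager add vm:100 --state started

Keep in mind that replication is asynchronous: a failover can lose the changes made since the last sync interval. That's the trade-off versus real shared storage like Ceph.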
 
OK, I'll take a deep dive into ZFS to learn exactly how it works :)

I'll add another M.2 disk in each mini PC; I have a second free slot.
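
For reference, once the second disk is in, a single-disk ZFS pool can be grown into a mirror in place (a sketch; "tank" and the device IDs are placeholders):

Code:
# attach the new disk to the existing one; the pool becomes a two-way mirror
zpool attach tank /dev/disk/by-id/<existing-nvme> /dev/disk/by-id/<new-nvme>
# watch the resilver until it completes
zpool status tank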
 
optimal architecture for my self-hosted home setup.
This doesn't really mean anything. A "self-hosted home setup" has no metrics to design optimally for.

My (foolish?) objective would be to have a Proxmox cluster that would allow me to run some HA VMs (for Docker (or k8s/k3s, but I haven't dived into that yet), databases, and several other VMs).
Any host will do that. Just calculate how much RAM/CPU/storage resources you need and deploy accordingly.
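
For instance, with purely made-up numbers (your own VM list goes here):

Code:
10 VMs x 2 GB RAM          = 20 GB
+ Proxmox host + ZFS ARC   ~ 4-8 GB
=> fits in one 32 GB node, but for HA also check N-1:
   the VMs of a failed node must fit on the two survivors.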

From what I gathered, the easiest setup would be to have all the data on my NAS
agreed.

Yeah. But a remote RAID 5 on rotating rust is a very bad choice for running VMs from it.
"very bad" may just be fine for op's usecase. see first point. Op just needs to understand what the downsides are, but beyond that most "home" applications don't produce or consume enough iops for the raid itself to matter. Which lead to

and I wonder if there is something to gain performance-wise on that side (currently they are simply set up as three local LVMs).
Why is this important? If your applications aren't using it, what difference does it make?

Were I you, I'd define what I'm trying to accomplish in real terms (e.g. IOPS, fault tolerance, capacity) and go from there. Absent any meaningful metrics, the rest of the conversation is meaningless too.
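
If you want a real IOPS number instead of guessing, a quick fio run against the NAS mount gives you one (a sketch; the file path is a placeholder for wherever your NFS storage is mounted):

Code:
# 4k random read/write mix for 60 s against a test file on the NAS
fio --name=randrw --filename=/mnt/pve/nas/testfile --size=1G \
    --rw=randrw --bs=4k --iodepth=16 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting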
 