Homelab build confirmation

cart3l
New Member · Feb 4, 2025
Hi all

I am struggling to decide on a new homelab server. I need GPU passthrough, a firewall with 2.5 Gbit/s throughput, and a couple of Windows and Linux VMs, and I want the build to be power efficient.

I am considering two builds, one AMD-based and one Intel-based.



MB: AsRock B650D4U-2L2T/BCM
CPU: Ryzen 9 7900
RAM: 64GB
PCI: ASUS Hyper M.2 X16 Card
M.2: 5x Samsung 990 Pro 1TB

According to my research, iGPU passthrough to VMs does not work properly with AMD CPU iGPUs, so I would most likely need to add a dedicated GPU, which draws extra power.
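If I do end up with a dedicated GPU, my understanding is that passthrough on Proxmox boils down to enabling the IOMMU and handing the card to VFIO, roughly like this (the VM ID and PCI address below are placeholders for illustration, not from my actual hardware):

```shell
# /etc/default/grub -- enable the IOMMU ("iommu=pt" keeps host devices
# on identity mapping; on AMD the IOMMU is usually on by default):
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
update-grub

# /etc/modules -- load the VFIO modules at boot:
#   vfio
#   vfio_iommu_type1
#   vfio_pci

# After a reboot, verify the IOMMU is active and inspect the groups:
dmesg | grep -e DMAR -e IOMMU
find /sys/kernel/iommu_groups/ -type l

# Find the GPU's PCI address and attach it to a VM (VM ID 100 and
# address 01:00 are examples; use what lspci reports on your host):
lspci -nn | grep -i vga
qm set 100 -hostpci0 0000:01:00,pcie=1
```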

The Intel based build:



MB: EC262D4U-2L2T or W680D4U-2L2T/G5 (expensive)
CPU: Intel Core i9-14900T
RAM: 64GB
PCI: ASUS Hyper M.2 X16 Card
M.2: 5x Samsung 990 Pro 1TB


On this build, GPU passthrough should not be an issue. I do not know whether the C262 chipset has drawbacks compared to the W680. The W680-based mainboard is really expensive, and I would like to avoid that.

Now my question: which build would you prefer, and why? And is there any information on AMD iGPU passthrough? I guess all of these boards support the required bifurcation.

Thanks for your inputs!
 
I recently built a new home server, and I wanted to model it on the HL-8 that 45 HomeLabs offers. I used the following:
Gigabyte B550I Aorus Pro AX
AMD Ryzen 5 Pro 5650-GE
Nemix RAM 2X32GB DDR4 3200 ECC Unbuffered
M.2 to SATA 6 Port Adapter Card, ASM1166
Noctua NH-L12Sx77
Corsair RM Series RM650
Fractal Design Node 304
10Gtek 10Gb PCI-E NIC Network Card
10Gtek 10G SFP+ DAC Cable

I am running Proxmox as my main OS and have virtualized TrueNAS Scale as my NAS software. I have a total of 10 SATA ports: 4 on the motherboard that are dedicated to Proxmox, and 6 on the ASM1166 M.2-to-SATA adapter. The SATA adapter is passed through to the TrueNAS VM. I currently have 8 drives installed, all enterprise SSDs. I am running 5 VMs, 2 CTs and 15 or so Docker apps (depending on how you count things like Redis, MariaDB, etc.).

I went with a socket AM4 motherboard because ECC memory is so much cheaper for this platform. I was also able to find a new Ryzen 5 Pro 35-watt CPU on eBay. All in, this machine runs at about 40 watts on average.

I don't do anything with Plex or Emby, so GPU passthrough is a non-issue for me. I mostly run WordPress, Nextcloud, Joplin, Bitwarden, Home Assistant, Joomla and Photoprism. I have been totally impressed with this build.

I personally wouldn't go for a Ryzen 9 or a Core i9 CPU if you only have 64 GB of RAM. You will run out of RAM long before you run out of CPU cores. Proxmox can oversubscribe CPU cores, but the same is not really true for RAM. Honestly, I would double your RAM and look for a more modest CPU.
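A quick way to see how fast 64 GB disappears is to sum the memory assigned to your VMs. The `qm list` output below is made-up sample data so the pipeline can be shown end to end; on a real host you would pipe the real command in instead:

```shell
# Sum the RAM committed to all VMs and compare with host RAM.
# qm_list stands in for `qm list` with invented sample VMs:
qm_list() {
cat <<'EOF'
      VMID NAME        STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 firewall    running    4096       32.00        1201
       101 win11       running    16384      128.00       1305
       102 truenas     running    16384      64.00        1422
       103 dev         running    8192       64.00        1533
EOF
}
qm_list | awk 'NR>1 {sum+=$4} END {printf "committed: %d MB\n", sum}'
```

With 64 GB (65536 MB) of host RAM, those four example VMs already commit 45056 MB, leaving little headroom for ZFS ARC, containers and the host itself.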

Why the interest in the ASUS Hyper M.2 X16 card? I had one in my refurbished HP Z640 server and it worked great, but honestly I am not sure it is worth the cost or brings any appreciable benefit. I never noticed much of a performance difference between VMs on an NVMe drive versus a SATA SSD. I am sure there is a difference, but from the standpoint of the end user it wasn't noticeable to me. That could be because I store all my data on NFS shares on one of my NAS machines; not sure.
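If you ever want to check whether the NVMe-vs-SATA difference matters for your workload, a short fio run against each drive will tell you. The parameters here are just a starting point and the test file path is a placeholder:

```shell
# Random 4k mixed read/write for 30 s against a 1 GiB test file:
fio --name=randrw --filename=/mnt/testdrive/fio.test --size=1G \
    --rw=randrw --bs=4k --iodepth=32 --ioengine=libaio \
    --runtime=30 --time_based --group_reporting
```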

I am also not a huge fan of virtualizing my firewall. I prefer to run that on its own hardware. This way if I bring down the server for some reason, my wife and kids will not complain about the internet being down.
 
I recently built a new home server, and I wanted to model it on the HL-8 that 45 HomeLabs offers. I used the following:
Gigabyte B550I Aorus Pro AX
AMD Ryzen 5 Pro 5650-GE
Nemix RAM 2X32GB DDR4 3200 ECC Unbuffered
M.2 to SATA 6 Port Adapter Card, ASM1166
Noctua NH-L12Sx77
Corsair RM Series RM650
Fractal Design Node 304
10Gtek 10Gb PCI-E NIC Network Card
10Gtek 10G SFP+ DAC Cable
Thanks for your answer - a nice build! Missing IPMI though?

I am running Proxmox as my main OS and have virtualized TrueNAS Scale as my NAS software. I have a total of 10 SATA ports: 4 on the motherboard that are dedicated to Proxmox, and 6 on the ASM1166 M.2-to-SATA adapter. The SATA adapter is passed through to the TrueNAS VM. I currently have 8 drives installed, all enterprise SSDs. I am running 5 VMs, 2 CTs and 15 or so Docker apps (depending on how you count things like Redis, MariaDB, etc.). I went with a socket AM4 motherboard because ECC memory is so much cheaper for this platform. I was also able to find a new Ryzen 5 Pro 35-watt CPU on eBay. All in, this machine runs at about 40 watts on average. I don't do anything with Plex or Emby, so GPU passthrough is a non-issue for me. I mostly run WordPress, Nextcloud, Joplin, Bitwarden, Home Assistant, Joomla and Photoprism. I have been totally impressed with this build.
What SSDs are you using? Generally, your use case is quite close to mine. I have a dedicated NAS.

The 40 watts is quite impressive with that number of drives!
I personally wouldn't go for a Ryzen 9 or a Core i9 CPU if you only have 64 GB of RAM. You will run out of RAM long before you run out of CPU cores. Proxmox can oversubscribe CPU cores, but the same is not really true for RAM. Honestly, I would double your RAM and look for a more modest CPU.
This is just the start, and already double what I have today. The system I am looking at supports a total of 128GB. The fact that I ran with 32 GB for the last 10 years tells me I will be fine with that amount.
Why the interest in the ASUS Hyper M.2 X16 card? I had one in my refurbished HP Z640 server and it worked great, but honestly I am not sure it is worth the cost or brings any appreciable benefit. I never noticed much of a performance difference between VMs on an NVMe drive versus a SATA SSD. I am sure there is a difference, but from the standpoint of the end user it wasn't noticeable to me. That could be because I store all my data on NFS shares on one of my NAS machines; not sure.
I actually never tried or benchmarked anything, so I can't tell whether SATA SSDs are "much" slower than the NVMe versions. I am totally fine with a different SSD setup; it just needs to be somewhat affordable.
I am also not a huge fan of virtualizing my firewall. I prefer to run that on its own hardware. This way if I bring down the server for some reason, my wife and kids will not complain about the internet being down.
I have a different approach: after I have confirmed that the internet is not working, she knows where to plug in the network cable from the router and which power button to press. This boots up a small router with DHCP etc. and fixes the problem ;)
 
Thanks for your answer - a nice build! Missing IPMI though?
I don't know. I don't miss it.

What SSD are you using? Generally your use-case is quite close to mine. I have a dedicated NAS.
Mostly Samsung SM863a drives from eBay. If you are diligent, you can find them for around $45-50/TB.
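For a quick sanity check of that range (the $90 price for a used 1.92 TB SM863a below is an assumption for illustration, not a quote):

```shell
# Price per TB for an assumed $90, 1.92 TB drive:
awk 'BEGIN { printf "%.1f USD/TB\n", 90 / 1.92 }'
```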

I have a different approach: after I have confirmed that the internet is not working, she knows where to plug in the network cable from the router and which power button to press. This boots up a small router with DHCP etc. and fixes the problem ;)
Oh no. Interruptions are not tolerated well in my home. LOL. Plus, I like the fact that my network is completely separate from my main NAS (Synology) as well as my various Proxmox hosts. I have 4 of those: my main host; an unused host I am trying to decide what to do with; a backup host that stores local copies of all my backup data and runs my Ansible instance to automatically update all my VMs and Proxmox machines daily; and a "sandbox" or test environment where I can feel free to break things and experiment. I can shut off all my servers and my wife can still access the Synology or the network. And all my TVs, Ring cameras, Ring alarm system, smart appliances, etc. still work even if a server crashes.
 
I don't know. I don't miss it.


Mostly Samsung SM863a drives from eBay. If you are diligent, you can find them for around $45-50/TB.
Thanks, I'll have a look!
Oh no. Interruptions are not tolerated well in my home. LOL. Plus, I like the fact that my network is completely separate from my main NAS (Synology) as well as my various Proxmox hosts. I have 4 of those: my main host; an unused host I am trying to decide what to do with; a backup host that stores local copies of all my backup data and runs my Ansible instance to automatically update all my VMs and Proxmox machines daily; and a "sandbox" or test environment where I can feel free to break things and experiment. I can shut off all my servers and my wife can still access the Synology or the network. And all my TVs, Ring cameras, Ring alarm system, smart appliances, etc. still work even if a server crashes.
Alright, that setup is something else :P I am glad that my SLA is not that high, haha!
 
Alright, that setup is something else :P I am glad that my SLA is not that high, haha!
Not really. I use one of these for my firewall/router: https://www.amazon.com/dp/B0BZJB9KX5?th=1

I run pfSense and it draws about 12 watts. Internet comes in via a cable modem that plugs into the pfSense box. The pfSense box is then connected to a managed switch, and everything else is hub-and-spoke off the switch. This allows me to easily segment my network with VLANs. My wireless access point (https://www.amazon.com/dp/B0BGJJWPWC) can create up to 8 SSIDs, each of which can be assigned to a single VLAN. Not bad for $80. So when my daughter logs onto Wi-Fi, she cannot access anything in my homelab. Likewise, my internet-facing services are in their own VLAN and completely isolated. Some of the ports on my switch are tagged to individual VLANs, and some are trunk ports that feed my VLAN-aware devices, like my Proxmox machines.
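For the trunk ports into the Proxmox machines, the VLAN-aware bridge is just a few lines of /etc/network/interfaces. The interface name and VID range here are examples; adjust them to your NIC and VLAN plan:

```shell
# /etc/network/interfaces on a Proxmox host behind a trunk port
auto vmbr0
iface vmbr0 inet manual
    bridge-ports enp1s0        # the NIC plugged into the trunk port
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes      # let guests tag their own VLANs
    bridge-vids 2-4094         # VIDs allowed on the bridge
```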

I love that all of this works regardless of whether my servers are up, down or sideways. I think virtualizing pfSense is just too many eggs in one basket for my taste.
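And as a footnote on the power draw, 12 watts around the clock costs very little. The $0.30/kWh rate below is an assumption; plug in your own tariff:

```shell
# 12 W continuous draw -> kWh per year and cost at an assumed rate:
awk -v w=12 -v rate=0.30 'BEGIN {
  kwh = w * 24 * 365 / 1000           # kilowatt-hours per year
  printf "%.2f kWh/yr, %.2f USD/yr\n", kwh, kwh * rate
}'
```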
 
Not really. I use one of these for my firewall/router: https://www.amazon.com/dp/B0BZJB9KX5?th=1

I run pfSense and it draws about 12 watts. Internet comes in via a cable modem that plugs into the pfSense box. The pfSense box is then connected to a managed switch, and everything else is hub-and-spoke off the switch. This allows me to easily segment my network with VLANs. My wireless access point (https://www.amazon.com/dp/B0BGJJWPWC) can create up to 8 SSIDs, each of which can be assigned to a single VLAN. Not bad for $80. So when my daughter logs onto Wi-Fi, she cannot access anything in my homelab. Likewise, my internet-facing services are in their own VLAN and completely isolated. Some of the ports on my switch are tagged to individual VLANs, and some are trunk ports that feed my VLAN-aware devices, like my Proxmox machines.

I love that all of this works regardless of whether my servers are up, down or sideways. I think virtualizing pfSense is just too many eggs in one basket for my taste.
Nice setup! I will have a look at the devices!