Proxmox as host, or TrueNas as Host?

Obednal1

New Member
Feb 14, 2023
Hi all, I'm considering combining my home tech into a single machine acting as Router/NAS/Server, with storage that can be expanded later. Do you have any suggestions or input on how best to do this? Here are my thoughts so far:
Parts List: https://uk.pcpartpicker.com/list/W2LpbL
In short:
  • 8-core 3.8GHz CPU (on-board graphics)
  • 32GB RAM
  • 2x 500GB SSD (500GB total with RAID1)
  • 4x 4TB HDD (8TB total with RAID10)
I'm thinking of running Proxmox as a host and TrueNAS as a VM (with HDD passthrough). Also planning on virtualising OpenWRT (with NIC passthrough).
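From what I've read, the passthrough side would look roughly like this on the Proxmox host (the VM IDs, disk names and PCI address below are just placeholders, not my real hardware):

  # attach the four HDDs to the TrueNAS VM (ID 100 here) via their stable /dev/disk/by-id paths
  qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_HDD_1
  qm set 100 -scsi2 /dev/disk/by-id/ata-EXAMPLE_HDD_2
  qm set 100 -scsi3 /dev/disk/by-id/ata-EXAMPLE_HDD_3
  qm set 100 -scsi4 /dev/disk/by-id/ata-EXAMPLE_HDD_4

  # hand a physical NIC (here at 0000:03:00.0) to the OpenWRT VM (ID 101) for the WAN side
  qm set 101 -hostpci0 0000:03:00.0

Please correct me if that's the wrong way to go about it.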
List of Guests:
  • TrueNAS (12GB)
  • OpenWRT (1GB)
  • Plex (3GB)
  • Homepage (512MB)
  • Docker (8GB)
  • HAOS (4GB)
  • Atlas (Off until needed) (3.5GB)
  • Kali (Off until needed) (3.5GB)
  • Lubuntu (Off until needed) (3.5GB)
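The always-on guests above add up to 28.5GB (12 + 1 + 3 + 0.5 + 8 + 4) of the 32GB, so there should be a few GB left for the Proxmox host as long as the three off-until-needed VMs (another 10.5GB) aren't all started at once.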
Proxmox would have a RAID1 config with the two 500GB SSDs; this would hold Proxmox itself, VM/CT backups, templates and ISOs. TrueNAS would have a RAID10 config with the HDDs; this would hold media, VM volumes, CT volumes and a ZFS network share.
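If I've understood the ZFS side correctly, the RAID10 pool is effectively a stripe of two mirrors; TrueNAS would build it through its GUI, but under the hood it amounts to something like this (disk names are placeholders again):

  # RAID10 equivalent in ZFS: a stripe of two 2-way mirrors, ~8TB usable from 4x 4TB
  zpool create tank \
    mirror /dev/disk/by-id/ata-EXAMPLE_HDD_1 /dev/disk/by-id/ata-EXAMPLE_HDD_2 \
    mirror /dev/disk/by-id/ata-EXAMPLE_HDD_3 /dev/disk/by-id/ata-EXAMPLE_HDD_4

and I assume the Proxmox installer would take care of the ZFS RAID1 mirror on the two SSDs.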

After reading around I've seen others using TrueNAS as the host and virtualising Proxmox on top of it to then run everything under that. I'm not really sure about best practice here, so what would you advise?
As a bit of background, I only got into home servers, virtualisation etc. at the beginning of 2023, so I'm still very new to the scene but eager to learn!
 
For ZFS (the software RAID that TrueNAS and PVE use) you usually don't want QLC or consumer SSDs, and those NV2 drives are both. Proper SSDs would be something like a Samsung PM9A3/PM983 or a Micron 7400 PRO/7450 PRO/7400 MAX.
I'm thinking of running Proxmox as a host and TrueNAS as a VM (with HDD passthrough).
Then I would get an LSI HBA card and use PCIe passthrough. Disk passthrough still goes through the virtualization layer; the only way for a VM to directly access the real physical disks is to PCI passthrough a whole disk controller (which a B550 chipset mainboard isn't great for).
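As a rough sketch of what the PCIe passthrough side looks like on PVE (the PCI address and VM ID are only examples, yours will differ):

  # check that the IOMMU is active and how devices are grouped
  dmesg | grep -e DMAR -e IOMMU
  find /sys/kernel/iommu_groups/ -type l

  # pass the whole HBA (here at 0000:01:00.0) to the TrueNAS VM with ID 100
  qm set 100 -hostpci0 0000:01:00.0

The HBA needs to end up in its own IOMMU group for this to work cleanly, which is exactly where consumer chipset boards tend to make life difficult.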

After reading around I've seen others using TrueNAS as the host and virtualising Proxmox on top of it to then run everything under that. I'm not really sure about best practice here, so what would you advise?
I don't see the point. If you primarily want a NAS but also want to run some VMs, just use bare-metal TrueNAS SCALE, which can do both. Use PVE if you primarily want a hypervisor; PVE doesn't come with any NAS functionality, so that would have to be added in a VM, with additional overhead if you want a GUI.
 
This is great feedback, thank you so much for taking the time to share!