Help with storage strategy for simple home network

thool

New Member
Feb 18, 2022
I'm looking for some guidance on setting up Proxmox to support my home network, specifically the storage side of the equation.

My Proxmox installation is on a Supermicro: 1 TB NVMe, 64 GB RAM, 16 x Intel Xeon CPU D-1541 @ 2.10GHz (1 socket).

My current "home server" is a Dell Vostro 220s Series, Intel Pentium Dual E2200, 2 GB RAM, Ubuntu 18.04.6. It runs miniDLNA, Splunk, Samba, and NextPVR, and serves up all my media from 2 spinning-rust drives pooled with mergerfs. Horrible.

I'd like to create 3 VMs, all housed on the NVMe, likely 2 CPUs and 4 GB RAM each:
File server: Samba, NFS share; this would serve all my media to the other VMs and to clients via Samba
Media server: NextPVR, miniDLNA; this would read/write off the file server and feed my Kodi/VLC clients
Admin server: Splunk and my speedtest monitor

The biggest question is storage. I'm looking at a single WD Red for media, only accessible to the File server VM. Eventually, I'd like to bring in 1 more HDD and use mergerfs to present the two as a single "disk" to be used by the Media server only. What is the best way to introduce the disk to Proxmox and have the File server utilize it? I see NFS as a native Proxmox storage option, but don't know if that is the right solution here. (Not ready for RAID at this time, still learning.)
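For context, the kind of mergerfs pooling I have in mind is just a single mount, roughly like this (disk paths and mount point are placeholders, not my real ones):

# pool two data disks into one mount point with mergerfs (paths are placeholders)
mergerfs -o defaults,allow_other,category.create=mfs /mnt/disk1:/mnt/disk2 /mnt/media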

Thank you!
 
You could run your SMB/NFS server inside an LXC and then use bind-mounts to bring a folder from your host into the LXC. As the host and the LXC share the same hardware and kernel, there would be no virtualization or protocol overhead between host and LXC.
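For example, something along these lines (the container ID and paths are just placeholders for illustration):

# bind-mount a directory from the PVE host into container 101 at /mnt/media
pct set 101 -mp0 /tank/media,mp=/mnt/media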

Or, if you want to use a VM, you could use qm set to pass a physical disk from your host into the VM as a virtual disk. But there you still get virtualization and protocol overhead, as the VM will only ever see a virtual disk (so things like SMART won't work from inside the VM).
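That would look roughly like this (VM ID and the disk's by-id name are placeholders, use the real /dev/disk/by-id path of your WD Red):

# attach the whole physical disk to VM 100 as an additional SCSI disk
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-XXXXXXXX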

A third option would be to use PCI passthrough to pass an HBA, with disks attached to it, into a VM. This is a real physical passthrough: the VM can directly access the real physical disks, so there is no additional overhead and no additional abstraction layer. But your mainboard/BIOS/CPU/HBA needs to support this. You could test whether your onboard SATA controller gets its own IOMMU group and can be passed through.
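To check that, something like this on the PVE host should tell you (standard commands, adjust the grep to your controller):

# verify that the IOMMU is active and list the IOMMU groups
dmesg | grep -e DMAR -e IOMMU
find /sys/kernel/iommu_groups/ -type l
# find the PCI address of the onboard SATA controller
lspci -nn | grep -i sata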

A fourth option would be to just store your stuff on virtual disks. This might be useful in case you are planning to get additional nodes, because that way you could migrate your NAS VM/LXC between nodes.
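Adding another virtual disk to an existing VM from the CLI would be something like this (VM ID, storage name and size are placeholders):

# allocate a new 200 GiB volume on the 'local-lvm' storage and attach it to VM 100 as scsi2
qm set 100 -scsi2 local-lvm:200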

You should also think about how you want to back up your data, as this might differ depending on how you set up your storage.
 
@Dunuin

Third option: I am curious about that. Are there any links/references regarding that?

I managed to get option 1 going for me (LXC container as NFS, TFTP, and SMB server), and it is currently mounting the root of my ZFS pool. From inside the container I manage everything else (via the command line). I had other plans for the file sharing, but now I am wondering if I should have just passed through all the disks in my ZFS pool and had a VM manage it.
 
@Dunuin

Third option: I am curious about that. Are there any links/references regarding that?
You can follow the PCI passthrough tutorial: https://pve.proxmox.com/wiki/Pci_passthrough
Its primary focus is on GPUs, but it's not that different for HBAs; you just don't need to blacklist GPU drivers and so on.
But keep in mind that it will pass through the complete HBA (disk controller) from the host to the VM, with all of its ports. So if you only have one disk controller, your host won't have any SATA/SAS ports left. In your case that should be fine, as PVE is installed to an NVMe drive, which is not managed by the HBA.
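Once the controller sits in its own IOMMU group, attaching it to the VM is one command (VM ID and PCI address are placeholders, look yours up with lspci):

# pass the disk controller at PCI address 0000:00:17.0 through to VM 100
qm set 100 -hostpci0 0000:00:17.0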
 
Thanks, still so much to learn and I appreciate all the help.

To clarify a couple of concepts: if I'm setting up a new VM to act as a file server, presumably selecting the default "local-lvm" storage will store the VM and its virtual HDD on my NVMe. Later on, if I add a physical SSD to my Proxmox server, how would I introduce that new physical resource to the VM I had created?
 
