best way to create a NAS with proxmox?

Boerny41

New Member
Aug 19, 2022
Hi,

I recently purchased hardware to set up a new Proxmox server. I used to run a TrueNAS VM inside Proxmox, but without an HBA. This time I bought one, but I'm really not a fan of it: it draws 11 W with no hard drive attached, it gets hot, and it lacks a temperature sensor. If the zip-tied fan fails one day, the card will just destroy itself or actually start to smolder.

Now I'm thinking of other solutions to get my network share up and running without using the HBA.


What I want is:
  • secure storage with high data integrity
  • automatic scrubs and SMART tests
  • Sync from mobile device to the storage
  • an easy way to access the data on the disks even if Proxmox fails (therefore no hardware RAID)
  • Email notifications when disks go bad
  • Ability to expand the storage
  • SMB share

Thinking about it, I came up with a few possibilities, but I lack the knowledge to determine which is best.
  1. Connect the drives to Proxmox, create a ZFS pool, install Samba on Proxmox and share the ZFS.
  2. Again, ZFS pool in Proxmox, create a vDisk with almost the full pool size, give it to some VM and create the SMB share there.
  3. Use a NAS VM that is OK with directly passed-through disks and doesn't need an HBA the way TrueNAS does.
 
I'm trying to do this as well, but almost everything I read says not to pass disks through (to TrueNAS).

I had thought I'd be able to create a ZPool and assign that to a VM as a virtual drive - then the VM would partition and format from there?

I'm also looking at running SMB on proxmox, just to remove that virtual layer.

I was previously running Debian with the 4 drives in RAID 5, plus SMB. I'm moving from an old PC to a PowerEdge T430, so I was looking for new solutions. I'm also running a DE for browsing and TV/YouTube - also not advised, but I can't waste that console!

Would also be interested in any advice on this..
 
I had thought I'd be able to create a ZPool and assign that to a VM as a virtual drive - then the VM would partition and format from there?
That's an option. But ZFS has considerable overhead, and when you stack it the overhead multiplies rather than adds. So it's not a good idea to run ZFS on top of ZFS, and ZFS is what TrueNAS needs.
So I would either pass through the whole HBA card (or alternatively individual disks, if PCI passthrough isn't an option) and run only a single layer of ZFS at the guest level. Or I would run ZFS on the PVE host, skip TrueNAS, and hand a virtual disk formatted with something simpler like ext4/xfs to a NAS VM that allows simpler filesystems (like OMV).
Or skip the whole idea of a NAS VM and set up your ZFS + SMB/NFS server directly on the PVE host (but then managed via the CLI, as PVE doesn't offer any NAS features).
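A rough sketch of that last host-side route could look like this - pool name "tank", dataset "share", the disk IDs and the user name are all just placeholders here:

  # create a raidz1 pool from the raw disks (use the stable /dev/disk/by-id names)
  zpool create -o ashift=12 tank raidz1 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3
  zfs create tank/share
  apt install samba
  smbpasswd -a youruser      # give an existing Linux user an SMB password

plus a minimal share block in /etc/samba/smb.conf, followed by a "systemctl restart smbd":

  [share]
     path = /tank/share
     read only = no

Debian's zfsutils-linux package already ships a monthly scrub cron job, and ZED can be configured to mail you about pool events, which should cover the scrub and notification points from the first post.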
 
Or skip the whole idea of a NAS VM and set up your ZFS + SMB/NFS server directly on the PVE host (but then managed via the CLI, as PVE doesn't offer any NAS features).
Doing ZFS + SMB in an LXC container is also an option. The data is then bind-mounted.
 
Thanks for the reply!

I was forgetting that TrueNAS is basically ZFS, which of course we don't want to run twice. I think your OMV suggestion is what I was thinking of, because then Proxmox would be looking after the ZFS and the VM would just see a disk. The host would also be managing the memory for ZFS directly.

If I pass the HBA through, would the ZFS pools that TrueNAS creates be importable by other systems, should the need arise?

Also, I have an existing ZFS pool - would passing the HBA to the TrueNAS VM destroy it, or would I be able to import it?

I will have a little look at OMV in the meantime - thanks for the suggestions!

Oh, one last thought: would my RAIDZ be any more or less resilient running in a VM rather than on the host? I'm guessing no difference, as you're passing the disks or the PCIe controller directly to the VM?
 
If I pass the HBA through, would the ZFS pools that TrueNAS creates be importable by other systems, should the need arise?
Yes, passing through an HBA is like TrueNAS running on bare metal, directly accessing the real physical disks. Any OS with ZFS support could import the pool.
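On the other system that would just be the normal import workflow, e.g. (pool name "tank" is only an example):

  zpool import          # scans the attached disks and lists importable pools
  zpool import -f tank  # imports it; -f may be needed if the pool wasn't exported cleanly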

Also, I have an existing ZFS pool - would passing the HBA to the TrueNAS VM destroy it, or would I be able to import it?
I'm not sure how TrueNAS handles ZFS pools it didn't create. Best to ask that in their community forum.

Oh, one last thought: would my RAIDZ be any more or less resilient running in a VM rather than on the host? I'm guessing no difference, as you're passing the disks or the PCIe controller directly to the VM?
When passing through individual disks, they will show up as virtual disks inside TrueNAS. The only way for TrueNAS to access the real physical disks is PCI passthrough of a whole HBA. So yes, passthrough of individual disks would add another abstraction layer that could cause problems...
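For reference, that individual-disk passthrough is the usual "qm set" mapping, for example (VM ID 100 and the disk ID are placeholders):

  qm set 100 -scsi1 /dev/disk/by-id/ata-MODEL_SERIAL

The guest then only sees a QEMU virtual disk backed by that drive, so TrueNAS doesn't get SMART data or the real drive identity - that's the extra abstraction layer meant above.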
 
Doing ZFS + SMB in an LXC container is also an option. The data is then bind-mounted.
Interesting, so run ZFS in the container, not on PVE? I haven't used containers at all yet, but I'm keen to for running AI toolchains with GPU passthrough; I just want to get my general server done first. It's non-critical backups of my desktop and a place for my downloads - apps, ISOs, videos - stuff I can cope with losing...
 
The only way for TrueNAS to access the real physical disks is PCI passthrough of a whole HBA. So yes, passthrough of individual disks would add another abstraction layer that could cause problems...
 
Interesting, so run ZFS in the container, not on PVE?
No, ZFS on the PVE host, and then bind-mount folders or dataset mountpoints from the PVE host into the LXC so you can share them there via an SMB server. But I guess things like shadow copies for SMB won't work then...
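As a rough sketch - container ID 101 and the pool/dataset/paths are just examples:

  zfs create tank/share                         # dataset mounted at /tank/share on the host
  pct set 101 -mp0 /tank/share,mp=/srv/share    # bind-mount it into container 101

Inside the container you would then install Samba and share /srv/share as usual.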
 
No, ZFS on the PVE host, and then bind-mount folders or dataset mountpoints from the PVE host into the LXC so you can share them there via an SMB server. But I guess things like shadow copies for SMB won't work then...
Ah, I get you now! That would be a pretty lightweight way to go. Added to my reading list!
 
So yeah - after much reading I've been playing with containers, and it really looks like a solid way to go. I can't see any disadvantage to sharing my main pool via a bind mount into the container, with Webmin or Cockpit to manage it and the shares. It takes SO little memory too!

If I were to run TrueNAS SCALE and pass through my HBA, I wouldn't be able to use my other 3 disks connected to the HBA with Proxmox, and it would use more resources!

This solution is excellent for a home lab like mine - thanks for the constructive help - I'm sure I'll be back for more!

Edit: Already a question! I can't unmount these drives from Proxmox, so how can I stop them being used by Proxmox or anything else?
 
I'm trying to do this as well, but almost everything I read says not to pass disks through (to TrueNAS).

I had thought I'd be able to create a ZPool and assign that to a VM as a virtual drive - then the VM would partition and format from there?

I'm also looking at running SMB on proxmox, just to remove that virtual layer.

I was previously running Debian with the 4 drives in RAID 5, plus SMB. I'm moving from an old PC to a PowerEdge T430, so I was looking for new solutions. I'm also running a DE for browsing and TV/YouTube - also not advised, but I can't waste that console!

Would also be interested in any advice on this..
I ended up creating a separate ZFS pool in Proxmox and using an unprivileged container for the SMB share. (It's apparently not recommended to run the SMB share on Proxmox itself.)

Haven't had any problems in months. Would recommend.
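One detail worth adding for the unprivileged container: with the default ID mapping, UID/GID 0 inside the container corresponds to 100000 on the host, so the bind-mounted data has to be owned by the shifted IDs before the Samba user in the container can write to it. As a rough example, assuming the share user inside the container has UID/GID 1000 and the host path is /tank/share:

  chown -R 101000:101000 /tank/share    # run on the PVE host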
 
I ended up creating a separate ZFS pool in Proxmox and using an unprivileged container for the SMB share. (It's apparently not recommended to run the SMB share on Proxmox itself.)

Haven't had any problems in months. Would recommend.
I'm setting it up now - Cockpit is saying
Raise network interfaces - Failed to Start
Yet networking seems to be working in the container...

Just getting my head around SMB users and permissions...
 
In some cases that isn't possible. For example, I use a DL360 Gen9 at home, but this hardware has problems with PCIe passthrough. So is the only solution to use a virtual disk? That would lose a lot of performance and also be hard to migrate. Unlike just moving a dedicated disk to other NAS hardware.
 
So is the only solution to use a virtual disk? That would lose a lot of performance and also be hard to migrate.
Welcome to this virtualization forum ;)
A virtual disk can be migrated online, so it is the opposite of hard (compared to physically moving a disk).
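For example, moving a VM's disk to another storage while the VM keeps running is a single command - VM ID, disk name and target storage below are placeholders, and newer PVE versions also accept "qm disk move":

  qm move_disk 100 scsi0 target-storage --delete 1    # --delete 1 removes the old copy after the move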

If you want to solve "the NAS problem" with PCIe or disk passthrough, maybe you should not run virtualization.
 
