Seeking advice: New storage upgrade configuration (host zpool vs. guest NAS via passthrough)

mkaicher

Member
May 23, 2020
Hello all,

I have a single-node server running ~12 CTs and 4 VMs (Linux & Windows), with an SSD ZFS mirror for the root drive and CT/VM images and an HDD RAIDZ pool for media, backups, and other storage. I rely on bind-mounts for my containers, but I also host SMB and NFS shares directly from the host for my VMs and other LAN devices (which I understand is not a best practice). Anyway, I'm planning to replace my RAIDZ pool with a much larger RAIDZ2 pool in the coming days. I had planned to keep the same setup, but now I'm debating whether I should pass my SATA controller through to a dedicated NAS guest (probably TrueNAS). On one hand, I'm afraid of losing the convenience of bind-mounts, especially considering the issues around NFS/SMB in unprivileged CTs. One thought would be to move my LXC services to a VM-based Docker host. On the other hand, I really like the cleanliness of a dedicated NAS guest... and of course the features and the convenience of a web GUI.
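For reference, the bind-mounts are just plain ZFS datasets mapped into the CT configs, roughly along these lines (the CT ID and dataset paths here are only examples):

    # /etc/pve/lxc/101.conf (example container, example paths)
    mp0: /tank/media,mp=/mnt/media
    mp1: /tank/backups,mp=/mnt/backups

    # equivalently set from the CLI:
    # pct set 101 -mp0 /tank/media,mp=/mnt/media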

I guess I'm looking for opinions from anyone who has experience with one or preferably both methods. My pool is very large so it's not something I could easily change down the road. Any advice is welcome and appreciated!
 
Using SMB/NFS shares with unprivileged LXCs is annoying but not impossible. I mount the NFS/SMB shares from the NAS on the PVE host and then use bind-mounts with user/group remapping to bring the mountpoints of these network shares into the LXC.
This works fine (as long as you don't need to migrate the LXCs between hosts), but of course you still only get NFS/SMB performance. Also, restoring such an LXC from a backup to a fresh PVE host would leave you with a broken LXC unless you first edit the new host's fstab so that it also mounts the NFS/SMB shares.
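A minimal sketch of that approach, assuming an NFS export from the NAS and a single unprivileged user (uid/gid 1000) inside the CT; the hostname, paths and IDs below are only examples:

    # On the PVE host: mount the NAS export (e.g. via /etc/fstab)
    nas.lan:/mnt/tank/media  /mnt/pve/nas-media  nfs  defaults,_netdev  0 0

    # /etc/pve/lxc/101.conf: bind-mount the host mountpoint into the CT
    mp0: /mnt/pve/nas-media,mp=/mnt/media

    # Map uid/gid 1000 inside the CT straight to 1000 on the host,
    # keep the rest of the range at the usual 100000 offset
    lxc.idmap: u 0 100000 1000
    lxc.idmap: g 0 100000 1000
    lxc.idmap: u 1000 1000 1
    lxc.idmap: g 1000 1000 1
    lxc.idmap: u 1001 101001 64535
    lxc.idmap: g 1001 101001 64535

    # The host must also allow root to map that ID:
    # add "root:1000:1" to both /etc/subuid and /etc/subgid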

Also keep in mind that you are not limited by your NIC's performance when using NFS/SMB with a local NAS VM. VirtIO NIC throughput is limited by your CPU performance instead. Let's say your CPU can handle 20 Gbit/s but you only have a physical gigabit NIC: the NAS VM could still communicate with the LXCs or other VMs at 20 Gbit/s, so NFS/SMB isn't that slow, but of course you still get far more overhead than with plain bind-mounts.
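If you want to see what the virtio path can actually do on your hardware, an iperf3 run between the NAS VM and another guest on the same bridge gives a quick number (the IP is just an example):

    # On the NAS VM
    iperf3 -s

    # On another VM/CT attached to the same Linux bridge
    iperf3 -c 192.168.1.50 -t 30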
 
I appreciate the reply. I had considered that route but wasn't 100% sure of performance implications, so I've learned something! I suppose it's worth playing around with before doing the full restore from backup. Just feels like a clunky solution to get a "clean" host. Thanks again!
 
....I rely on bind-mounts to my containers but also host SMB and NFS shares directly from the host for my VMs and other LAN devices (which I understand is not a best practice).
Nonsense, it may not suit everyone or all circumstances, but it's an approach as valid as any other. Have you heard of hyperconverged platforms?
 
Are you saying my current method of serving NFS/SMB from the host is valid? It is the simplest option in my case.

I have played around with Proxmox clusters and live migration, if that's what you're asking. I just really don't have the need or suitable hardware. Assuming I did, I'd probably look into Ceph or set up a bare-metal TrueNAS. But with only one node, I think I'm just fishing for opinions from anyone who's gone the NAS VM route.
 
Quite valid.

Of course there are arguments for (simplicity, efficiency) and against (modification of the host environment, more complex CLI-based administration), but I've never seen any of the Proxmox staff advise against running a system this way. I've had my systems set up both ways in the past, but now I run NFS and SMB directly on the host, as that works best for me.
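If it helps, serving directly from the host is just the stock Debian tooling; roughly something like this (the subnet, paths, share name and user are only examples):

    # /etc/exports (nfs-kernel-server): export a ZFS dataset to the LAN
    /tank/media  192.168.1.0/24(rw,sync,no_subtree_check)

    # /etc/samba/smb.conf: matching SMB share
    [media]
        path = /tank/media
        browseable = yes
        read only = no
        valid users = youruser

    # apply the changes:
    # exportfs -ra
    # systemctl reload smbd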

On the other hand, many people do run TrueNAS as a VM quite happily and they get the GUI management and the isolation of function. Each to their own.
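For the VM route, the passthrough side itself is straightforward once IOMMU is enabled on the host (e.g. intel_iommu=on on the kernel command line for Intel CPUs, and the controller in its own IOMMU group); the PCI address and VMID below are only examples:

    # Find the SATA controller's PCI address
    lspci -nn | grep -i sata

    # Hand the whole controller to the TrueNAS VM (VMID 100 as an example)
    qm set 100 -hostpci0 0000:03:00.0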

For clusters, as you rightly say, you would need to think about Ceph or dedicated storage systems.
 
I appreciate your input and have decided to stick with the original setup. As I'm sure everyone on this forum knows, it's hard not to second-guess your system when making a major hardware upgrade.

Maybe we'll get some NAS functionality in future versions of PVE. Unlikely, I know....
 
