ZFS File Share on Proxmox

minorsatellite

New Member
Nov 25, 2023
I am new to Proxmox (coming from vSphere) and slowly clawing my way through the documentation while trying to wrap my mind around some of the unique design/architectural challenges of the product. I was drawn to it primarily due to its deep integration with ZFS.

I now have my first VM running, a Windows Server VM, and am ready to move on to the second piece, where I need to provide some type of network storage for the server VM. Early on I made some ill-informed assumptions, call it wishful thinking, that I would be able to host a datastore directly on the Proxmox host (sans a guest OS) and share it out over SMB, but that is looking increasingly like a long shot. More specifically, I was hoping to create a file share directly on the same ZFS storage where my VM zvols currently reside and make it available to any of my VMs. Having gone more deeply into the product, it's pretty clear by now that no such tooling for this functionality exists, logically, of course, because it is a hypervisor platform first. Makes sense!

Currently I only have a single pool, but even if I had that +1, it seems the only way to do this is to create another VM to serve as my network storage, where I will need one virtual disk for the guest OS, and one for the ZFS file system. I don't think there would be any advantage to creating multiple virtual disks to mimic top-level VDEV devices for my virtual ZPOOL, since I have a 3-way mirror backing the storage already? Thoughts?
If anyone has any better ideas or alternatives to what I've described, I would welcome your thoughts.

Thanks in advance.
 
I now have my first VM running, a Windows Server VM, and am ready to move on to the second piece, where I need to provide some type of network storage for the server VM
Proxmox is not NAS software like TrueNAS, so you won't find a nice web GUI for creating an SMB share.
At the same time, Proxmox is just Debian, so you can use any Debian tutorial to create a file share.
For example this.
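To make that concrete: since the host is plain Debian, a minimal Samba setup might look something like the sketch below. The share name and dataset path are placeholders, not anything Proxmox-specific.

```shell
# Install Samba on the Proxmox host (it is plain Debian underneath)
apt update && apt install -y samba

# Append a share definition to smb.conf (example values, adjust to taste)
cat >> /etc/samba/smb.conf <<'EOF'
[vmshare]
   path = /tank/share
   browseable = yes
   read only = no
   valid users = @smbusers
EOF

# Validate the config, then restart the service
testparm -s
systemctl restart smbd
```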

You also can, though I would strongly advise against it, create a TrueNAS VM and pass disks through to that VM.

Really depends on your needs.
 
You could do a turnkey LXC file server, or if you want to be a bit adventurous, install Webmin (runs on port 10000) on the host; either one will give you a GUI to set up Samba shares.
 
That's more or less the perspective I was looking for, thank you. I suppose I was too close to the problem to see the obvious, which is the course of action you've suggested.

My Debian experience has flowed mostly through ZoL via Ubuntu so this seems like a pretty trivial solution.
 
You could do a turnkey LXC file server, or if you want to be a bit adventurous, install Webmin (runs on port 10000) on the host; either one will give you a GUI to set up Samba shares.
Interesting, would this approach allow me to utilize a manually created datastore (ZFS filesystem) on the primary pool?
 
With ZFS, you could just set ' sharesmb=on ' on the dataset and it should show up as a share
Cool!

I've used "sharenfs" in the past, but never tried SMB. It is worth noting that you need to apt install samba first. Without it I got this error:
Code:
~# zfs set "sharesmb=on" rpool/shared/smb-temp
cannot share 'rpool/shared/smb-temp: system error': SMB share creation failed

:-)
 
Once samba is running, you don't need to reboot or anything - just make sure the smbd and nmbd services are Started and Enabled (so they survive a reboot) and issue: ' zfs share -a '
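Putting the pieces from this thread together, the whole host-side flow is roughly the following (the dataset name is just the example used above):

```shell
# Samba must be installed first, or 'zfs set sharesmb=on' fails as shown above
apt install -y samba

# Make sure the services are started now and come back after a reboot
systemctl enable --now smbd nmbd

# Flag the dataset for SMB sharing, then (re)share everything so flagged
zfs set sharesmb=on rpool/shared/smb-temp
zfs share -a

# Verify the property took effect
zfs get sharesmb rpool/shared/smb-temp
```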
 
And also look into the shadow copy feature of ZFS datasets and Samba. GREAT for Windows clients to access previous ZFS snapshots directly.
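For reference, the usual way to wire ZFS snapshots into Windows "Previous Versions" is Samba's shadow_copy2 VFS module. A sketch of a share section might look like this; the share name and path are placeholders, and the `shadow:format` string must match whatever names your snapshot tool actually produces (here assumed to be zfs-auto-snapshot's daily snapshots):

```ini
[vmshare]
   path = /tank/share
   vfs objects = shadow_copy2
   ; ZFS exposes snapshots under the dataset's hidden .zfs/snapshot directory
   shadow:snapdir = .zfs/snapshot
   shadow:sort = desc
   ; Must match your snapshot naming, e.g. zfs-auto-snapshot_daily-2025-01-11-2325
   shadow:format = zfs-auto-snapshot_daily-%Y-%m-%d-%H%M
```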
 
@Kingneutron Attempting to implement your suggestion, but have hit a few speed bumps.

I have my filesystem created (with sharesmb=on), but the Create Directory tool (at the Datacenter node) is forcing me to select a content type that is not at all applicable to my use case, e.g. Disk Image, ISO image, Container template, etc., which results in the creation of a directory below the root file share/parent directory. When attempting to create the directory at the host level, it wants unallocated raw storage devices and only offers xfs or ext4 as a filesystem. I am not entirely sure why I need this or what any of this would ultimately accomplish.
 
Last edited:
Now that I have my Windows Server guest OS (VM) configured, it requires network storage to mount so that users connecting to the Windows application server can access application data (via a mapped drive on the application server).

For this I have created a dedicated ZFS filesystem on the PVE host to share out via Samba. I have the Proxmox node joined to the domain. The Samba server is configured as a Domain Member Server, with the usual share parameters/permissions configured for general shared storage. Am I leaving anything out?
 
@Kingneutron Attempting to implement your suggestion, but have hit a few speed bumps.

I have my filesystem created (with sharesmb=on), but the Create Directory tool (at the Datacenter node) is forcing me to select a content type that is not at all applicable to my use case, e.g. Disk Image, ISO image, Container template, etc., which results in the creation of a directory below the root file share/parent directory. When attempting to create the directory at the host level, it wants unallocated raw storage devices and only offers xfs or ext4 as a filesystem. I am not entirely sure why I need this or what any of this would ultimately accomplish.
Yes, this happens when you define something as Storage in Proxmox. If you're sharing Samba at the host level, the PVE GUI doesn't need to know about it unless you want to store proxmox-specific stuff there (ISOs, templates, imports, vdisks, etc)
 
Now that I have my Windows Server guest OS (VM) configured, it requires network storage to mount so that users connecting to the Windows application server can access application data (via a mapped drive on the application server).

For this I have created a dedicated ZFS filesystem on the PVE host to share out via Samba. I have the Proxmox node joined to the domain. The Samba server is configured as a Domain Member Server, with the usual share parameters/permissions configured for general shared storage. Am I leaving anything out?
As long as you can write to the share, I can't think of anything at the moment. Just make sure nobody outside of your LAN can access it (like the internet at large)
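For comparison, a domain-member [global] section typically looks something like the sketch below; the realm, workgroup, and idmap ranges are placeholder values you would replace with your own:

```ini
[global]
   security = ads
   realm = EXAMPLE.LOCAL
   workgroup = EXAMPLE
   ; Map domain users/groups to Unix IDs via winbind
   idmap config * : backend = tdb
   idmap config * : range = 3000-7999
   idmap config EXAMPLE : backend = rid
   idmap config EXAMPLE : range = 10000-999999
   winbind use default domain = yes
```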
 
As long as you can write to the share, I can't think of anything at the moment. Just make sure nobody outside of your LAN can access it (like the internet at large)
Even after joining the host to the domain, it was still necessary for me to manually create the krb5.conf file; oddly, that does not get created during the join. Optional packages needed to be installed (all of the Winbind stuff), nsswitch.conf configured, and even after all of that, my winbind calls fail, except for
Code:
wbinfo -t
. Still unable to enumerate users/groups. :mad:
 
I have it working now. Binding the Proxmox Data Center to AD realm was no substitute for binding the actual host to the domain so joining the host to the domain resolved the issue and wbinfo -t/u enumerates correctly and SSO to file share works as expected. Yay!
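For anyone following along, the host-level join and the checks described above look roughly like this (the account name is a placeholder, and nsswitch.conf must already list `winbind` for passwd/group):

```shell
# Join the host itself to AD; the Datacenter realm binding only covers PVE GUI logins
net ads join -U Administrator

# Winbind sanity checks: trust secret, then user and group enumeration
wbinfo -t
wbinfo -u
wbinfo -g

# getent should also resolve domain accounts once nsswitch.conf is set up
getent passwd
```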

I'm currently configuring Samba via smb.conf file but it's been suggested by @Kingneutron to use Webmin or an LXC container. I am familiar with Webmin, but to be honest I am not a huge fan.

The other suggestion is to use a container to manage the Samba share. What exactly would that look like?
 
The other suggestion is to use a container to manage the Samba share. What exactly would that look like?
Take a look here: https://github.com/bashclub/zamba-lxc-toolbox

Their "zmb-*" containers build exactly what _I_ wanted - a Windows share with direct snapshot access via Windows-Explorer/Previous versions.

(( I have connected it to an Univention UCS AD --> your mileage may vary. ))
 
@udo Thank you. I took your advice. I chose the zmb-member container. I am not sure if the build actually completed because it seemed to struggle resolving one or more Debian repos; I'm thinking there might be DNS/routing issues with the build environment. I can't really check, since the Proxmox console clears after you click away from it, which is super annoying.

Also, it built with a predefined user account but I have no idea what the credentials are. Any thoughts?
 

Attachments

  • Screenshot 2025-01-11 at 23.29.48.png (44.7 KB)
Also, it built with a predefined user account but I have no idea what the credentials are. Any thoughts?
My personal installation has happened some years ago, so I am not sure if all my knowledge is valid for the current scripts.

That said:

1) You can enter any container from the PVE host via pct enter <yourcontainerid>. Without knowing the container-internal password :-)

2) You probably copied zamba.conf.example, right? In there you can find
Code:
# Defines the `root` password of your LXC container. Please use 'single quatation marks' to avoid unexpected behaviour.
LXC_PWD='Start!123'
 
I actually found it in the ReadMe file, thanks.

I will read through rest of it to see exactly how I can put this to use. The container is using the default vmbr network interface but for some reason the networking on the CT is not working out of the gate, not sure why.
 
Thanks everyone for your support.

I have my zmb-member container working after several failed attempts and it's almost ready for production, but now come the newbie questions.

Since none of the ZFS utilities or the kernel module are a component of the container, I am a little unsure exactly what ZFS has to do with this project. Please enlighten me. :)

The install.sh script did create a perfectly working, AD-integrated Samba share at /tank/share on top of my underlying ZFS filesystem, but as a disk image it has no hooks into the host filesystem itself, unless I am mistaken. Given that, how is snapshotting supposed to work without exposing the host filesystem to the container?

There are few details on how to fully implement this, such as host filesystem integration and the actual snapshotting mechanism, but logic holds that all of that has to occur on the host itself, where I would install a host snapshotting tool of my choice (such as zfs-auto-snapshot) and then expose the host filesystem using something like this: pct set nnn -mp0 /tank/share,mp=/container/mount/point.
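That host-side wiring might be sketched like this, using the thread's example path and an assumed container ID of 100 (both are placeholders):

```shell
# Host-side snapshots of the dataset backing the share
apt install -y zfs-auto-snapshot   # cron-driven; the default schedule needs no config

# Bind-mount the host dataset into the container (CT 100 is an example ID)
pct set 100 -mp0 /tank/share,mp=/tank/share

# Inside the container the snapshots should then be reachable read-only via the
# dataset's hidden .zfs directory, which shadow_copy2 can expose to Windows clients
pct exec 100 -- ls /tank/share/.zfs/snapshot
```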

Thanks in advance.
 
Last edited: