ZFS File Share on Proxmox

Since none of the ZFS utilities or the kernel module are a component of the container, I am a little unsure exactly what ZFS has to do with this project. Please enlighten me. :)
I hesitate to write a lengthy reply - my knowledge is too limited to count as a reliable reference...
The install.sh script did create a perfectly working, AD integrated Samba share at /tank/share on top of my underlying ZFS filesystem, but as a disk image
It isn't a disk image, it is a ZFS dataset. (Which is different from a ZVOL.)
it has no hooks into the host filesystem itself, unless I am mistaken. Given that, how is snapshotting supposed to work without exposing the host filesystem to the container?

There are few details on how to fully implement this, such as host filesystem integration and the actual snapshotting mechanism, but logic holds that all of that has to occur on the host itself, where I would install a host snapshotting tool of my choice (such as zfs-auto-snapshot) and then expose the host filesystem using something like this: pct set nnn -mp0 /tank/share,mp=/container/mount/point.
You are correct, snapshotting can only happen on the host as it has ultimate access to the data sets (and the pool). Inside of the container there are no ZFS tools/commands available. And yes, any "normal" tool should be fine.
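
To make that concrete, here is a rough sketch of the host-side steps (the CT ID, dataset and mount path are only examples; adjust them to your layout):
Code:
# everything below runs on the Proxmox host, not inside the container
apt install zfs-auto-snapshot                 # periodic snapshots via cron
zfs list -t snapshot -r tank/share | head     # check that snapshots start to appear
pct set 100 -mp0 /tank/share,mp=/srv/share    # bind-mount the dataset into CT 100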

But please note that the naming convention must match the line "shadow: snapprefix = ^zfs-auto-snap_\(frequent\)\{0,1\}\(hourly\)\{0,1\}\(daily\)\{0,1\}\(monthly\)\{0,1\}" inside the smb.conf. There are some more important configuration values set there. Try to understand each and every line.
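
For orientation, that snapprefix belongs to Samba's vfs_shadow_copy2 module. A minimal sketch of such a block (share name, path and the format/localtime values are illustrative; the toolbox's own smb.conf is the reference):
Code:
[share]
    path = /srv/share
    vfs objects = shadow_copy2
    shadow: snapdir = .zfs/snapshot
    shadow: sort = desc
    shadow: snapprefix = ^zfs-auto-snap_\(frequent\)\{0,1\}\(hourly\)\{0,1\}\(daily\)\{0,1\}\(monthly\)\{0,1\}
    shadow: format = -%Y-%m-%d-%H%M
    shadow: localtime = yes
The format string has to match the timestamp part of the snapshot names, just like the snapprefix has to match whatever prefix your snapshot tool uses.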

Generally a ZFS snapshot is not visible at all but is nevertheless accessible via mydataset/.zfs/snapshot. (The "." makes it "hidden", as usual in Linux.) Knowing this allows me to access those snapshots inside a share named "temp" (which is implemented as a separate ZFS dataset on the host) this way:
Code:
..../temp/.zfs# ls -Al snapshot/ | head -n 4
total 0
drwxrwxrwx 1 root root 0 Jan 16 18:04 auto-d-250116180402
drwxrwxrwx 1 root root 0 Jan 17 08:04 auto-d-250117080402
drwxrwxrwx 1 root root 0 Jan 17 18:04 auto-d-250117180402
Inside each directory I can see each and every file that was present at the respective point in time. Of course this is read-only.

In Windows those directories/files should appear as "Previous Versions" in File Explorer. (I had used this successfully two years ago, but now it does not work for me anymore as something broke. Windows is irrelevant for me, so it is just not important.)

Best regards :-)
 
Thanks @udo. I will read through your reply in depth a little later.

I guess what I am confused about is the dataset vs. disk image question. I will accept your opinion/notion on this given you likely have more experience than I do, but my direct experience suggests the opposite.

Here is what I see when I perform a zfs list:
Code:
NAME                   USED  AVAIL  REFER  MOUNTPOINT
tank/base-101-disk-0  3.10M   630G    96K  -
tank/base-101-disk-1  6.07M   630G    64K  -
tank/base-101-disk-2   160G   782G  7.65G  -
tank/containers        707M   630G   699M  /tank/containers

Assuming that local-zfs storage maps to containers (which it does), shouldn't I expect to see the zmb-member container as a child dataset of tank/containers?

On the other hand, when I perform an ls -l under the containers mount point, this is what I see:

Code:
root@pve:/tank/containers# ls -l /tank/containers/images/201/
total 715753
-rw-r----- 1 root root  34359738368 Jan 17 12:50 vm-201-disk-0.raw
-rw-r----- 1 root root 107374182400 Jan 17 12:50 vm-201-disk-1.raw

Those look like disk images to me.

Thanks in advance!
 
You can ask ZFS itself what type a child is:

Code:
~# zfs get type  tank/data/subvol-100-disk-0
NAME                         PROPERTY  VALUE       SOURCE
tank/data/subvol-100-disk-0  type      filesystem  -

~# zfs get type  tank/data/vm-1938-disk-0
NAME                      PROPERTY  VALUE   SOURCE
tank/data/vm-1938-disk-0  type      volume  -

"filesystem" is used to store files. It is (usually) mounted and you can "cd" into it.

"volume" is a block device. You give it to a VM and the VM will create a filesystem from the inside. The host can not "cd" into that storage!

My above discussion was about containers and mountpoints. This approach requires directories (on a mounted filesystem) on the host. My way to supply those directories is via ZFS datasets. And that's a requirement for the snapshot magic (but not for the mountpoints themselves).


This:
root@pve:/tank/containers# ls -l /tank/containers/images/201/
looks like a Directory storage to me. This type can be created on any filesystem, be it ext4 or NFS. It has nothing to do with the capabilities of ZFS. Perhaps you had created a dataset tank/data and then defined it as a directory storage from the PVE GUI.
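
For comparison, the two storage types look roughly like this in /etc/pve/storage.cfg (IDs, pool and path are just examples). A dir storage stores guest disks as plain files (hence the .raw images you saw), while a zfspool storage creates a dataset or ZVOL per disk:
Code:
# directory storage on top of an existing (ZFS or any other) filesystem
dir: containers-dir
        path /tank/containers
        content rootdir,images

# ZFS-backed storage: PVE creates subvol-*/vm-* datasets below the configured pool
zfspool: local-zfs
        pool tank/containers
        content rootdir,images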
 
I omitted to mention that it was recommended elsewhere to create directory storage for the containers. At this point I am just experimenting, as I am new to both Proxmox and LXC, so I am just trying to get a handle on the basics and understand what the best approach is for a given use case.

I have been using ZFS for more than five years and consider myself reasonably adept, but its integration with Proxmox is new to me, and Proxmox's innovative use of ZFS's many capabilities requires expanding one's understanding of both.

I am going to switch gears and recreate my container using the standard method, as you have suggested. What I liked about the directory approach is that it organizes the filesystem in a tidier fashion. At least insofar as the samba-lxc-toolbox is concerned, creating a new CT with its tooling inopportunely drops the filesystem at the root of the pool, alongside all of my other parent and child filesystems, making for a very untidy mess. The install script will only take tank, instead of tank/containers, as a filesystem for some reason ... I'll keep investigating.

The other problem I was experiencing is that when passing the underlying filesystem to my container the Samba share just did not behave quite the same as the Samba share running on the host. The share appeared normally; however, when connecting to and trying to browse the share, it failed. It could be that my choice of Samba parameters was inadequate, but I more or less cloned what I had on the host and it still did not function correctly. Maybe I will have better luck with the standard approach. Now if I could only solve the filesystem layout issue.
 
The other problem I was experiencing is that when passing the underlying filesystem to my container the Samba share just did not behave quite the same as the Samba share running on the host.
One problem area I did not mention yet is "User-IDs". I've connected that Zamba file server to my Univention UCS Windows DC / AD. This is the way I chose to make sure user "john with id 1234" is the same account on all clients.

ID mapping is a rabbit hole - and I can't give a howto for this one.

For containers you may find hints if you search for "lxc.idmap".
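
Just as a pointer: the pattern commonly shown for unprivileged containers maps a single host UID/GID (1000 in this sketch) straight through into the container. The CT ID and the numbers are only examples and have to line up with /etc/subuid and /etc/subgid on the host.
Code:
# /etc/pve/lxc/100.conf  (CT ID is just an example)
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535

# and on the host, /etc/subuid and /etc/subgid each need a line like:
# root:1000:1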
 
A rabbit hole to be sure. I really don't want to bother with RFC2307, and it's not needed for this use case, so if that is the only option for this pre-built container then I think I will have to look at other options.
 
I passed on doing a bind mount (as it is known) due to all of the limitations involving snapshotting and other complexities, and will just use the dataset created by the install script.

It turns out that the RFC2307 winbind idmap parameters won't pose any problem, and after chmod'ing my file share directory to mode 1777 it just works!
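
In case it helps anyone following along, that permission change amounts to this (the path is just the one from earlier in the thread; yours may differ):
Code:
chmod 1777 /tank/share     # rwx for everyone plus the sticky bit, like /tmp
ls -ld /tank/share         # should now show drwxrwxrwt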

Thanks for your help.
 
