Hi everyone,
I’d like to float an idea that keeps popping up in the community: a first‑party, GUI‑integrated way to expose ZFS datasets or Ceph pools over SMB/CIFS—without having to roll a full-blown NAS VM each time.
Motivation
- Many Proxmox clusters already run Ceph RBD or local ZFS for VM disks.
- Windows (and Mac) users on the same network still need a classic file share.
- Spinning up a heavyweight NAS VM just to serve a handful of folders can feel wasteful, yet hand‑editing smb.conf on the host breaks Proxmox’s “appliance‑like” simplicity.
Proposed Add‑on
A “SMB Gateway” module that Proxmox could surface under Datacenter → Storage → Add → SMB Gateway, with three deployment modes:

Mode | How it would work | Footprint | Typical use‑case
---|---|---|---
Native (host) | Installs Samba on the node, adds share stanzas, manages AD/Kerberos join | 20 MB RAM | Labs or single‑node boxes where HA isn’t critical
Embedded LXC (preferred default) | Wizard creates a minimal container, bind‑mounts ZFS dataset or CephFS/RBD map, auto‑generates shares | 40‑80 MB RAM | Most clusters: clean isolation, easy Proxmox‑HA fail‑over
Classic NAS VM | Wizard deploys a template VM (TrueNAS SCALE, Debian, etc.), attaches RBD or CephFS passthrough, registers it with HA | ≥ 512 MB RAM | Enterprises wanting full NAS apps, VSS, kernel‑space Samba
All three modes could share the same front‑end panel (ExtJS) so the administrator sees:

Code:
Datacenter → Storage → Add → SMB Gateway
[ ] Use existing dataset/pool (dropdown)
[ ] Create new sub‑dataset/share (text field)
[x] Join Active Directory
Mode: ( Native | LXC | VM )
HA: ( none | auto‑relocate | CTDB cluster )
Quota: ____ GiB
Create ▶
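For the native mode, the share stanza the wizard appends to smb.conf might look roughly like this (share name, path, and the AD group below are invented placeholders, not a finished design):

Code:
[projects]
    path = /tank/shares/projects
    read only = no
    browseable = yes
    # store NT ACLs in xattrs so Windows permissions survive round-trips
    vfs objects = acl_xattr
    map acl inherit = yes
    # hypothetical AD restriction set by the "Join Active Directory" option
    valid users = @"EXAMPLE\Domain Users"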
Behind the scenes:
- ZFS backend: zfs create, set xattr=sa, then bind‑mount into the CT/VM or export directly.
- Ceph backend: the user selects a CephFS subvolume, or chooses RBD to create/map a thin image (auto‑enables exclusive-lock).
- Optional tie‑in to the new Ceph mgr SMB module for users already on Ceph Reef who prefer the official containerised gateway.
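A command‑level sketch of what the wizard could run for LXC mode, assuming a hypothetical gateway container 200 and a pool named tank (all names illustrative):

Code:
# ZFS path: dataset with SA xattrs for efficient Windows ACL storage
zfs create -o xattr=sa -o acltype=posixacl tank/shares/projects
zfs set quota=100G tank/shares/projects

# bind-mount into the gateway container (CTID 200 is made up)
pct set 200 -mp0 /tank/shares/projects,mp=/srv/shares/projects

# Ceph path: a CephFS subvolume instead of a dataset
ceph fs subvolume create cephfs projects
ceph fs subvolume getpath cephfs projects

# ...or a thin RBD image (exclusive-lock is enabled by default on modern Ceph)
rbd create rbd/projects --size 100G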
Benefits
- One‑click share creation—keeps “appliance” feel of Proxmox.
- Resource‑efficient: LXC mode idles at ~50 MB RAM, versus the ≥ 512 MB a full NAS VM needs.
- HA‑aware: CT or VM can participate in Proxmox HA groups; CTDB integration gives SMB fail‑over IPs (see the sketch after this list).
- Security isolation: No need to install extra packages on the hypervisor if you choose LXC or VM mode.
- Unified backups: Use existing Proxmox backup jobs for the CT/VM, or snapshot ZFS/CephFS directly.
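To make the CTDB idea concrete, fail‑over IPs boil down to two small config files plus one smb.conf switch (all addresses below are invented):

Code:
# /etc/ctdb/nodes — internal address of each gateway node, one per line
10.0.0.11
10.0.0.12

# /etc/ctdb/public_addresses — floating IPs that move on fail-over
192.168.1.50/24 eth0

# and in smb.conf [global], Samba defers cluster state to CTDB:
#   clustering = yes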
Open Questions / Discussion Points
- Where to store configs?
  - Embed in storage.cfg like current storage plugins, or keep a separate smbgate.cfg replicated by pmxcfs?
- AD/Kerberos helpers.
  - Ship a tiny wrapper (pve-samba-join) so the wizard can perform realm joins and keytab management?
- CTDB & floating VIPs.
  - Bundle a lightweight CTDB setup script, or let users opt in only when HA is enabled?
- CephFS vs. RBD.
  - The new ceph mgr smb module only does CephFS. Should our add‑on support RBD maps too, or steer people toward CephFS for true POSIX semantics?
- Snapshot semantics & VSS.
  - Crash‑consistent ZFS/Ceph snapshots are fine for lab shares, but corporate file servers often need VSS. VM mode solves this—native/LXC might need pre/post hooks (see the snapshot‑exposure sketch after this list).
- Road‑map fit.
  - Would the core team prefer this as an external plugin first (like pve-storage-s3 was), or consider merging once mature?
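On the VSS point: for native/LXC mode, Samba’s shadow_copy2 VFS module can already expose ZFS snapshots to Windows clients as “Previous Versions”, which may cover part of the gap without a guest agent. A sketch, where the snapshot naming scheme is an assumption tied to whatever the pre/post hooks create:

Code:
# a hook or timer takes snapshots with a predictable name format
zfs snapshot tank/shares/projects@shadow-$(date +%Y-%m-%d-%H%M)

# additions to the share stanza so clients see them as Previous Versions:
#   vfs objects = shadow_copy2 acl_xattr
#   shadow:snapdir = .zfs/snapshot
#   shadow:format = shadow-%Y-%m-%d-%H%M
#   shadow:sort = desc

This is still crash‑consistent, not application‑aware VSS, so the VM‑mode caveat above stands for databases and the like.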
Call for Feedback
- Developers: happy to help prototype the storage plugin and ExtJS panel—any objections to the architecture above?
- Admins: which mode would you actually use? Do you see major risks (performance, security)?
- Ceph users: anyone already running the new ceph smb manager in production—how stable is it for AD users?
Thanks for reading—looking forward to your thoughts!
— Eric (Zippy Technologies)