ZFS Nextcloud Guidance

crit_alert

New Member
May 5, 2025
Good Afternoon,

I am seeking some guidance for my setup and would appreciate anyone who could take the time to share their knowledge and experience. I have browsed articles and forums extensively and learned a lot, but I am still looking for some clarification.



I have been a light Proxmox user for a while, running HA and some miscellaneous VMs. I have a pfSense firewall with an OpenWRT AP.



Recently I acquired some more hardware for a Nextcloud Setup.



Hardware:

Fujitsu mini PC, 16 GB RAM

Storage: 1 x 512 GB NVMe

2 x 1 TB SSD (Samsung 870 EVO)



Current Setup

Proxmox installed on the NVMe.

2 x SSDs in a ZFS Mirror.

Ubuntu VM, with its boot disk on the NVMe.

Nextcloud manually installed on the VM; up and working with SSL and tweaks (currently LAN access only).

Attached a separate virtual disk residing on the ZFS pool (zfspool/nc-data),

i.e. sda = boot

sdb = Nextcloud data



Issues

My main question is about the storage side of things and getting the best setup for my needs.

Currently the 'raw files', i.e. Nextcloud photos, seem to sit under many layers:



Physical SSD(s) > ZFS pool > ZFS dataset > Proxmox-created directory > virtual disk > file structure.



Questions:

1. Is there a way to set this up so that, if for whatever reason I wanted to remove one of the SSDs, I could plug it in anywhere (ideally enter a decryption passphrase) and view the raw files without having to deal with virtual disks etc., whilst still keeping the redundancy of a ZFS mirror?



2. I have ~1 TB of usable space on the ZFS pool, and I have currently allocated 800 GB to Nextcloud for the virtual disk. Let's say I want to use some of the remaining space for a separate VM's storage, for CCTV for example: what is the purpose of creating a new dataset within the ZFS pool if it would just hold a virtual disk anyway? Is there any advantage to segregating these disks across datasets?

3. After creating the ZFS dataset (i.e. zfs create...), I am then making it usable in Proxmox by adding it via Datacenter > Storage > Add > Directory. Is there a better option?
4. I would prefer the 'cleanest' setup, where the raw files don't sit behind so many layers of virtualisation. It has been a good learning experience getting Nextcloud up and working, and I am fully prepared to start over from scratch.
5. Options would be: more hardware with TrueNAS? Wipe Proxmox and install Ubuntu on bare metal with the ZFS pool? Or, if this is just the way it is, then OK. I really would just like to know that, say in 10 years when moving it all over, if I have forgotten how to do all this stuff (hopefully not likely), I could recover everything easily.

Thank you for your time.
Sam
 
I think that's already the case (or very close), but as soon as you remove a disk there's no redundancy (it's a mirror after all: 2 disks required).

For sure, if those SSDs are not DC-class (datacenter-grade) or similar, you will have some pain when they start failing frequently.

One way to simplify this: instead of using a virtual disk in the Nextcloud VM, make your ZFS pool available via NFS and mount it in the VM.
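For example, something like this (a rough sketch I haven't tested on your setup; the dataset name zfspool/nc-data, the subnet and the host IP are just placeholders for your own values):

On the Proxmox host:
apt install nfs-kernel-server
zfs set sharenfs="rw=@192.168.1.0/24" zfspool/nc-data

In the Ubuntu VM (assuming the host is reachable at 192.168.1.10):
apt install nfs-common
mount -t nfs 192.168.1.10:/zfspool/nc-data /mnt/nc-data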

Regarding using 2 disks with RAIDZ and encryption, this is what I do.

To create the array initially:

zpool create -o ashift=12 data raidz1 /dev/sda /dev/sdb
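A small aside (just a suggestion; the serial numbers below are placeholders): using the persistent /dev/disk/by-id/ paths instead of sda/sdb avoids trouble if the device names shuffle between boots:

zpool create -o ashift=12 data raidz1 /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_SERIAL1 /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_SERIAL2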

Then create encrypted datasets:
zfs create -o encryption=on -o keyformat=passphrase data/NextCloudSafe
zfs create -o encryption=on -o keyformat=passphrase data/OtherSafe

Then you can create child datasets under those:

zfs create data/NextCloudSafe/data
zfs create data/OtherSafe/Backups
etc.

All of this uses dynamic space allocation; it took me some time to realise this. I haven't needed to reserve specific space, so I haven't researched that.
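If you ever do want to cap or guarantee space for a particular dataset, quotas and reservations can be set at any time (the sizes and dataset names here are just examples):

zfs set quota=800G data/NextCloudSafe/data
zfs set reservation=100G data/OtherSafe/Backups
zfs get quota,reservation data/NextCloudSafe/data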

After that, whenever I reboot, invoking this one command asks for the passphrase and makes the datasets available normally:
zfs mount -l -a

Look up the zpool import and zfs mount documentation if you want to optionally mount your file systems read-only, which is safer.
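Something along these lines should do it (a sketch, untested here):

zpool import -o readonly=on data          # import the whole pool read-only
zfs set readonly=on data/NextCloudSafe    # or mark individual datasets read-only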

I suggest you try and experiment with this on a test setup.

If you remove one disk and connect it to another Proxmox setup with the same versions, the above zfs mount command should also bring up the drive after entering the passphrase, once the pool has been imported (I haven't tested this).
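On the second system the pool has to be imported before it can be mounted, roughly (again untested, pool name as above):

zpool import           # lists pools found on the attached disks
zpool import -f data   # -f may be needed since the pool was not exported cleanly
zfs mount -l -a        # asks for the passphrase and mounts the encrypted datasets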

Putting the drive back in the initial system should also work as usual; it will trigger a resilvering (sync) operation, which should be short (given nothing has changed).
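If the returning disk doesn't rejoin automatically, zpool online and zpool status are the commands to reach for (the device path is just an example):

zpool online data /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_SERIAL2
zpool status data      # shows resilver progress and any remaining errors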