Ditching Unraid in favor of Proxmox for ZFS storage shares?

Gymnae

Member
Apr 26, 2024
Hi there,
I'm currently running Unraid in a VM on Proxmox and passing through the SATA controllers, as well as one NVMe drive on my motherboard.

I started using Unraid as an easy method to use disks of different sizes and have a cache in front. But over time, I came to realize that the vanilla implementation of Unraid's mover doesn't work the way I want it to: it doesn't provide a fast read and write cache. And since Unraid runs as a VM, it adds a potential point of failure.

Within Unraid, I created two ZFS storage Pools:
  1. Primary pool: a single NVMe drive acting as a kind of read and write cache, since it's the primary pool
  2. Secondary pool: Two spinning disks and an SSD L2ARC cache
A custom shell script moves data between primary and secondary based on age and modification time, ensuring files below a certain threshold are always available on primary but also backed up to secondary. Files above a certain size threshold and modification time are moved to secondary. This is an attempt to create my own read and write cache. ZFS special devices weren't available in Unraid at the time, and I am still unsure whether they would help me.
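The OP's actual script isn't posted; a minimal sketch of such an age/size-based mover might look like this. All paths, thresholds, and the function name are assumptions, not the author's script:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a two-tier mover: demote large/stale files from a
# fast (NVMe) share to a slow (HDD) share, and mirror everything that stays
# on the fast tier so the slow pool always holds a full copy.
# Usage: demote_and_mirror FAST_DIR SLOW_DIR MAX_AGE_DAYS MAX_SIZE_MB
demote_and_mirror() {
    local fast=$1 slow=$2 max_age_days=$3 max_size_mb=$4

    # Demote: files larger than the size threshold OR not modified within
    # the age threshold move from the fast tier to the slow tier.
    find "$fast" -type f \( -size "+${max_size_mb}M" -o -mtime "+$max_age_days" \) \
        -print0 |
    while IFS= read -r -d '' f; do
        local rel=${f#"$fast"/}
        mkdir -p "$slow/$(dirname "$rel")"
        mv "$f" "$slow/$rel"
    done

    # Mirror: whatever remains on the fast tier is also copied to the slow
    # tier (a real script would likely use rsync for incremental copies).
    cp -a "$fast"/. "$slow"/
}
```

This captures the described behavior (hot small/recent files live on the fast pool but are duplicated on the slow pool; big or stale files live only on the slow pool), but has none of the locking or error handling a production mover would need.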

I access data stored in user shares in these pools via mounted NFS shares on the Proxmox host. From there, I bind-mount these shares to LXCs and VMs.
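For context, a bind mount from the Proxmox host into an LXC is configured like this (the container ID and paths below are placeholders, not the OP's actual setup):

```shell
# Bind-mount a host directory into LXC 101 at /mnt/media inside the guest:
pct set 101 -mp0 /mnt/unraid-nfs/media,mp=/mnt/media

# Equivalent line in /etc/pve/lxc/101.conf:
# mp0: /mnt/unraid-nfs/media,mp=/mnt/media
```

If the host path is itself an NFS mount from the Unraid VM, the container cannot start until that VM is up, which is exactly the dependency described below.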

Now, I want to reduce complexity and, hopefully, increase speed by
  1. Removing the network layer of NFS
  2. Not relying on Unraid as a VM for delivering data, because if the VM is not running, LXCs and VMs requiring the bind mounts fail to start
  3. Getting rid of complex UID mapping and access right management between Unraid, NFS, and guests. Sledgehammer methods like all_squash help, but require manual intervention when mapping changes occur
  4. Getting rid of script-based moving between pools
I wonder if I could move the ZFS pools to Proxmox and replicate the convenience of user shares on either the Proxmox host or with the help of a lightweight LXC container.
I am open to changing my ZFS "infrastructure".

As I understand it, backups would be atomic and faster this way when using Proxmox Backup Server.
I am also unsure about the overhead NFS actually creates. I increased the MTU for the NFS exclusive internal virtual network between Proxmox and Unraid to 9000.
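One way to sanity-check whether jumbo frames are actually in effect end to end, and what the NFS mount negotiated, is with standard tools (interface and host names below are placeholders):

```shell
# Confirm the virtual NIC really carries MTU 9000:
ip link show ens19 | grep -o 'mtu [0-9]*'

# Probe the path with a non-fragmentable 9000-byte frame
# (8972 bytes payload = 9000 minus 20 IP + 8 ICMP header bytes):
ping -M do -s 8972 -c 3 unraid-vm

# Show negotiated NFS mount options; rsize/wsize typically matter
# more for throughput than MTU does:
nfsstat -m
```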

Could I achieve these goals with Proxmox without Unraid? Or is the NFS overhead not that meaningful?
 
Hi,

it's not clear why you use such a complicated scheme with so many layers/breaking points. Why not use the same disks directly on PVE?
You could achieve all of it by giving the same disks to PVE as LVM or ZFS storage.
It's not as complex and exciting as your current setup, just boringly reliable)
Could you clarify what the main goal is?
 
Hi,
Thanks for your reply :)

I would love to ditch the complexity. I would love to ditch Unraid. The complexity I added to Unraid stems from Unraid's differing focus: it's aimed at quickly dumping lots of files onto mixed disks of different sizes, but it's not designed as a tiered storage solution. It targets Plex and torrent users, as well as home NAS users who need backup storage and the ability to consume large video files.

I'd like to achieve a tiered storage, where frequently and recently accessed files and folders are read from and written to the fastest available storage. All data is always parity protected by the RAID arrangement of slower disks.
And since I do not have enough storage media to achieve this in a sane fashion, I built the mentioned script and dance.

tl;dr:
My goal:
  1. My single NVMe as fast read/write storage for frequently/recently accessed data
  2. SSD and mirrored HDDs for permanent, but hot storage
  3. All files on the NVMe to be parity protected
  4. The underlying storage structure should be hidden from consuming LXCs and VMs; i.e., the solution is accessed through one mountpoint, and the scaffolding that enables the tiering should not matter or be visible to consumers
  5. As little plumbing as possible - no unraid or truenas if possible
 
Last edited:
FYI for whoever this may interest in the future:
I ditched Unraid in favor of attaching the drives directly to Proxmox and using my 2 TB NVMe as an L2ARC in front of the HDDs.
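A sketch of that layout, with placeholder pool and device names (not the OP's actual device IDs):

```shell
# Mirrored HDD pool with the 2 TB NVMe attached as L2ARC (cache vdev):
zpool create tank mirror /dev/disk/by-id/ata-HDD1 /dev/disk/by-id/ata-HDD2
zpool add tank cache /dev/disk/by-id/nvme-2TB

# Verify the layout; the cache device is listed under "cache":
zpool status tank
```

Note that an L2ARC accelerates reads only; unlike the earlier Unraid setup, writes go straight to the HDDs.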
 
and using my 2 TB NVMe as an L2ARC in front of the HDDs.
For your next iteration: read about adding a (mirrored) "special device". And in your current setup, try to verify that the L2ARC actually works the way you want. Examine the output of arc_summary for this. See https://openzfs.github.io/openzfs-docs/man/master/1/arc_summary.1.html

(( For my few pools with rotating spindles the special device is a must-have, and at the same time I have zero L2ARC devices in those pools. ))
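The suggested steps, sketched as commands (pool and device names are placeholders; the 64K small-blocks setting is an illustrative choice, not a recommendation from the post):

```shell
# Add a MIRRORED special vdev for metadata (and optionally small blocks).
# Never use a single device here: losing the special vdev loses the pool.
zpool add tank special mirror /dev/disk/by-id/nvme-A /dev/disk/by-id/nvme-B

# Optionally route small blocks to the special vdev as well:
zfs set special_small_blocks=64K tank

# Check whether the L2ARC is actually getting hits:
arc_summary -s l2arc
```

If the L2ARC hit rate stays near zero, the cache device is doing little for the workload, which is the point of checking before (or instead of) keeping it.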