ZFS Multidimensional Pool

nic_pve

New Member
Mar 4, 2025
Hello there,
first of all:
I'm not new to Proxmox, but I'm a complete newbie at ZFS.

Why did I come across ZFS?
I have a bunch of 2TB and 4TB hard drives lying around, and also two 1TB M.2 SSDs. At the moment I use a hardware RAID controller to present a "drive" (or rather two drives) to my Proxmox server.
The problem at the moment is that my HW RAID does not pass any disk-health information to Proxmox, so I would not notice a disk failure if one occurred.

The solution I have in mind:
Set my HW RAID controller to HBA mode (I know how, and I have already tested that the drive status then gets "published" to Proxmox).
Then create a ZFS pool from each group of 2x 2TB drives to get 4TB, and then create one big pool out of those ZFS pools plus the bare 4TB drives.

What I Have:
10x 2TB Drives
12x 4TB Drives
2x 1TB M.2 SSD
What I plan:
10x 2TB Drives = 5x 4TB ZFS Pool
And then:
10x Bare Drives + 5x 4TB ZFS Pool = One Big ZFS Pool (I think RaidZ2 is equivalent to Raid 6) + M.2 SSD Cache to get higher Read/Write Speeds + 2 Spare Drives

If I'm about right, this should all add up to around 50TB of usable storage (15 members x 4TB in RaidZ2 gives roughly 13 x 4TB = 52TB) with SSD cache.

Now the Problem:
I couldn't find anything on whether it is possible to set up such a multidimensional ZFS pool in Proxmox (I don't even know if that is the right name for it).
Can anyone help me here?


P.S. I want to use the 50TB Storage just as pure Storage for Nodes (not the Root-Disk!). E.g. I plan to have a local File-Server, Shares where my Movies go and so on.


I'm sorry in advance for every pro user who reads through my post here and probably found 476 errors in the first sentence. Please bear with me, as ZFS (and especially ZFS in combination with Proxmox) is completely new territory for me. If any details are needed that I have not (yet) provided, I will be happy to provide them.
 
What I plan:
10x 2TB Drives = 5x 4TB ZFS Pool
I don't think that is possible. But you could split your 4TB drives into two partitions and create 24 2TB parts.
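For example, partitioning one of the 4TB drives could look roughly like this (untested sketch; /dev/sdX is a placeholder, and the first partition size must be adjusted so it matches the usable size of your 2TB drives):

# wipe the old partition table and create two partitions on a 4TB disk
sgdisk --zap-all /dev/sdX
# first partition roughly the size of a 2TB drive (~1863 GiB), second partition the rest
sgdisk -n 1:0:+1863G -n 2:0:0 /dev/sdX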
10x Bare Drives + 5x 4TB ZFS Pool = One Big ZFS Pool (I think RaidZ2 is equivalent to Raid 6) + M.2 SSD Cache to get higher Read/Write Speeds + 2 Spare Drives
L2ARC is highly overrated. Maybe create a stripe of mirrors from the 10x 2TB drives plus the first partitions of 10 of the 4TB drives? That gives you better IOPS, which is preferred for VMs. Use the other 14 remaining ~2TB partitions as slow storage using RAIDZ2?
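A rough sketch of what that could look like (pool names and the /dev/disk/by-id/... paths are placeholders, not a tested recipe for your hardware):

# fast pool: mirror vdevs, each pairing a 2TB disk with a ~2TB partition of a 4TB disk;
# shown with two mirrors, add eight more "mirror <2tb-disk> <4tb-part1>" groups for all ten
zpool create -o ashift=12 fastpool \
  mirror /dev/disk/by-id/2tb-disk-1 /dev/disk/by-id/4tb-disk-1-part1 \
  mirror /dev/disk/by-id/2tb-disk-2 /dev/disk/by-id/4tb-disk-2-part1

# slow pool: RAIDZ2 over the remaining ~2TB partitions (list all 14 in the real command)
zpool create -o ashift=12 slowpool raidz2 \
  /dev/disk/by-id/4tb-disk-1-part2 /dev/disk/by-id/4tb-disk-2-part2 \
  /dev/disk/by-id/4tb-disk-3-part2 /dev/disk/by-id/4tb-disk-4-part2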
P.S. I want to use the 50TB Storage just as pure Storage for Nodes (not the Root-Disk!). E.g. I plan to have a local File-Server, Shares where my Movies go and so on.
Maybe use something different than ZFS
I'm sorry in advance for every pro user who reads through my post here and probably found 476 errors in the first sentence. Please bear with me, as ZFS (and especially ZFS in combination with Proxmox) is completely new territory for me. If any details are needed that I have not (yet) provided, I will be happy to provide them.
Maybe search the forum a bit about ZFS and VM performance (since most ZFS configurations are very slow but have a lot of features).

Please note that Proxmox itself runs fine off a mirror of slow HDDs. Maybe use the pair of SSDs as ZFS special devices (as that usually performs better than L2ARC or SLOG). There are lots of options with all of your drives, but your experience does seem limited. ZFS is very heavy and has lots of overhead because of its features. Depending on your VMs and workload, you need IOPS more than available space. Use different pools (and setups) for VMs and for slow sequential storage like movies. I can't tell you the best configuration; you'll have to experiment and research yourself.
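If you try the special-device route, it could look roughly like this (pool name and device paths are placeholders; keep it a mirror, because losing the special vdev loses the whole pool):

# add the two 1TB SSDs as a mirrored special vdev for metadata
zpool add slowpool special mirror /dev/disk/by-id/nvme-ssd-1 /dev/disk/by-id/nvme-ssd-2
# optionally also store small blocks on the SSDs
zfs set special_small_blocks=64K slowpool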
 
set up a multidimensional ZFS pool
Possible? Yes (just not in the way you describe), but you really don't want to.

Data is distributed across all vdevs simultaneously. If you have a pool that comprises vdevs with different capacities/memberships/arrangements, the filesystem will attempt to shoehorn sequential writes onto vdevs with different chunk sizes and performance, making for unbalanced performance and uneven padding, which leads to wasted space.

Just make separate zpools for each class/size of disk you have, e.g.:

rpool - mirror, 2x 1TB SSD (boot + VM disk store)
zpool1 - raidz2, 10x 2TB
zpool2 - raidz2, 12x 4TB
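In Proxmox the rpool mirror would normally come from the installer (ZFS RAID1 on the two SSDs); the two data pools could then be created afterwards with something like this (sketch only, device paths are placeholders and the storage IDs are made up):

# one RAIDZ2 pool per disk size
zpool create -o ashift=12 zpool1 raidz2 /dev/disk/by-id/2tb-{1..10}
zpool create -o ashift=12 zpool2 raidz2 /dev/disk/by-id/4tb-{1..12}

# register them as storage in Proxmox
pvesm add zfspool tank1 --pool zpool1
pvesm add zfspool tank2 --pool zpool2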