ZFS Setup for Storage

macos_user

New Member
Mar 6, 2024
6
1
3
Hey all,

Setting up a PVE cluster with 3 nodes right now; each will have 8x 600GB drives and 8x 1.2TB drives. The OS is on a separate drive not listed. I plan on having 3 pools:
- One for VMs, ISOs, general file storage, etc.
- One for surveillance footage (need as much space as possible here, but footage will also be stored in the cloud with AWS). This is NOT mission critical.
- One for backups (PVE snapshots, daily Google Drive backups, etc.). Currently, Google Drive takes up approx. 1TB, and I'm not sure how much storage PVE snapshots will take. I can also store this in the cloud as a secondary copy.

I should also note that this will be an HA cluster and I do have room to add more nodes in the cluster in the future. I do not believe the Surveillance and Backup pools need to be HA.

Given this, how would you utilize these drives in a ZFS setup? I appreciate the help here
 
> each will have 8x 600GB drives and 8x 1.2TB drives. OS is on a separate drive not listed.
>
> I plan on having 3 pools:
> - One for VMs, ISOs, general file storage, etc.
> - One for surveillance footage (need as much space as possible here, but footage will also be stored in the cloud with AWS). This is NOT mission critical.
> - One for backups (PVE snapshots, daily Google Drive backups, etc.). Currently, Google Drive takes up approx. 1TB, and I'm not sure how much storage PVE snapshots will take. I can also store this in the cloud as a secondary copy.


You probably don't need 3 separate pools for this. You have two sets of differently-sized drives.

RAIDZ1 the 8x 600GB drives = zpool1 for VM/ISO/general storage (keep at least 1-2 spare drives per node for replacements)

RAIDZ2 the 8x 1.2TB drives = zpool2 for video footage and backups
^ Instead of a 3rd pool, use ZFS datasets with per-dataset settings (compression type, recordsize=1M for large sequential files) to differentiate the two.
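As a sketch, the above could look something like this (pool, dataset, and device names are placeholders I made up, and the compression choices are just illustrative):

```shell
# zpool1: 8x 600GB in raidz1 for VMs/ISOs/general storage.
# Use stable /dev/disk/by-id paths in practice, not /dev/sdX.
zpool create zpool1 raidz1 \
  /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 /dev/disk/by-id/DISK3 \
  /dev/disk/by-id/DISK4 /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6 \
  /dev/disk/by-id/DISK7 /dev/disk/by-id/DISK8

# zpool2: 8x 1.2TB in raidz2 for footage + backups
# (shell brace expansion fills in BIG1..BIG8)
zpool create zpool2 raidz2 /dev/disk/by-id/BIG{1..8}

# Differentiate with datasets instead of a 3rd pool: large
# recordsize suits big sequential files like video and backups.
zfs create -o recordsize=1M -o compression=lz4  zpool2/surveillance
zfs create -o recordsize=1M -o compression=zstd zpool2/backups
```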

BTW, protip - you don't have to name the zpool 'tank' - that's pretty ancient. You can name it whatever you want. Personally I start mine with 'z' (makes them easy to exclude from tar backups) and generally name them after whatever disk type they use, e.g. zseatera4 = Seagate 4TB NAS drives pool.

Definitely explore other opinions though, this is just my take on it. With such small drives you probably don't want to "waste" half the space on a mirror pool, and with your use case you probably don't need the fastest I/O.

And whatever you do, you will need backups - especially if you go raidz1. The odds of another disk failing during a rebuild are lower with smaller drives (under ~2TB), but they're still odds.
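For rough numbers: a raidz vdev loses one drive of capacity per parity level, so per node this works out to about 4.2TB on zpool1 and 7.2TB on zpool2 (real usable space is a bit lower after ZFS metadata and padding). A quick back-of-envelope check; the helper function is just for illustration:

```python
# Rough usable-capacity estimate for the two proposed pools.
# RAIDZ1 loses one drive to parity, RAIDZ2 loses two.

def raidz_usable(drives: int, size_tb: float, parity: int) -> float:
    """Approximate usable TB for a single raidz vdev."""
    return (drives - parity) * size_tb

vm_pool = raidz_usable(8, 0.6, 1)    # 8x 600GB raidz1
bulk_pool = raidz_usable(8, 1.2, 2)  # 8x 1.2TB raidz2

print(f"zpool1 (VMs):  ~{vm_pool:.1f} TB usable")
print(f"zpool2 (bulk): ~{bulk_pool:.1f} TB usable")
```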
 
> You probably don't need 3 separate pools for this. You have (2) sets of differently-sized drives.
Thank you for this. I honestly have zero clue why my brain just latched onto having 3 pools. I completely didn't even consider having 2 pools and putting both backups and footage in one of them.

I'm already losing a large chunk of the capacity in zpool1 since it will be an HA pool. If I make zpool2 non-HA, then barring a drive failure while a node is offline, data should be fine writing to the 2 remaining vdevs on that pool if one node/vdev goes down, right?
 
> I'm already losing a large chunk of the data in zpool1 as it will be an HA pool. If I make zpool2 non-HA, barring drive failure while offline, data should be fine writing to the 2 remaining vdevs on that pool if one node/vdev goes down, right?
ZFS is not a shared-access filesystem; it is not a cluster-aware filesystem, which means you can't use it for cluster HA.
ZFS is a local-access filesystem.

If you need a cluster-aware filesystem, then your choices are: a) Ceph, which is built into PVE with good overlay management, or b) ZFS async replication.
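For option (b), PVE ships a storage replication tool (`pvesr`) that does scheduled ZFS send/receive of guest disks between nodes. A minimal sketch, assuming a guest with VMID 100 on ZFS-backed storage and a target node named `pve2` (both names are placeholders):

```shell
# Create a replication job (id <vmid>-<jobnum>) that copies
# VM 100's ZFS disks to node pve2 every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# List configured jobs and check their last-run status
pvesr list
pvesr status
```

Note this is asynchronous: on failover you can lose up to one replication interval of writes, unlike Ceph which replicates synchronously.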

Also, keep in mind that using consumer-grade SSDs and NVMe drives with ZFS is not optimal.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
@bbgeek17 Is there a way to set up a decently-performing Ceph cluster for a homelab-ish (I say -ish, because I use my homelab for both work and personal stuff) environment that won't require a second mortgage to pay for?
I only dabble in Ceph, especially at the homelab level. Hopefully other forum members who are more well versed in it will provide the guidance you seek.


 
> BTW, protip - you don't have to name the zpool 'tank' - ... . zseatera4 = Seagate 4TB NAS drives
Been there, done that.

Imagine: after replacing a failed drive, I now have "zseatera4" with a Western Digital 6TB disk, and I'm planning to replace the other Seagates too. Renaming a ZFS pool used as semi-shared storage with replication in a cluster might not be easy...
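For what it's worth, a standalone pool can be renamed by exporting it and re-importing it under a new name; the catch in a cluster is exactly what you describe, since nothing (VMs, replication jobs) can be using the pool during the export. The new name here is hypothetical:

```shell
# Nothing may be using the pool while it is exported
zpool export zseatera4

# Re-import the same pool under a new, vendor-neutral name
zpool import zseatera4 zdata
```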

Best regards
 
