Newb, confused by raidz2 available space

jahf

New Member
Apr 11, 2022
Background: I'm installing PVE 7.1 on my home system to test things out (5950X, 128GB ECC, multiple drives, 2 GPUs). I'm coming from some experience with Unraid but looking to expand my horizons (and get away from the USB boot, which bit me 2 too many times).

I was testing TrueNAS Scale last week, and when I created a raidz2 there with 4x 8TB drives, it reported ~14TB free space after raidz2 creation, as I expected (losing two disks' worth of space to parity ... the plan is eventually to expand with more drives).

Today when creating essentially the same setup in the Proxmox ZFS config:

  • Selecting the 4 drives
  • Selecting raidz2

Proxmox reports ... 32TB size/free.

So I'm a bit confused here. Probably a user issue more than anything. But I'm wondering how to verify that this raidz2 actually has the parity data enabled.

Is this just an oddness in Proxmox? I assume so, as `zfs list` shows only 14TB available to the pool.
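In case it helps, this is roughly how I've been checking things so far ("tank" is just a placeholder for whatever I end up naming the pool):

```
zpool status tank   # the vdev tree should show a raidz2-0 entry containing all 4 disks
zpool list tank     # SIZE = raw capacity, parity NOT subtracted (~29 TiB for 4x 8 TB)
zfs list tank       # AVAIL = usable capacity, parity already subtracted (~14 TiB)
```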
 
Read this to understand volblocksize and padding overhead: https://www.delphix.com/blog/delphi...or-how-i-learned-stop-worrying-and-love-raidz

I guess you kept the default 8K volblocksize, and with a 4-disk raidz2 you will lose 66.6% of the raw capacity to parity+padding when using zvols. Have a look at what the difference between a dataset and a zvol is. You probably used datasets with TrueNAS, and datasets aren't affected by padding overhead, as they use the recordsize and not the volblocksize. Only zvols are affected by padding overhead, and PVE uses zvols when you create a virtual disk for a VM.
And also keep in mind that PVE uses "TiB/GiB/..." instead of "TB/GB/..." everywhere, and that a ZFS pool should always be kept at least 20% free. So if you take both of those into account, your pool can only store 7.76 TiB of data as zvols (4 * 8 TB * 0.33333 * 0.8 * 0.909495) or 11.64 TiB of data as datasets (4 * 8 TB * 0.5 * 0.8 * 0.909495), where 0.909495 is the TB-to-TiB conversion factor.
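If you want to double-check that math in a shell, bc just multiplies out the factors (raw TB x usable fraction x 80% fill x the 0.909495 TB-to-TiB conversion):

```
echo '4 * 8 * 0.33333 * 0.8 * 0.909495' | bc -l   # zvols    -> ~7.76 TiB
echo '4 * 8 * 0.5 * 0.8 * 0.909495' | bc -l       # datasets -> ~11.64 TiB
```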

Also keep in mind that ZFS counts capacity in different ways. The zpool command will show you your pool's raw capacity, so it should show 32TB. The zfs command, however, will show the usable capacity for datasets, where the parity is already subtracted, so it should show more like 15-16TB. And then there is the padding overhead, which neither of those commands takes into account because it is indirect. Let's say you've got 50% parity overhead + 16.66% padding overhead of the raw capacity (which is probably what you have now). Then only 33% of the pool's raw capacity is usable for zvols, because for every 0.66TB of data you write into a zvol, it will additionally write 0.33TB of space-wasting padding blocks and 1TB of parity data. In other words, anything you write to a zvol needs 50% more space on the pool to be stored (or +200% of its size if you include the parity data). So the zfs list command will show that zvol as 1.5TB in size even if you only wrote 1TB of data to it.
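To see those different views side by side (a rough sketch; "tank" and the zvol name "tank/vm-100-disk-0" are placeholders):

```
zpool list tank    # raw capacity, parity not subtracted
zfs list tank      # usable capacity for datasets, parity subtracted
zfs get volblocksize,used,logicalused tank/vm-100-disk-0
# with this layout, "used" ends up around 1.5x "logicalused" because of the padding blocks
```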
 
Thanks for the reply.

Just to clarify, the 32TB I was seeing is in the Proxmox UI, using the standard UI method for creating the pool. I don't see any advanced options in the UI to affect the block size, so yes, it was whatever the default is.

I'll read through that link before going further. I'm in a "I can reinstall over and over" learning mode right now.

I haven't read it yet, but will jump in with a couple of questions:

1) Am I better off creating the pools via CLI so that I can adjust options?

2) Yes, on TrueNAS I was using datasets for all data after creating the pool in the TNS UI. I don't have a problem with using the CLI for datasets on Proxmox, just curious if there are specific gotchas I should be on the lookout for.

3) In the Proxmox UI I don't see a way to set up backups on the ZFS volumes, just the "local" set (which maybe includes the ZFS pool, but I'm assuming not). Again, I haven't actually set up any backups. Wondering if there's a good guide on setting up backups for Proxmox ZFS?

I would have actually stayed on TrueNAS Scale (no dig against Proxmox) but I ran into major limitations with their VM Passthrough as I needed a single GPU setup. I'm not 100% against virtualizing TrueNAS for my storage, just investigating using Proxmox for the volumes to reduce the number of machines running.

EDIT: Further clarification ... I'm not going to be storing VMs on this raidz2. Those will go on SSDs. The raidz2 is primarily media storage, personal data archive, and VM backups. So IOPS isn't a big concern. I'm ok with less than 14TB usable for now (I'll expand later on and recopy to recover the space from the expansion) ... but 7.7 is actually right about where I'm already at on data so yes, I'll be looking into the block size to see what I can optimize there.
 
Just to clarify, the 32TB I was seeing is in the Proxmox UI, using the standard UI method for creating the pool. I don't see any advanced options in the UI to affect the block size, so yes, it was whatever the default is.
Block size can be set at "Datacenter -> Storage -> YourZFSPool -> Edit -> Block Size". But keep in mind that the volblocksize can't be changed later; it is only set once at creation. So if you change the block size there, all your existing zvols will keep the wrong volblocksize and only newly created zvols will make use of the new value. You basically need to destroy all zvols on that pool and recreate them to stop wasting space. The easiest way is a backup + restore to recreate them.
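The same thing can also be done from the CLI (a sketch; "tank" is a placeholder for your storage ID, and 16K is just an example value):

```
pvesm set tank --blocksize 16k   # only zvols created after this will use the new volblocksize
```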
1) Am I better off creating the pools via CLI so that I can adjust options?
PVE isn't configuring any ZFS options; they are all just the OpenZFS defaults, no matter what your pool layout looks like. Whether you've got a raidz3 of 60 HDDs or just a mirror of 2 SSDs, both pools will use exactly the same configuration. So you will have to use the CLI anyway to optimize things, so that the wear, performance and capacity loss isn't that terrible. And using the CLI you get way more power and possibilities.
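A minimal sketch of doing it by hand (the disk IDs are placeholders, and ashift=12 assumes 4K-sector drives):

```
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2 \
  /dev/disk/by-id/ata-disk3 /dev/disk/by-id/ata-disk4
pvesm add zfspool tank --pool tank --content images,rootdir --sparse 1
```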
2) Yes, on TrueNAS I was using datasets for all data after creating the pool in the TNS UI. I don't have a problem with using the CLI for datasets on Proxmox, just curious if there are specific gotchas I should be on the lookout for.
No, it's both OpenZFS.
3) In the Proxmox UI I don't see a way to set up backups on the ZFS volumes, just the "local" set (which maybe includes the ZFS pool, but I'm assuming not). Again, I haven't actually set up any backups. Wondering if there's a good guide on setting up backups for Proxmox ZFS?
That's because a "ZFS storage" is handled by PVE as a block-level storage that can only be used for VM virtual disks (zvols) or LXC virtual disks (datasets). PVE will only allow you to store backups on a file-level storage like a "Directory storage". So you could manually create a dataset on your pool, then add a "Directory storage" and point it at the mountpoint of that dataset. Also, don't forget to set the "is_mountpoint" option for that directory storage, which can only be done via the CLI, or you might run into problems later.
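Sketched out, assuming the pool is called "tank" and the new storage "backups" (both placeholders):

```
zfs create tank/backups
pvesm add dir backups --path /tank/backups --content backup --is_mountpoint yes
```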
EDIT: Further clarification ... I'm not going to be storing VMs on this raidz2. Those will go on SSDs. The raidz2 is primarily media storage, personal data archive, and VM backups. So IOPS isn't a big concern. I'm ok with less than 14TB usable for now (I'll expand later on and recopy to recover the space from the expansion) ... but 7.7 is actually right about where I'm already at on data so yes, I'll be looking into the block size to see what I can optimize there.
If you just use datasets you should be able to store around 11.64 TiB of data there.
 
