About ZFS, RAIDZ1 and disk space

afgb

New Member
Jun 4, 2022
Hello everyone,

This week a hard drive in my old NAS gave up. No big concern: the unit did what it had to do and the volume went into read-only mode. At the end of the day I lost only one file; the rest of the files changed since the last backup I was able to copy to external drives.

As I already had a Proxmox node running with three VMs and two NVMe M.2 drives in a neat Node 804 case, I decided to buy myself 4 new hard drives and reuse two of the newer drives from the old NAS. This would give me 20TB on ZFS RAIDZ1, with the option to replace the older drives later down the track. I did not need 40TB of space. Or so I thought. :)

The pool got created fine, 19.03TB to be exact. All looks good. However, when I create a VM disk of, say, 8TB, I end up using 14.35TB, and I don't understand why that is. I read through many threads but did not find the answer, only many related subjects. Perhaps I missed one.

I store music masters, so I need the space. I am not interested in performance, compression or encryption. I have various options on the table, but I need to know I am making the right decision. My question is: do files (like the VM disk I wanted to create) really take up more space than they would on, for example, local LVM storage?

If so, then thin provisioning is not an option and I should simply create more space by upgrading sdd through sdf (see below).

Overall I am very happy with Proxmox, so I would prefer to include the storage in this node.

Thanks for answering my question.


HDD setup
---------------
sda - 8TB - new
sdb - 8TB - new
sdc - 4TB - from old NAS, will be upgraded to 8TB after data transfer
sdd - 4TB - from old NAS
sde - 4TB - new
sdf - 4TB - new
 

Attachments

  • Output.txt (4 KB)

That's because of padding overhead. See this article that describes it: https://www.delphix.com/blog/delphi...or-how-i-learned-stop-worrying-and-love-raidz

With a 6-disk raidz1 using ashift=12 (ZFS will use 4K as the smallest block/sector size it can write to a single disk) and the default volblocksize of 8K, you will lose 50% of your raw capacity when only using zvols. 17% of the raw capacity you lose directly to parity data, and 33% you lose to padding overhead. This padding overhead is indirect and only affects zvols. It basically means that everything you store in a zvol will consume about 166% of its size in pool space.
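To make the arithmetic concrete, here is a minimal sketch in Python of the allocation rule described in the linked article (not actual ZFS code): each volblock gets one parity sector per stripe row, and the whole allocation is padded up to a multiple of nparity + 1 sectors.

```python
import math

def raidz_zvol_alloc(ndisks, nparity, ashift, volblocksize):
    """Raw bytes a raidz vdev allocates for one zvol block (data + parity + padding)."""
    sector = 1 << ashift                          # 4K sectors for ashift=12
    data = volblocksize // sector                 # data sectors per volblock
    rows = math.ceil(data / (ndisks - nparity))   # stripe rows needed
    total = data + rows * nparity                 # add one parity sector per row
    total += -total % (nparity + 1)               # pad up to a multiple of nparity+1
    return total * sector

# 6-disk raidz1, ashift=12, default volblocksize of 8K:
alloc = raidz_zvol_alloc(ndisks=6, nparity=1, ashift=12, volblocksize=8 * 1024)
print(alloc)                        # 16384 -> 16K of raw space for every 8K written (50% lost)
print(alloc * 5 / 6 / (8 * 1024))   # ~1.67 -> roughly 166% of the pool's reported free space
```

The 166% figure follows from the last line: the free space ZFS reports already assumes roughly 1/6 of the raw space goes to parity, so 16K of raw allocation shows up as about 13.3K of reported space for every 8K you write.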

To minimize that padding overhead you will need to increase your volblocksize, but this comes with other problems, like really bad performance for IO that reads/writes blocks smaller than the volblocksize.

When using a 6-disk raidz1 with ashift=12, you would need to increase the volblocksize to at least 32K to only lose 20% instead of 50% of your raw capacity.
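Plugging 32K into the same back-of-the-envelope calculation (same assumptions as the sketch above) gives:

```python
import math

# Same rule as above, now for 32K volblocks on a 6-disk raidz1 with ashift=12:
sector = 4096
data = 32 * 1024 // sector     # 8 data sectors
rows = math.ceil(data / 5)     # 5 data disks per stripe row -> 2 rows
total = data + rows * 1        # + 2 parity sectors = 10 sectors, already a multiple of 2,
                               #   so no extra padding sectors are needed
print(total * sector)          # 40960 -> 40K of raw space for every 32K written
print(1 - data / total)        # 0.2 -> ~20% of raw capacity lost to parity + padding
```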

Also don't forget to set a quota so you can never fill the pool completely by accident. A ZFS pool should not be filled more than 80%, or it will get slow and fragment faster; if it gets completely full it becomes inoperable and you won't even be able to delete stuff anymore to free it up again. And because of copy-on-write, a ZFS pool can't be defragmented either, so you really want to minimize fragmentation as much as possible.
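As a rough starting point for that quota on this particular pool, 80% of the ~19.03T it reports comes out to about 15T; something like `zfs set quota=15T <poolname>` (pool name is a placeholder, adjust to your setup) would then cap it:

```python
# 80% of the ~19.03T the pool reports as usable (figure from the first post):
reported_usable_t = 19.03
print(f"quota ~= {reported_usable_t * 0.8:.2f}T")   # quota ~= 15.22T
```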
 
Thanks for responding and referencing the document.

Based on what you explained, I have come to the conclusion that RAIDZ1 is not the way to go for me so I came up with the following.

I will make three mirrored ZFS pools of 8TB (1), 4TB (2) and 4TB (3). Once I have this up and running, I will order another 8TB drive and, together with the similar 8TB drive I have, end up (after resilvering and expanding) with 8TB on pool 2. In the process I get rid of the last two older hard drives.

This way I get maximum flexibility by upgrading two drives at a time rather than six before I see the space benefit. It provides proper redundancy and good performance, which is an added bonus.
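A quick capacity check of that plan, assuming three 2-way mirrors (usable space is half of each pair's raw space, and zvols on mirrors don't suffer the raidz padding overhead):

```python
# Usable TB per 2-way mirror = capacity of one drive in the pair.
mirrors_now = [8, 4, 4]        # (8TB+8TB), (4TB+4TB), (4TB+4TB)
print(sum(mirrors_now))        # 16 TB usable to start with

mirrors_later = [8, 8, 4]      # after upgrading one 4TB pair to 8TB drives
print(sum(mirrors_later))      # 20 TB usable after the upgrade
```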

Again thanks for your help. Appreciated.
 