[SOLVED] Can only use 7TB from newly created 12TB ZFS Pool?

lesleyxyz

New Member
Jul 31, 2023
Hi all,

I'm quite new to Proxmox and ZFS, but an experienced sysadmin overall.
I hope to find some help here :)

I have 4x 4TB (= 16TB) hard drives and decided to create a ZFS Pool from them using RaidZ1 + Compression.
The UI shows I have 12TB available (which is normal, as 4TB is used for parity).
When I try to create a 10TB disk for my VM using the web UI, it says:

Code:
failed to update VM 100: zfs error: cannot create 'media-r5/vm-100-disk-0': out of space (500)


Only when I go down to about 7TB can I create a disk.
Then when I look at my ZFS storage, it says 11.8/12TB is used.

How is this possible? I thought I only needed to sacrifice the 4TB of RaidZ1 parity for this to work?
Is it possible to fix this? It's important to note that I haven't used the ZFS pool for anything else.
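For anyone who wants to compare the numbers themselves, something like this shows the raw vs. usable view (pool and disk names are taken from the error above; adjust as needed):

Code:
# Raw capacity and allocation, including parity (the "16TB" view):
zpool list media-r5

# Usable space as seen by datasets/zvols, parity excluded (the "12TB" view):
zfs list -o space media-r5

# Space actually reserved by the VM disk, including padding overhead:
zfs get volsize,volblocksize,refreservation,used media-r5/vm-100-disk-0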

Thank you & kind regards,

Lesley
 
Search this forum for "padding overhead". In short:
When using a 4-disk raidz1 with the default 8K volblocksize, you will lose half of the raw capacity when using VM virtual disks (zvols): https://docs.google.com/spreadsheet...jHv6CGVElrPqTA0w_ZY/edit?pli=1#gid=1224630924
With the default ashift of 12 (4K sectors), each 8K block is written as two data sectors plus one parity sector and then padded to a multiple of two sectors, so every 8K of data occupies 16K of raw space. So when storing 8TB of data, an additional 4TB is wasted on padding blocks and another 4TB on parity. That means you can only store ~8TB of VM virtual disks (but ~12TB of LXC virtual disks, as these use datasets and not zvols). On top of that, a ZFS pool shouldn't be filled more than 80-90% or it will become slower and fragment faster, so it's actually more like 6.4 to 7.2TB of usable storage.
If you care about performance or expandability, I would highly recommend using a striped mirror instead of a raidz1. If you still want a raidz1, destroy and recreate all VMs after changing the pool's blocksize to at least 16K (or maybe even 64K).
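As a rough sketch of how that change is usually applied on the Proxmox side (assuming the storage ID matches the pool name media-r5; check /etc/pve/storage.cfg or Datacenter -> Storage for the real ID):

Code:
# Show the current zfspool storage definition (look for a "blocksize" line):
cat /etc/pve/storage.cfg

# Set 16K for newly created zvols on this storage
# (the storage ID "media-r5" is an assumption - use your actual storage ID):
pvesm set media-r5 --blocksize 16k

# Existing zvols keep their volblocksize, so the VM disks have to be
# recreated (e.g. via backup/restore) before the new value takes effect:
zfs get volblocksize media-r5/vm-100-disk-0

In recent PVE versions the same setting is also exposed as "Block Size" when editing the storage under Datacenter -> Storage in the web UI.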
 
Thank you Dunuin! This was really helpful. I've set the blocksize to 16k and was able to use ~9750 GB of my pool!

Thank you for the references, I will refer to this guide first next time :)

I was aware of TB vs TiB. It seems that because of the ZFS pool's blocksize, I am restricted to even less than 10.2 TiB.
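A rough sanity check of that number, assuming the default ashift=12 (4K sectors): at 16K volblocksize each block is written as four data sectors plus two parity sectors with no extra padding, so roughly 2/3 of the 16TB raw capacity is usable, i.e. ~10.6TB ≈ 9.7TiB, which is in the same ballpark as what the pool now reports. Something like this confirms the new blocksize actually applied to a recreated disk (zvol name is an example):

Code:
# volblocksize is fixed at creation time, so a disk recreated after the
# storage change should now report 16K:
zfs get volblocksize media-r5/vm-100-disk-0

# Remaining usable space on the pool after creating the disk:
zfs list -o name,used,avail,refreservation -r media-r5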

Thank you all again :)
 
