Where did my 1.2TB go?

nwongrat

Member
Feb 16, 2023
This is a newly installed Proxmox host with new NVMe drives. I created a ZFS storage pool, RAIDZ1 with 4 drives of 1TB each, so I assumed I should have about 3TB of storage. However, when I created a 2nd disk for Proxmox Backup Server (running as a VM), I could only create 1.6TB, and the ZFS storage usage went up to 2.55TB. I could not even create a 1.7TB disk; I got an error saying there is not enough disk space.



Is there something I am missing?
I understand there will be some overhead for ZFS, but really, 1.2TB???
This ZFS storage contains only one VM disk, which has a size of 1.6TB. Nothing more. Only that disk.

PS. The first disk went to the local-zfs storage, which has 1.2TB.
 
https://www.klennet.com/notes/2019-07-04-raid5-vs-raidz.aspx


Equivalent RAID levels


As far as disk space goes, RAIDZn uses n drives for redundancy. Therefore


  • RAIDZ (sometimes explicitly specified as RAIDZ1) is approximately the same as RAID5 (single parity),
  • RAIDZ2 is approximately the same as RAID6 (dual parity),
  • RAIDZ3 is approximately the same as (hypothetical) RAID7 (triple parity).

Disk space overhead is not precisely the same. RAIDZ is much more complicated than traditional RAID, and its disk space usage calculation is also complicated. Various factors affect RAIDZ overhead, including average file size.
 
First, drive capacity is marketed using powers of 10, but operating systems measure storage using powers of 2, so a 1TB drive will never format to 1TB of usable space. As you are aware, RAIDZ1 provides the usable capacity of N-1 drives, and on top of that there is some ZFS overhead. Hence (4-1) × 0.93 ≈ 2.79TB, which roughly lines up with the 2.81TB Proxmox shows as available.
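
A quick back-of-the-envelope check, as a rough Python sketch: it assumes a marketed "1TB" drive is 10^12 bytes and a binary TiB is 2^40 bytes; the exact figure a pool reports also depends on ZFS metadata/slop reservation and on which unit the GUI displays.

Code:
# Rough estimate of RAIDZ1 usable capacity for 4 x 1TB drives.
# Assumptions: marketed TB = 10^12 bytes, binary TiB = 2^40 bytes.
DRIVES = 4
PARITY = 1                     # RAIDZ1 = single parity
DRIVE_TB = 1.0                 # marketed drive size (powers of 10)

tb_to_tib = 10**12 / 2**40     # ~0.909: one "1TB" drive in binary TiB

usable_tib = (DRIVES - PARITY) * DRIVE_TB * tb_to_tib
print(f"Usable before ZFS overhead and padding: {usable_tib:.2f} TiB")
# -> about 2.73 TiB; real pools report somewhat different numbers
#    after metadata/slop reservation and unit rounding

So before padding even enters the picture, only roughly 2.7-2.8 of the 4 "TB" you bought can show up as usable space.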
 
Yes, that's padding overhead. With 4 disks in a RAIDZ1 using the default ashift=12 and the default volblocksize=8K, you will lose 50% of your raw capacity (or even 60% if you care about performance) when using VMs.
To avoid losing that much space to padding overhead you would need to increase your volblocksize to at least 16K, or maybe even 64K. But this then also results in massive overhead when doing IO that is smaller than the volblocksize (with a 64K volblocksize, sync writing 1000x 8K will write 1000x 64K; the same goes for random reads, where 1000x 64K would need to be read instead of 1000x 8K).
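
To illustrate where that 50% figure comes from, here is a minimal sketch of the RAIDZ allocation math (my own illustration in Python, not code taken from ZFS): with ashift=12 everything is allocated in 4K sectors, each row of up to (disks - parity) data sectors needs parity sectors, and the total allocation is padded up to a multiple of (parity + 1) sectors.

Code:
import math

def raidz_allocated_sectors(block_bytes, disks=4, parity=1, ashift=12):
    """Sectors a RAIDZ vdev allocates for one volblocksize-sized block.

    Simplified model: one parity sector per row of (disks - parity)
    data sectors, then pad the total to a multiple of (parity + 1).
    """
    sector = 1 << ashift                       # 4K sectors with ashift=12
    data = math.ceil(block_bytes / sector)     # data sectors needed
    rows = math.ceil(data / (disks - parity))  # stripes across the vdev
    total = data + rows * parity               # add parity sectors
    total += -total % (parity + 1)             # padding sectors
    return data, total

for vbs in (8 * 1024, 16 * 1024, 64 * 1024):
    data, total = raidz_allocated_sectors(vbs)
    print(f"volblocksize={vbs // 1024:>2}K: {data} data sectors in "
          f"{total} allocated -> {data / total:.0%} of raw space is data")

With the default 8K volblocksize only 50% of each allocation is data, instead of the ~75% you would expect from 3-of-4 disks with single parity; 16K brings it to about 67% and 64K to about 73%.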
 