ZFS/Proxmox Disk Size Weirdness

poisedforflight

New Member
Jul 6, 2025
I want to apologize up front: this is my first time working with Proxmox and ZFS, and I'm sorry for how much information is below. I tend to be long-winded, and I never know what might be useful, so I'm including pretty much everything I can think of.

Setup:
* Old Dell T420
* PERC H710 flashed to IT mode
* 8x "8TB" HDDs - reporting slightly different sizes

When I attempt to create a RAIDZ2 pool from the 8 HDDs in the web GUI, I receive the pop-up warning below.

[Screenshot: GUI pop-up warning about the selected disks having different sizes]

[Screenshot: the errored pool-creation task]

Well, darn. So I google a bit and come across how to set up the pool from the CLI. First I run the same command from the errored task above, and I see this:

Code:
root@pve01:/sbin# zpool create -o 'ashift=12' hdd-01 raidz2 /dev/disk/by-id/scsi-35000cca260135b80 /dev/disk/by-id/scsi-35000cca2601abdc0 /dev/disk/by-id/scsi-35000cca26018c8d4 /dev/disk/by-id/scsi-35000cca26029da58 /dev/disk/by-id/scsi-35000cca26029e0b0 /dev/disk/by-id/scsi-35000cca2605cc5d4 /dev/disk/by-id/scsi-35000cca26029dbb0 /dev/disk/by-id/scsi-35000cca2600f19a0
invalid vdev specification
use '-f' to override the following errors:
raidz contains devices of different sizes

Sure enough, when I look at lsblk, the 4 HDDs I originally had are a slightly different size from the 4 that I bought recently.

Code:
root@pve01:~# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    0   7.2T  0 disk
sdb           8:16   0   7.2T  0 disk
sdc           8:32   0   7.2T  0 disk
sdd           8:48   0   7.2T  0 disk
sde           8:64   0   7.3T  0 disk
sdf           8:80   0   7.3T  0 disk
sdg           8:96   0   7.3T  0 disk
sdh           8:112  0   7.3T  0 disk


OK, well, that's interesting, but I thought it would have created the pool using the smallest drive size, similar to how hardware RAID works. Let's see what happens with -f. So I try it again with -f to force the issue, and it runs smoothly with no errors.
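In case it helps anyone else, the byte-exact size check and the forced create look roughly like this. I'm sketching rather than pasting my exact session, but `lsblk -b` and the `-f` override are standard options:

Code:
# exact sizes in bytes instead of the rounded 7.2T/7.3T defaults
lsblk -b -o NAME,SIZE,MODEL /dev/sd[a-h]

# same create command as the failed attempt, plus -f to override the size-mismatch check;
# as far as I understand, ZFS then sizes every raidz member to the smallest disk
zpool create -f -o ashift=12 hdd-01 raidz2 \
  /dev/disk/by-id/scsi-35000cca260135b80 \
  /dev/disk/by-id/scsi-35000cca2601abdc0 \
  /dev/disk/by-id/scsi-35000cca26018c8d4 \
  /dev/disk/by-id/scsi-35000cca26029da58 \
  /dev/disk/by-id/scsi-35000cca26029e0b0 \
  /dev/disk/by-id/scsi-35000cca2605cc5d4 \
  /dev/disk/by-id/scsi-35000cca26029dbb0 \
  /dev/disk/by-id/scsi-35000cca2600f19a0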

If I look at zpool list, I see the pool, and it shows what looks like an accurate sum of the raw disk sizes.

Code:
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
hdd-01   57.2T  1.23M  57.2T        -         -     0%     0%  1.00x    ONLINE  -
nvme-01  1.45T   564K  1.45T        -         -     0%     0%  1.00x    ONLINE  -
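
Quick math check (my own arithmetic, just dividing the reported size by the number of disks):

Code:
echo "scale=2; 57.2/8" | bc    # ≈ 7.15 TiB per member, i.e. every member counted at roughly the smaller 7.2T size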

If I do a df -h, I see what looks like the accurate size for a RAIDZ2 of these 8 disks:

Code:
root@pve01:~# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev               95G     0   95G   0% /dev
tmpfs              19G  2.0M   19G   1% /run
rpool/ROOT/pve-1  114G  1.9G  112G   2% /
tmpfs              95G   46M   95G   1% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
nvme-01           1.5T  128K  1.5T   1% /nvme-01
rpool             112G  128K  112G   1% /rpool
rpool/var-lib-vz  112G  128K  112G   1% /var/lib/vz
rpool/ROOT        112G  128K  112G   1% /rpool/ROOT
rpool/data        112G  128K  112G   1% /rpool/data
/dev/fuse         128M   16K  128M   1% /etc/pve
tmpfs              19G     0   19G   0% /run/user/0
hdd-01             41T  256K   41T   1% /hdd-01

BUT, when I go back to the WebGUI, the disk size shows ~63TB, which doesn't match either of the numbers above:

[Screenshot: WebGUI ZFS view showing the pool at ~63TB]


Can anyone help me understand what all is going on here, and how I can get the WebGUI to show the accurate size of the pool?
 
Hi poisedforflight,

the view at Datacenter->$YOURNODENAME->ZFS shows the raw storage size, just like `zpool list`, but in TB (terabytes) instead of TiB (tebibytes) [0] [1].

The same goes for the ZFS filesystem size, which you can retrieve via `zfs list` and match against the Filesystem Size shown at Datacenter->$YOURNODENAME->$ZFS_STORAGE_NAME.
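
For example (just the commands, not your actual output):

Code:
# raw (gross) size across all eight disks, before parity -- what the ZFS panel shows
zpool list hdd-01
# usable (net) space after RAIDZ2 parity, as the filesystem sees it -- what the storage view and df show
zfs list hdd-01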

So the observed discrepancies come from the GUI using decimal prefixes, while the ZFS tools use binary prefixes [2] for displaying storage size.
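
As a quick sanity check you can do the conversion yourself; this is plain arithmetic, nothing ZFS-specific:

Code:
# 57.2 TiB expressed in decimal TB (what the GUI displays)
echo "57.2 * 1024^4 / 1000^4" | bc -l    # ≈ 62.9, i.e. the ~63TB you see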

I hope this is helpful!

[0] https://duckduckgo.com/?q=57.2+TiB+in+TB&ia=web
[1] https://duckduckgo.com/?q=1.45TiB+in+TB&ia=web
[2] https://en.wikipedia.org/wiki/Binary_prefix
 
Also keep in mind that a ZFS pool for anything but mirrors will always show gross/raw values, not net values. That is different from any other hardware RAID implementation I've ever seen.
 
Thank you both for the info. I had assumed that, since the mirrored pool showed the net value, the RAIDZ2 pool would show the same.
 
Also keep in mind that a ZFS pool for anything but mirrors will always show gross/raw values, not net values. That is different from any other hardware RAID implementation I've ever seen.
The more I think about this, the less I understand it. It seems like a great way for someone to over-allocate without realizing they are doing so.
 
The more I think about this, the less I understand it. It seems like a great way for someone to over-allocate without realizing they are doing so.
Yes, but it's the only way to give real numbers. RAIDZ is complicated, and the usable space changes constantly with the data you store. If you store small volblocksizes, for example, you get a lot of waste from padding overhead with ashift=12, while with large recordsizes you can store a lot without padding overhead. This cannot be known up front, so the tools don't even try and just report the raw space values.
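
A rough illustration with my own numbers, assuming an 8-disk RAIDZ2 with ashift=12 (4K sectors) and the usual rule that raidz allocations are padded up to a multiple of parity+1 = 3 sectors:

Code:
# 8K zvol block:  2 data + 2 parity = 4 sectors, padded to 6  -> 24K on disk for 8K of data
echo "scale=2; 24/8" | bc      # 3.00x overhead
# 128K record:   32 data + 12 parity = 44 sectors, padded to 45 -> 180K on disk for 128K of data
echo "scale=2; 180/128" | bc   # 1.40x, close to the nominal 8/6 ratio

# check what your datasets actually use (volblocksize is a per-zvol property)
zfs get recordsize hdd-01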
 
Yes, but it's the only way to give real numbers. RAIDZ is complicated, and the usable space changes constantly with the data you store. If you store small volblocksizes, for example, you get a lot of waste from padding overhead with ashift=12, while with large recordsizes you can store a lot without padding overhead. This cannot be known up front, so the tools don't even try and just report the raw space values.
I think I see what you're saying. I'm new to ZFS and trying to learn about the things you are mentioning.
 
Here are other threads about the same topic: