[SOLVED] ZFS storage total size doesn't match.

bdbash

New Member
Aug 24, 2024
I have six 512GB SSD drives, configured as a single zpool with two RAIDZ1 vdevs to balance write speed and redundancy.
Naively I should get about 2TB of ZFS storage on my PVE node; I named the storage "Tank".
But the zfs list command tells me I've got about 1.79TB of total space in Tank.
The ZFS page tells me I've got 3.02TB of total space in Tank.
The zpool list command tells me I've got 2.78TB of total space in Tank.
This is so confusing. I can't tell which is correct or wrong.

I've googled this problem for a week, but found neither a solution nor an answer.
So how can I tell which total space is correct?
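For context, the pool was created roughly like this (the device names below are placeholders, not my actual disks):

Code:
# two RAIDZ1 vdevs of three 512GB SSDs each (placeholder device names)
$ zpool create Tank \
    raidz1 /dev/sda /dev/sdb /dev/sdc \
    raidz1 /dev/sdd /dev/sde /dev/sdf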
 

Attachments

  • Tank_storage (3).png
  • Tank_storage (5).png
  • Tank_storage 1.97T.png
  • Tank_storage 3.02T.png
  • Tank_storage_RAIDZ1X2.png
The ZFS page shows you the installed gross (raw) space, so 6*512GB, roughly 3TB in total.
zpool list shows you the installed net space, which is about gross*0.9, so 6*512*0.9 ≈ 2.7TB; but depending on your pool design (mirror/raidz*) the usage grows more slowly or more quickly, as more or less parity information is written along with your data.
zfs list shows you your effective capacity after the pool-design decision, but split into "used until now" and "available from now on", so your pool design gives you 136G+1660G ≈ 1.8TB net, which again is your expected 2TB gross*0.9.
zfs list even shows you the used space for the two categories: datasets (mounted ZFS file storage) and zvols (block storage, which is not mounted).
But depending on your pool/dataset decision to use compression, which is usually enabled and then usually lz4, your real available space is more than on a standard uncompressed filesystem like ext4/xfs, so you will be able to fit roughly 2.5TB of data into your 1.8TB of net space (e.g. when you cp from ext4/xfs into your ZFS dataset /Tank with lz4 on). :)
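If you want to see all three numbers yourself, something like this should line up with the explanation above (Tank being your pool name; the exact figures will of course differ from my rough math):

Code:
# raw vdev capacity, including raidz parity (the ~2.7-2.8T figure)
$ zpool list -v Tank
# usable capacity after parity, split into USED and AVAIL (the ~1.8T figure)
$ zfs list -o name,used,avail,refer Tank
# check whether compression is on and how much it actually saves
$ zfs get compression,compressratio Tank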
 
Now I get it; the gross & net analogy is quite easy to understand.
Thank you very much!
 
Also keep in mind that the zpool statement above ONLY applies to raidz vdevs. If you have mirrored vdevs, zpool will show the net space, and all the outputs align more closely than they do in a raidz setup:

4 virtual disks with 1 GiB each:
Code:
$ zpool create test-mirror mirror /zpool/temp/disk1 /zpool/temp/disk2
$ zpool list -v test-mirror
NAME                    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
test-mirror             960M   129K   960M        -         -     0%     0%  1.00x    ONLINE  -
  mirror                960M   129K   960M        -         -     0%  0.01%      -  ONLINE
    /zpool/temp/disk1      -      -      -        -         -      -      -      -  ONLINE
    /zpool/temp/disk2      -      -      -        -         -      -      -      -  ONLINE

$ zpool create test-raidz1 raidz1 /zpool/temp/disk1 /zpool/temp/disk2 /zpool/temp/disk3 /zpool/temp/disk4
$ zpool list -v test-raidz1
NAME                    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
test-raidz1            3.75G   191K  3.75G        -         -     0%     0%  1.00x    ONLINE  -
  raidz1               3.75G   191K  3.75G        -         -     0%  0.00%      -  ONLINE
    /zpool/temp/disk1      -      -      -        -         -      -      -      -  ONLINE
    /zpool/temp/disk2      -      -      -        -         -      -      -      -  ONLINE
    /zpool/temp/disk3      -      -      -        -         -      -      -      -  ONLINE
    /zpool/temp/disk4      -      -      -        -         -      -      -      -  ONLINE
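If you then run zfs list on those same test pools, you get the usable (post-parity) space directly; I haven't pasted that output here, but the command would be:

Code:
$ zfs list -o name,used,avail test-mirror test-raidz1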
 
That almost looks like a "zpool list" bug, since you need "100% parity" in a 2-disk mirror ... but that's a luxury problem :)
 
The plot thickens!
 
