ZFS pool reports wrong space usage on disk

antipiot

Hello everybody!
I'm very new to Proxmox and I'm facing something strange:

I built a ZFS RAIDZ1 (RAID5-like) pool from four 4 TB drives, for a theoretical usable size of 12 TB.

Once formatted, there is 10.21 TiB usable for data.
I'm using thick provisioning, with no snapshots running.
I have a VM with a 7.15 TB virtual hard drive in raw format (as shown in my RAID contents).

Doing the math, I should still have 10.21 - 7.15 = 3.06 TiB left, but:
The summary of my RAID tells me I have:
Usage 94.38% (9.63 TiB of 10.21 TiB)

The report shows my VM 100 disk using 9.63 TB: how?
How can I reclaim all this wasted space?

Code:
root@pve1:~# zfs list -o name,avail,used,refer,lused,lrefer,mountpoint,compress,compressratio
NAME                 AVAIL   USED  REFER  LUSED  LREFER  MOUNTPOINT  COMPRESS  RATIO
RAID5                 587G  9.63T   140K  6.63T     40K  /RAID5            on  1.00x
RAID5/vm-100-disk-0   587G  9.63T  9.63T  6.63T   6.63T  -                 on  1.00x

Code:
RAID5/vm-100-disk-0  type                  volume                 -
RAID5/vm-100-disk-0  creation              Fri Jan 25 19:21 2019  -
RAID5/vm-100-disk-0  used                  9.63T                  -
RAID5/vm-100-disk-0  available             587G                   -
RAID5/vm-100-disk-0  referenced            9.63T                  -
RAID5/vm-100-disk-0  compressratio         1.00x                  -
RAID5/vm-100-disk-0  reservation           none                   default
RAID5/vm-100-disk-0  volsize               7.15T                  local

It looks like I have the same issue as this thread:
https://forum.proxmox.com/threads/zfs-doesnt-update-free-space-correctly.44568/
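For anyone checking the same thing, the relevant space-accounting properties can be pulled in one go (a sketch, using the dataset name from the output above):

Code:
zfs get used,referenced,logicalused,volsize,volblocksize,refreservation RAID5/vm-100-disk-0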

Thanks in advance for your help :)
 
How did you build this pool, and why did you not enable compression? Please post code output in CODE tags for better readability. Please also post zpool status -v.
 
How did you build this pool, and why did you not enable compression? Please post code output in CODE tags for better readability. Please also post zpool status -v.
Thanks for your help!
In the meantime, I destroyed the RAID and the virtual disk and started over.
Everything looks fine for now.
I did not enable compression since the default setting is "none". Do you suggest enabling it? If yes, which one?
 
Hello again!
My problem is back :-(
I really don't get what's happening:

Code:
NAME                 AVAIL   USED  REFER  LUSED  LREFER  MOUNTPOINT  COMPRESS  RATIO
DATAS                 612G  9.61T   140K  6.61T     40K  /DATAS            on  1.00x
DATAS/vm-100-disk-0   612G  9.61T  9.61T  6.61T   6.61T  -                 on  1.00x

Code:
DATAS/vm-100-disk-0  type                  volume                 -
DATAS/vm-100-disk-0  creation              Sun Jan 27 15:31 2019  -
DATAS/vm-100-disk-0  used                  9.61T                  -
DATAS/vm-100-disk-0  available             612G                   -
DATAS/vm-100-disk-0  referenced            9.61T                  -
DATAS/vm-100-disk-0  compressratio         1.00x                  -
DATAS/vm-100-disk-0  reservation           none                   default
DATAS/vm-100-disk-0  volsize               7.32T                  local
DATAS/vm-100-disk-0  volblocksize          8K                     default
DATAS/vm-100-disk-0  checksum              on                     default
DATAS/vm-100-disk-0  compression           on                     inherited from DATAS
DATAS/vm-100-disk-0  readonly              off                    default
DATAS/vm-100-disk-0  createtxg             19                     -
DATAS/vm-100-disk-0  copies                1                      default
DATAS/vm-100-disk-0  refreservation        7.55T                  local
DATAS/vm-100-disk-0  guid                  8485366488526016711    -
DATAS/vm-100-disk-0  primarycache          all                    default
DATAS/vm-100-disk-0  secondarycache        all                    default
DATAS/vm-100-disk-0  usedbysnapshots       0B                     -
DATAS/vm-100-disk-0  usedbydataset         9.61T                  -
DATAS/vm-100-disk-0  usedbychildren        0B                     -
DATAS/vm-100-disk-0  usedbyrefreservation  0B                     -
DATAS/vm-100-disk-0  logbias               latency                default
DATAS/vm-100-disk-0  dedup                 off                    default
DATAS/vm-100-disk-0  mlslabel              none                   default
DATAS/vm-100-disk-0  sync                  standard               default
DATAS/vm-100-disk-0  refcompressratio      1.00x                  -
DATAS/vm-100-disk-0  written               9.61T                  -
DATAS/vm-100-disk-0  logicalused           6.61T                  -
DATAS/vm-100-disk-0  logicalreferenced     6.61T                  -
DATAS/vm-100-disk-0  volmode               default                default
DATAS/vm-100-disk-0  snapshot_limit        none                   default
DATAS/vm-100-disk-0  snapshot_count        none                   default
DATAS/vm-100-disk-0  snapdev               hidden                 default
DATAS/vm-100-disk-0  context               none                   default
DATAS/vm-100-disk-0  fscontext             none                   default
DATAS/vm-100-disk-0  defcontext            none                   default
DATAS/vm-100-disk-0  rootcontext           none                   default
DATAS/vm-100-disk-0  redundant_metadata    all                    default
 
How did you build this pool, and why did you not enable compression? Please post code output in CODE tags for better readability. Please also post zpool status -v.
Here's what I get:

Code:
root@pve1:~# zpool status -v
  pool: DATAS
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        DATAS       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
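Since raidz space accounting also depends on the pool's sector size, it may be worth checking the ashift value too (a quick sketch, assuming only the pool name above):

Code:
zpool get ashift DATAS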

It may be related to this as well:
https://forum.proxmox.com/threads/zfs-eating-more-poolspace-than-allocated.37860/

Here is what the DATAS report looks like; obviously, I did NOTHING at this very moment :)

https://imgur.com/a/oTfeO1x
 
The report shows my VM 100 disk using 9.63 TB: how?
How can I reclaim all this wasted space?

This is not wasted space! It is only about the block size of the block device.

Let's take a simple example. Say you want to write 8K in your VM (volblocksize=8K). This 8K will be split across the 3 data HDDs (raidz1). So 8K / 3 ≈ 2.6K => on each of the 3 HDDs, ZFS will need to write 2.6K. BUT the minimum write is 4K (ashift=12) => the total space used = 4K * 3 = 12K.

BINGO... so we have some "wasted space" (4K), using 12K instead of 8K for each block that is written in the VM. So in your case, instead of the 7.15 TB size you get 9.63 TB ;) => bad luck ;)

What can you do? You can use a bigger volblocksize (n x 4K x 3, n = integer)! For example, if you have volblocksize = 12K, then for each 12K to be written => 12K / 3 = 4K => 1 block of 4K (= ashift) for each data disk => 0 "wasted space"! => how lucky I am ;)
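That rounding is easy to reproduce; a minimal sketch of the same arithmetic (assuming 3 data disks and ashift=12, as above):

Code:
# Per-disk writes are rounded up to the 4K sector size (ashift=12)
logical=8192   # one 8K volblock
ndata=3        # raidz1 on 4 disks = 3 data disks per stripe
sector=4096    # 2^12 bytes
per_disk=$(( (logical / ndata + sector - 1) / sector * sector ))
echo $(( per_disk * ndata ))   # prints 12288: 12K allocated for an 8K write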

Good luck!
 
This is not wasted space! It is only about the block size of the block device.

Let's take a simple example. Say you want to write 8K in your VM (volblocksize=8K). This 8K will be split across the 3 data HDDs (raidz1). So 8K / 3 ≈ 2.6K => on each of the 3 HDDs, ZFS will need to write 2.6K. BUT the minimum write is 4K (ashift=12) => the total space used = 4K * 3 = 12K.

BINGO... so we have some "wasted space" (4K), using 12K instead of 8K for each block that is written in the VM. So in your case, instead of the 7.15 TB size you get 9.63 TB ;) => bad luck ;)

What can you do? You can use a bigger volblocksize (n x 4K x 3, n = integer)! For example, if you have volblocksize = 12K, then for each 12K to be written => 12K / 3 = 4K => 1 block of 4K (= ashift) for each data disk => 0 "wasted space"! => how lucky I am ;)

Good luck!
Thanks for your answer and the clear explanation!

Did I do something wrong when creating my RAID and raw drive to end up in such a situation?

When creating a new raw disk for a VM, I don't have any option to set this: is it located elsewhere?

How does this behave if I add a new disk to the pool?

Again, many thanks for your clear explanations :)
 
Hi again ;)

This setting is at Datacenter -> Storage -> your-zfs-storage-name -> Properties -> Block Size!

In my own case I use DIFFERENT ZFS storage datasets with different block sizes (16K is my minimum, because I mostly use ZFS mirrors). So, depending on what my VMs need, I can use many vDisks with different block sizes.
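The same thing can be done from the CLI (a sketch; the storage name "DATAS" is assumed from this thread, and the new block size only applies to disks created afterwards):

Code:
pvesm set DATAS --blocksize 16k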
And sorry for my bad English and humor ;)
 
Thanks for your quick answer.
While reading up on this, I came across a post which suggests, if I got it right, setting the data block size to 128 KB.
Did I misread something?

I guess I cannot change the block size on the go :)
 
Thanks for your quick answer.
While reading up on this, I came across a post which suggests, if I got it right, setting the data block size to 128 KB.
Did I misread something?

I guess I cannot change the block size on the go :)

You can create a new volume with the new volblocksize, dd the old one onto the new device, remove the old one, and rename the new one to the old name - all while the VM is offline.
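A minimal sketch of that procedure (dataset names and the 7.32T size are assumed from this thread; VM 100 must stay powered off the whole time):

Code:
# Create the new zvol with the desired volblocksize, copy, then swap names
zfs create -V 7.32T -o volblocksize=16k DATAS/vm-100-disk-1
dd if=/dev/zvol/DATAS/vm-100-disk-0 of=/dev/zvol/DATAS/vm-100-disk-1 bs=1M status=progress
zfs destroy DATAS/vm-100-disk-0
zfs rename DATAS/vm-100-disk-1 DATAS/vm-100-disk-0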
 
Let's take a simple example. Say you want to write 8K in your VM (volblocksize=8K). This 8K will be split across the 3 data HDDs (raidz1). So 8K / 3 ≈ 2.6K => on each of the 3 HDDs, ZFS will need to write 2.6K. BUT the minimum write is 4K (ashift=12) => the total space used = 4K * 3 = 12K.

BINGO... so we have some "wasted space" (4K), using 12K instead of 8K for each block that is written in the VM. So in your case, instead of the 7.15 TB size you get 9.63 TB ;) => bad luck ;)

What can you do? You can use a bigger volblocksize (n x 4K x 3, n = integer)! For example, if you have volblocksize = 12K, then for each 12K to be written => 12K / 3 = 4K => 1 block of 4K (= ashift) for each data disk => 0 "wasted space"! => how lucky I am ;)

THANK YOU! I wasn't aware of that. I also have similar problems with increased usage after a send/receive from a single-disk pool to a raidz2 backup pool. I'll investigate whether you've just described my problem or not.
 
While reading up on this, I came across a post which suggests, if I got it right, setting the data block size to 128 KB.
Did I misread something?

NO, it is not that simple. It depends a lot on the data usage in your VM. For example, if you have a lot of big files... yes, then it is better to use a bigger block size (for example, a backup system or a video system). But if you have many small files, then it is better to use a small block size (a mail server is a good example). To make it more complicated, ZFS compression also has an impact (bigger blocks are more compressible than small ones).
Anyway, a good value to start with is 32-64K, but you must test to see what is OK for your own data usage; see the sketch below.
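One way to run such a test (a sketch; the test zvol names and sizes here are made up for illustration):

Code:
# Create two small test zvols with different block sizes...
zfs create -V 10G -o volblocksize=32k DATAS/test-32k
zfs create -V 10G -o volblocksize=64k DATAS/test-64k
# ...copy a sample of your real data onto each, then compare:
zfs get used,logicalused,compressratio DATAS/test-32k DATAS/test-64k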

I guess I cannot change the block size on the go :)

No. But you can try ;) For example, you can create a vzdump backup of your VM (without any compression) and then restore this backup onto the desired dataset (with the desired block size).
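Roughly like this (a sketch; the storage names are assumed and the dump file name is a placeholder):

Code:
vzdump 100 --compress 0 --storage local
# After changing the storage's Block Size, restore over the old VM:
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma 100 --storage DATAS --force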

Good luck!
 
Yep! Many thanks :)

I was not able to create the disk on the storage using a 12k block size:

Code:
zfs error: cannot create 'DATAS/vm-100-disk-0': 'volblocksize' must be power of 2 from 512B to 1M at /usr/share/perl5/PVE/API2/Qemu.pm line 1253. (500)

Trying now with 16k: looks fine ATM - will report back.

First report:
The compression ratio is now above 1.00x, which never happened before:

Code:
NAME                 AVAIL   USED  REFER  LUSED  LREFER  MOUNTPOINT  COMPRESS  RATIO
DATAS                10.0T   183G   140K   174G     40K  /DATAS            on  1.03x
DATAS/vm-100-disk-0  10.0T   183G   183G   174G    174G  -                 on  1.03x

Sizes seem to match the host usage report.

Second report:
A 16k block size still used more real space than logical space.
I stopped the transfer before the end, as I could already see the "wasted" space growing.

Now trying with 32K, which seems fine for an ext4-formatted guest drive.
The main data is movies.
 
