Local storage reporting is way wrong

Feb 5, 2022
Hello,

I am rather new to Proxmox 8; I used versions 6 & 7 a few years ago.
I have a "strange issue": I have just installed a new 8.2 server and used 2x 256GB disks in a mirror.

1) My "local" & "local-zfs" BOTH report almost 250GB.
2) My "local" has a total of 4.5GB in backups & ISOs.
3) My "local-zfs" has a total of 51.5GB worth of vdisks but reports only 2.79GB in use.
4) My ZFS "rpool" reports a total of 253GB, out of which 9.25GB are allocated.

So either there is something wrong with the reporting, or I am misinterpreting things here.

Can anybody provide a link or something so that I can clear things up?
 
Mind sharing the output of zfs list inside of [code][/code] tags?
 
Of course......

Code:
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     8.61G   220G   104K  /rpool
rpool/ROOT                2.07G   220G    96K  /rpool/ROOT
rpool/ROOT/pve-1          2.07G   220G  2.07G  /
rpool/data                2.60G   220G    96K  /rpool/data
rpool/data/vm-100-disk-0  2.01G   220G  2.01G  -
rpool/data/vm-101-disk-0    56K   220G    56K  -
rpool/data/vm-101-disk-1   599M   220G   599M  -
rpool/var-lib-vz          3.94G   220G  3.94G  /var/lib/vz

It's one of the first things I did..... it seems like there is some sort of sharing, because otherwise it makes no sense.
 
please edit it and place the output inside of [code][/code] or use the formatting buttons of the editor (Code), as otherwise the output is barely readable :)
 
Done that already..... just refresh
timing ;)

1) My "local" & "local-zfs" BOTH report almost 250GB.
What you can see is that all datasets in a ZFS pool share the free space (AVAIL), unless you set reservations.
Therefore, for each storage, the total space is usually calculated as currently used + free.
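
To make that concrete with the zfs list output above ("local" is backed by rpool/var-lib-vz, "local-zfs" by rpool/data):

Code:
zfs list -o name,used,avail rpool/var-lib-vz rpool/data

NAME               USED  AVAIL
rpool/var-lib-vz  3.94G   220G
rpool/data        2.60G   220G

# total("local")     ~ 3.94G + 220G   -> both storages report almost
# total("local-zfs") ~ 2.60G + 220G   -> the full pool size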

2) My "local" has a total of 4.5GB in backups & ISOs.
Actual usage on ZFS is a bit less, most likely due to compression.
Check zfs get compressratio rpool/var-lib-vz; it will most likely be a bit over 1.
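For example (the ratio shown here is only illustrative; yours will differ):

Code:
zfs get compressratio rpool/var-lib-vz

NAME              PROPERTY       VALUE  SOURCE
rpool/var-lib-vz  compressratio  1.08x  -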
3) My "local-zfs" has a total of 51.5GB worth of vdisks but reports only 2.79GB in use.
By default, the local-zfs storage is configured as "thin provision", meaning that while the disk images can be larger, there is no reservation on the dataset backing the disk image. So you see just the actually used space.
If there were a reservation, the USED and REFER columns would differ: USED would include the reserved space, while REFER would still only show the actually written data.
For example, disable the "thin provision" checkbox in the storage config and create a new VM disk, then compare the dataset created for it; see the sketch below. zfs get all rpool/data/... will list all properties for that dataset.
It will have a reservation, and the overall free space in the pool will be quite a bit less.
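
A sketch of what that comparison could look like (the second dataset name and the volsize are hypothetical, values illustrative):

Code:
# thin-provisioned zvol: no refreservation, USED grows only with actual writes
zfs get volsize,refreservation,used rpool/data/vm-100-disk-0

NAME                      PROPERTY        VALUE  SOURCE
rpool/data/vm-100-disk-0  volsize         32G    local
rpool/data/vm-100-disk-0  refreservation  none   default
rpool/data/vm-100-disk-0  used            2.01G  -

# thick-provisioned zvol: the refreservation claims the full volsize
# (plus some metadata overhead) from the pool's free space right away
zfs get volsize,refreservation rpool/data/vm-102-disk-0

NAME                      PROPERTY        VALUE  SOURCE
rpool/data/vm-102-disk-0  volsize         32G    local
rpool/data/vm-102-disk-0  refreservation  33.0G  local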

4) My ZFS "rpool" reports a total of 253GB, out of which 9.25GB are allocated.
This is the sum of all (child) datasets combined.
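You can verify that with the zfs list output above (the 9.25GB from the GUI is most likely the same amount expressed in decimal GB instead of GiB):

Code:
  rpool/ROOT        2.07G
+ rpool/data        2.60G
+ rpool/var-lib-vz  3.94G
-------------------------
  rpool (USED)      8.61G   # 8.61 GiB ~ 9.25 GB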

I hope this explains what you see. ZFS is quite flexible in its space allocation, but that can lead to some unexpected behavior. Enjoy the changing graphs for the local-zfs and local storages when one or the other allocates quite a bit more space. ;-)
 
Thank you very much for the detailed explanation. I thought as much, but everything seemed quite strange, so I needed some confirmation. I have been playing around with ZFS since I use TrueNAS SCALE as well..... but there the sizes were "fixed", since I was controlling the creation.

One more question, if I may..... You talked about reservations, so my understanding is that I can set the max size for every storage type and thus get a more accurate report. Where may I set that reservation?
 
If the ZFS storage is not thin provisioned, the datasets for the disk images will be created with a reservation.

I suggest you read up on man zfsprops to see what each property does. The reservation and/or quota property might be of interest. Be careful though if you modify the system; there could be unintended side effects, as we do not test such setups.
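
As a rough sketch of how those properties are set (the values and the choice of datasets are just an example, not a tested recommendation):

Code:
# cap how much space the "local-zfs" storage (rpool/data) may use in total
zfs set quota=100G rpool/data

# guarantee a minimum amount of space for the "local" storage
zfs set reservation=20G rpool/var-lib-vz

# verify
zfs get quota,reservation rpool/data rpool/var-lib-vz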
 
Thanks a lot. I already found the non-thin-provisioned option and realized that there is no size to set; instead, the disk images get the space they need. I am used to hypervisors with a fixed volume size (at least at the OS level) and setting thin / thick per disk image.
It seems it's back to school for me..... thank you again.
 

FWIW, the useful command to make sense of all this (especially once snapshots come into play, or on RAIDZ) is: zfs list -o space
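
For reference, it breaks USED down by where the space actually goes (values taken from the output above; the snapshot column is illustrative, as none were shown):

Code:
zfs list -o space rpool/data

NAME        AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool/data   220G  2.60G        0B     96K             0B      2.60G

# USEDSNAP      - space held by snapshots of this dataset
# USEDDS        - space used by the dataset itself
# USEDREFRESERV - space claimed by a refreservation
# USEDCHILD     - space used by child datasets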
 
