Data storage shows higher used capacity than expected?

fabienfs

Member
Jun 6, 2021
Hello,

I have a 5TB ZFS RAID that I use as the "data" storage for several VMs.
When I go to data > Summary, it says 8.58% is used: 449GB used of 5TB (see attached screenshot).



Yet I am not using 449GB.
When I look at my VM disks, the total is 276GB, which is correct.

Where is the 173GB difference?
Note also that the Proxmox system itself is on a separate dedicated hard drive.


thanks
 
Hi,

Do you have snapshots on the system that may also be contributing to the usage?

You could also compare the output of zpool list and zfs list to see if there is much discrepancy there, as ZFS reserves a certain amount of storage space for itself. However, I don't think it should be contributing this much to the difference.
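For example, something along these lines should show whether snapshots are holding space and how the two views compare (the pool name "data" is taken from your screenshot; adjust if yours differs):

Code:
# list all snapshots in the pool and how much space each one holds
zfs list -t snapshot -o name,used -r data

# compare pool-level vs. dataset-level accounting
zpool list data
zfs list -r data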
 
Let me guess... you are using a raidz1/2/3 pool and didn't increase the volblocksize from 8K to something higher? In that case you are wasting space because of bad padding. The solution would be to increase the "Block size" for that pool (look at these 6 spreadsheets to get an idea of what volblocksize to use), then delete all VMs and recreate them (importing from backup should be enough).

If you are not using raidz1/2/3, your problem might be that you are not using discard to free up deleted blocks. That needs to be configured inside every single VM: you need to use a virtual storage controller that supports TRIM/discard (like VirtIO SCSI) and the discard checkbox must be checked.
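For a Linux guest, something like this should do it (VMID 120 and the storage/volume names are only taken from your listing as an example; adjust to your setup):

Code:
# on the Proxmox host: use the VirtIO SCSI controller and re-attach the disk with discard enabled
# (assuming the disk is, or should be, attached as scsi0)
qm set 120 --scsihw virtio-scsi-pci
qm set 120 --scsi0 data:vm-120-disk-0,discard=on

# inside the guest, after a reboot: trim unused blocks so ZFS can reclaim them
fstrim -av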
 
Do you have snapshots on the system that may also be contributing to the usage?
No

You could also compare the output of zpool list and zfs list to see if there is much discrepancy there, as ZFS reserves a certain amount of storage space for itself. However, I don't think it should be contributing this much to the difference.

Result:

Code:
# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data   7.27T   114G  7.15T        -         -     0%     1%  1.00x    ONLINE  -
rpool   149G  16.5G   133G        -         -     0%    11%  1.00x    ONLINE  -

# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
data                   450G  4.67T   140K  /data
data/base-100-disk-0   129G  4.78T  19.0G  -
data/base-150-disk-0   116G  4.77T  20.5G  -
data/vm-120-disk-0     110G  4.76T  23.1G  -
data/vm-130-disk-0    95.1G  4.75T  20.5G  -
rpool                 16.5G   128G   104K  /rpool
rpool/ROOT            16.4G   128G    96K  /rpool/ROOT
rpool/ROOT/pve-1      16.4G   128G  16.4G  /
rpool/data              96K   128G    96K  /rpool/data

With the zfs list command, we can see that my VM disks each use tens of GB more than what I measured inside the VMs.

Let me guess... you are using a raidz1/2/3 pool and didn't increase the volblocksize from 8K to something higher? In that case you are wasting space because of bad padding. The solution would be to increase the "Block size" for that pool (look at these 6 spreadsheets to get an idea of what volblocksize to use), then delete all VMs and recreate them (importing from backup should be enough).

If you are not using raidz1/2/3, your problem might be that you are not using discard to free up deleted blocks. That needs to be configured inside every single VM: you need to use a virtual storage controller that supports TRIM/discard (like VirtIO SCSI) and the discard checkbox must be checked.

I guess that must be the problem.
How can I check if they use a block size of 8K?
What is the ideal size? And can I change the 8K?
I don't see where the block size is defined.
Here is what I defined when creating the VM:


thanks
 
How can I check if they use a block size of 8K?
Run for example zfs get volblocksize data/base-100-disk-0. If you haven't changed it (Datacenter -> Storage -> YourPoolName -> Edit -> "Block size") it should be 8K. The volblocksize can't be changed later because it is only set when a zvol is created. So after changing it you need to destroy all virtual disks and recreate them (restoring a vzdump backup will create a new copy too, so that should work).
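Roughly, the procedure per VM could look like this (VMID 120 and the storage names are placeholders; adjust to your setup):

Code:
# 1. back up the VM to some backup storage
vzdump 120 --storage backup --mode stop

# 2. change Datacenter -> Storage -> data -> "Block size" in the GUI

# 3. restore over the old VM; the restored zvols are created with the new volblocksize
qmrestore /path/to/vzdump-qemu-120-<timestamp>.vma.zst 120 --force --storage data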
What is the ideal size?
That depends on your storage setup. How many drives? What kind of raid (raidz1/raidz2/raidz3)? What ashift was used when creating the pool?
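If you're not sure about the ashift, something like this should show it (12 means 4K sectors):

Code:
zpool get ashift data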
I don't see where the block size is defined.
Datacenter -> Storage -> YourPoolName -> Edit -> "Block size"
 
Run for example zfs get volblocksize data/base-100-disk-0. If you haven't changed it (Datacenter -> Storage -> YourPoolName -> Edit -> "Block size") it should be 8K.
Code:
zfs get volblocksize data/base-100-disk-0
NAME                  PROPERTY      VALUE     SOURCE
data/base-100-disk-0  volblocksize  8K        default
Exactly, it is 8K.

That depends on your storage setup. How many drives? What kind of raid (raidz1/raidz2/raidz3)? What ashift was used when creating the pool?
It's a raidz1 with 4 SSDs of 2TB each.
I think I used the default ashift of 12.


Datacenter -> Storage -> YourPoolName -> Edit -> "Block size"
Based on my setup and what I showed above, do you recommend a block size for me?
If I want to change it, do I just go to Datacenter > Storage > Pool name > Edit > Block size and change it there?
Then, how do I destroy the VMs' virtual disks and recreate them? Is that a complicated step?

One last thing: why does an 8K block size waste so many tens of GB? Would I no longer have that with a bigger block size?

thank you very much
 
You can look at this table.

With an 8K volblocksize (= 2 sectors at ashift 12) you will lose 50% of your total raw capacity due to parity and padding. With 12K, 24K, 36K, 48K (= 3, 6, 9, 12 sectors) and so on you would only lose 25% of your total raw capacity. So I would test it with 12K.
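To spell out the numbers for your 4-disk raidz1 at ashift 12 (4K sectors): an 8K block is 2 data sectors + 1 parity sector = 3 sectors, which ZFS pads to 4 because raidz allocations are rounded up to a multiple of parity + 1 = 2. So only 2 of every 4 sectors hold data and 50% of the raw space is lost. A 12K block is 3 data sectors + 1 parity sector = 4 sectors with no padding needed, so 3 of 4 sectors hold data and only 25% is lost, which is just the normal raidz1 parity overhead on 4 disks.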

Raidz1/2/3 is, by the way, bad for latency and small random reads/writes, and only useful if you have mostly big sequential reads/writes where you just want bandwidth and more capacity. A striped mirror would be better for a workload like VM storage.
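If you ever rebuild the pool, a striped mirror of your 4 SSDs would look roughly like this (device names are placeholders, and creating a new pool destroys whatever is on those disks, so back everything up first):

Code:
zpool create -o ashift=12 data \
    mirror /dev/disk/by-id/ssd-1 /dev/disk/by-id/ssd-2 \
    mirror /dev/disk/by-id/ssd-3 /dev/disk/by-id/ssd-4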
 
