[SOLVED] ZFS Size Difference

masteryoda

I have a ZFS pool with raidz3-0. There are 7 disks in the pool, each 1 TB.

In the Node -> ZFS screen I see that the pool size is 6.99 TB, with 5.31 TB shown as free and 1.68 TB allocated.

But the Storage -> zfs-pool summary screen shows usage as 2.73 TB of 3.86 TB.

(please see attached screen shots)

Can someone please explain this difference? (Screenshots: zfs-issue-01.png, zfs-issue-02.png)
 
One is pool-level information for a ZFS pool called "local-zfs", the other is storage-level information for a storage called "local-zfs" (which might or might not correspond to that pool, or to a dataset on it). In general, pool-level and filesystem-level space usage differ in ZFS (see the respective man pages, "zpoolprops" vs "zfsprops"), but things like quotas/reservations might also play a role.
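For anyone who wants to compare the two views directly, here is a minimal sketch (assuming the pool really is named "local-zfs", like the storage; adjust the name if yours differs):

  # pool-level view: raw capacity with parity counted (what Node -> ZFS shows)
  zpool list local-zfs
  zpool get size,allocated,free local-zfs

  # dataset-level view: usable space after parity, plus quotas/reservations
  zfs list local-zfs
  zfs get used,available,quota,refreservation local-zfs

The numbers in the screenshots are consistent with that split: 7 x 1 TB is roughly the 6.99 TB shown at the pool level, while raidz3 keeps three disks' worth of parity, leaving about 4 TB (reported as 3.86 TB after overhead) at the dataset level. The storage level can also show more "used" (2.73 TB) than the pool shows "allocated" (1.68 TB), typically because the refreservation of thick-provisioned zvols counts as used at the dataset level before any data is actually written.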
 
Thanks, so I need to watch the storage-level info rather than the pool-level info.
 
Yes, in almost all cases the filesystem-level information is what matters.
 
Please also search this forum for "padding overhead". When using the defaults and not increasing the volblocksize before creating your first VM, you will waste tons of capacity: only about 20% of those 7 TB would actually be usable for VM disks (75% lost to padding and parity, and of the remaining 25% you lose another 20%, since you shouldn't fill a pool beyond roughly 80% to keep it fast).
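To check what you currently have and raise the default for newly created disks, something along these lines should work (the storage name "local-zfs" and VM ID 100 are just examples here):

  # show the volblocksize of an existing VM disk (zvol)
  zfs get volblocksize local-zfs/vm-100-disk-0

  # raise the block size used for zvols this storage creates from now on
  # (existing zvols keep their volblocksize; it cannot be changed in place)
  pvesm set local-zfs --blocksize 16k

Whether 16k is the right value depends on the pool layout; the forum threads on padding overhead discuss how to pick it for a given raidz width.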
 
Thanks, I saw this post by you:
https://forum.proxmox.com/threads/h...drives-total-2-1-hot-spare.138369/post-617345

Now, since I already have VMs, I assume I need to back up and remove all the VMs, then restore them after I change the blocksize, right?
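For reference, a rough sketch of that backup-and-restore workflow, since the volblocksize of a zvol is fixed at creation time (VM ID 100, the backup storage "local", and the archive name are placeholders):

  # back up the VM to some other storage
  vzdump 100 --storage local --mode stop --compress zstd

  # after changing the blocksize on the ZFS storage, restore the VM;
  # --force overwrites the existing VM, and the recreated zvols pick up
  # the new default volblocksize
  qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --force --storage local-zfs

The same can be done through the GUI (Backup, then Restore with the target storage set to the ZFS storage).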
 
What about LXC containers? I also have a cloud-init template.
LXCs use datasets (i.e. filesystems, with no block devices underneath them), and padding overhead only affects block devices (zvols). So LXCs won't be affected and you can keep them.
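A quick way to see which guest disks sit on zvols (affected by padding) and which on datasets (not affected), assuming the pool is called "local-zfs":

  # Proxmox names VM disks vm-<vmid>-disk-N (type "volume") and
  # container disks subvol-<ctid>-disk-N (type "filesystem")
  zfs list -r -o name,type,used,volblocksize local-zfs

volblocksize will simply show "-" for the container datasets, since it only applies to zvols.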
 
Finally went ahead and destroyed the pool. This time I used raidz1, as z2 was overkill for my setup and I only ended up with half the space. In addition, I turned on thin provisioning.
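In case it helps someone later: thin provisioning can also be toggled on an existing Proxmox ZFS storage; a sketch, assuming the storage is still called "local-zfs" (it only applies to newly created disks, existing zvols keep their reservation unless it is cleared):

  # enable thin provisioning (sparse zvols) for this storage
  pvesm set local-zfs --sparse 1

  # optionally drop the space reservation on an already existing, thick zvol
  zfs set refreservation=none local-zfs/vm-100-disk-0

Here vm-100-disk-0 is just an example zvol name.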
 
