Bad disk usage report

Alessandro 123
May 22, 2016
I'm using ZFS.
On the node, in the web interface, I see the storages "local-zfs" and "local".

Both are created on the same ZFS raid because I don't have any extra disk installed.
The current visualization is messy because it shows "local" with 44 GB used out of 395 GB, while "local-zfs" shows 428 GB out of 780 GB.

Obviously, the "local" storage is part of the same pool as "local-zfs", and the correct total space is the one shown for "local-zfs".
Based on the current visualization, it seems that the total space is 395 + 780 GB.

Even in the node list the space is wrong. The "Disk usage %" column shows about 11.2%. That's totally wrong: I'm currently using more than 54%, not 11%.
 
What do zfs list and df show?
 
Code:
# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                      482G   351G   140K  /rpool
rpool/ROOT                44.2G   351G   140K  /rpool/ROOT
rpool/ROOT/pve-1          44.2G   351G  44.2G  /
rpool/data                 429G   351G   140K  /rpool/data
rpool/data/vm-100-disk-1  9.92G   351G  9.92G  -
rpool/data/vm-101-disk-1  14.9G   351G  14.9G  -
rpool/data/vm-101-disk-2  7.38G   351G  7.38G  -
rpool/data/vm-102-disk-1  90.9G   351G  90.9G  -
rpool/data/vm-102-disk-2  12.1G   351G  12.1G  -
rpool/data/vm-103-disk-1  18.3G   351G  18.3G  -
rpool/data/vm-103-disk-2  5.85G   351G  5.85G  -
rpool/data/vm-104-disk-2  13.2G   351G  13.2G  -
rpool/data/vm-104-disk-3   116M   351G   116M  -
rpool/data/vm-105-disk-1   181G   351G   181G  -
rpool/data/vm-105-disk-2  5.70G   351G  5.70G  -
rpool/data/vm-106-disk-1  21.3G   351G  21.3G  -
rpool/data/vm-106-disk-2  48.6G   351G  48.6G  -
rpool/swap                8.50G   359G   681M  -
Code:
# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev               63G     0   63G   0% /dev
tmpfs              13G  579M   13G   5% /run
rpool/ROOT/pve-1  396G   45G  352G  12% /
tmpfs              63G   31M   63G   1% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
tmpfs              63G     0   63G   0% /sys/fs/cgroup
rpool/ROOT        352G  128K  352G   1% /rpool/ROOT
rpool/data        352G  128K  352G   1% /rpool/data
/dev/fuse          30M   28K   30M   1% /etc/pve
tmpfs              13G     0   13G   0% /run/user/0
 
So the initial data does make 'kinda' sense:

you have (in total) 482G used and 351G available = 833G total

the rootfs has as its maximum size its used space + avail -> 44.2G + 351G = 395.2G

the VM data part has as its maximum size its used space + avail -> 429G + 351G = 780G

so each individual perspective is OK, because the rootfs uses 44.2G, which is 12% of the total size it could have
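
The same arithmetic can be reproduced from the shell. This is only a sketch, assuming the GUI derives each storage's total as used + available of its backing dataset:

Code:
# sketch: rebuild the per-storage totals and percentages from ZFS properties
for ds in rpool/ROOT/pve-1 rpool/data; do
    used=$(zfs get -Hp -o value used "$ds")
    avail=$(zfs get -Hp -o value available "$ds")
    total=$((used + avail))
    echo "$ds: $((used / 2**30))G used of $((total / 2**30))G ($((100 * used / total))%)"
done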
 
The used space, yes, but the root fs is shown with 395GB of total space. Since the total space is the same for both the root and data volumes, the same total should be shown for both.
 
But then the used space would be wrong, because in your example it would show 44G/833G, which would suggest you still have ~800G free, which is not the case.
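
To make the two presentations concrete (numbers taken from the outputs above):

Code:
# current presentation (per-dataset total = used + avail):
#   local      44.2G / 395G  (~11%)
#   local-zfs   429G / 780G  (~55%)
# pool-wide presentation (total = 833G for both storages):
#   local      44.2G / 833G  (~5%)  -> suggests ~789G free for the rootfs alone
#   local-zfs   429G / 833G  (~52%)
# the free space is the same shared 351G in both cases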
 
I have a RAIDZ-2 made of 4x 480GB, more or less 800GB of usable space.

How much free space do I have, and how much free space can I still use for both backups and VMs (which are stored on different volumes)? This is not clear from the web interface.
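
From the shell, the pool-wide view (a single AVAIL figure shared by backups and VM disks) can be checked with, for example:

Code:
# whole-pool space accounting, including children and snapshots
zfs list -o space rpool
# or just the raw used/available values in bytes
zfs get -Hp -o value used,available rpool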
 
