ZFS pool: different sizes reported

m_l_s

Member
Dec 28, 2020
Hey, I hope someone can help and point me in the right direction with the following issue.

I have a ZFS pool in my machine which I use for mass storage. While I knew I was running low on space, I expected to still have a few gigabytes available. But when I tried to rename a file on the pool I got a "no space left" error (/Data2 is the pool's mount point):
Bash:
$ touch /Data2/tst
touch: cannot touch '/Data2/tst': No space left on device

Looking at the web interface, it still said I have ~135 GB left, so I took a closer look and discovered the following:
Bash:
$ df -h /Data2/
Filesystem      Size  Used Avail Use% Mounted on
Data2           7.9T  7.9T     0 100% /Data2

$ zpool get capacity,size,health,fragmentation
NAME   PROPERTY       VALUE     SOURCE
Data2  capacity       98%       -
Data2  size           10.9T     -
Data2  health         ONLINE    -
Data2  fragmentation  34%       -

Somehow df and zpool report significantly different sizes and usage. I expect zpool (and the web interface) to be correct, because the pool consists of two 12 TB drives and was created with exactly those drives.
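In case it helps with diagnosing this, these are the commands I would run next to see where the "missing" space is accounted for (Data2 here is just my pool's root dataset; adjust as needed):

Bash:
# Per-dataset space accounting: shows space used by the data itself
# (USEDDS), by snapshots (USEDSNAP), by reservations (USEDREFRESERV)
# and by child datasets (USEDCHILD).
$ zfs list -r -o space Data2

# Pool-level view, including per-vdev sizes, for comparison with df.
$ zpool list -v Data2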

How can I make use of that last bit of storage?
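For context, this is what I've read might work on a completely full ZFS pool (untested on my side, so please correct me if it's wrong): since ZFS is copy-on-write, even rm can need to allocate blocks, so the usual suggestions are truncating an expendable file in place or destroying an old snapshot. The file and snapshot names below are only placeholders:

Bash:
# Free space without allocating new blocks: truncate an expendable
# file in place instead of removing it.
$ truncate -s 0 /Data2/some-large-expendable-file

# Or list and destroy an old snapshot, if any exist:
$ zfs list -t snapshot -r Data2
$ zfs destroy Data2@old-snapshot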
 
