PBS will show wrong datastore size when zfs is used

layer7.net

Hi,

A ZFS dataset with a 1 TB quota was created.

A new datastore pointing at the ZFS dataset's path was created.

The `df` command on the CLI shows the correct size of 1 TB.

But the PBS UI shows the size of the whole zpool.

Is this a bug or wanted behavior?

I would expect the PBS code to use the full mountpoint path when checking the size, not just the base directory.
 
Hi!
there was a similar forum post [0] a while back. It was with a different filesystem but I think it's the same problem. You could try executing `stat -f .` in the datastore and see if `Blocks Total * Block Size` returns the correct size. If my assumption is correct it won't, and `Blocks Total * Fundamental block size` will return the correct number.
There is already a patch on the mailing list to fix this [1] , although it hasn't been merged yet.

[0]: https://forum.proxmox.com/threads/pbs-3-1-2-wrong-datastore-information-sshfs.139875/
[1]: https://lists.proxmox.com/pipermail/pbs-devel/2024-January/007676.html
 
Hi @ggoller !

Thank you for this high quality reply!

zfs dataset:

```
# stat -f teststore
File: "teststore"
ID: 5f7c6ce8003ab807 Namelen: 255 Type: zfs
Block size: 131072 Fundamental block size: 131072
Blocks: Total: 8388608 Free: 8388607 Available: 8388607
Inodes: Total: 2147483606 Free: 2147483600
```

zpool itself:

```
# stat -f zpool
File: "zpool"
ID: cb0dbcfb008ad493 Namelen: 255 Type: zfs
Block size: 131072 Fundamental block size: 131072
Blocks: Total: 80740349 Free: 80740348 Available: 80740348
Inodes: Total: 20669529315 Free: 20669529306
```

If I see it correctly, then:

dataset => Blocks Total * Block Size = 1 TB (give or take)
zpool => Blocks Total * Block Size = 10 TB (give or take)

So the numbers seem correct. (Here "Block size" and "Fundamental block size" are both 131072, so either product gives the same result.)
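The arithmetic can be checked quickly (a sketch using the block counts quoted from the `stat -f` output above):

```python
# Block counts and block size from the `stat -f` output above
dataset = 8388608 * 131072   # 2**23 * 2**17 = 2**40 bytes = exactly 1 TiB
zpool   = 80740349 * 131072  # ~10.58e12 bytes

print(dataset / 2**40)  # 1.0
print(zpool / 2**40)    # ~9.62
```

So the dataset reports exactly 1 TiB, and the zpool roughly 9.6 TiB, which matches what `df -h` rounds to 9.7T below.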

df -h will show:

```
zpool 9.7T 128K 9.7T 1% /zpool
zpool/teststore 1.0T 128K 1.0T 1% /zpool/teststore
```

So it seems to me that wherever the Proxmox UI got the wrong number from, it was not `stat -f` reporting a wrong value.

On the other hand, I just removed and re-created the datastore, without ticking either of the two checkboxes, and destroyed the dataset manually in ZFS.

So i did:

zfs destroy zpool/teststore

and

zfs create zpool/teststore
zfs set quota=1T zpool/teststore

and re-added the datastore in the Proxmox UI.

For whatever reason, it is now reporting the correct value. I did not double-check whether something changed overnight before I removed/re-added the datastore, or whether anything changed after running `stat`.

But whatever it was, something got triggered and the numbers are now displayed correctly...

Thank you very much for your information and time!
 