[SOLVED] Am I doing this ZFS thing wrong? "zfs list" shows unexpected values

ph0x

Renowned Member
Jul 5, 2020
Hey there!
I managed to install PBS on my QNAP NAS (*yay*) and set up five spinning disks as a raidz2 with a 10G SLOG.
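For context, a raidz2 pool with a separate log device would be created roughly like this (the device paths below are placeholders, not my actual disks):
Bash:
# five-disk raidz2 with a dedicated 10G SLOG device (example paths)
zpool create bpool raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde log /dev/nvme0n1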

ZFS layout is as follows:
Bash:
root@pbs:~# zfs list
NAME                      USED  AVAIL     REFER  MOUNTPOINT
bpool                    7.23G  7.73T     7.19G  /bpool
bpool/pbs                43.0M  7.73T     43.0M  /bpool/pbs
bpool/shares              866K  7.73T      185K  /bpool/shares
bpool/shares/homes        170K  7.73T      170K  /bpool/shares/homes
bpool/shares/stuff        170K  7.73T      170K  /bpool/shares/stuff
bpool/shares/morestuff    170K  7.73T      170K  /bpool/shares/morestuff
bpool/shares/youguessedit 170K  7.73T      170K  /bpool/shares/youguessedit
rpool                     995M  22.3G       96K  /rpool
rpool/ROOT                993M  22.3G       96K  /rpool/ROOT
rpool/ROOT/pbs-1          993M  22.3G      993M  /

The backup datastore is located in /bpool/pbs. I ran the first backup of five VMs, and as you can see in the snippet above, the used space is counted against bpool but not against bpool/pbs. Is this normal? Did I misconfigure something?

I know that "du" is not always reliable on ZFS, but here it shows:
Bash:
root@pbs:~# du -sh /bpool/pbs
7.2G    /bpool/pbs
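For what it's worth, "zfs list -o space" breaks USED down further (USEDDS is the dataset's own data, USEDCHILD the space used by children), which should show which dataset the 7.2G is actually charged to:
Bash:
# per-dataset breakdown of USED: own data (USEDDS), snapshots, children
zfs list -o space -r bpool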

What do you think could be the issue here?
 
Should bpool/pbs not also be visible here?
Code:
root@pbs:~# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev              7.8G     0  7.8G   0% /dev
tmpfs             1.6G  8.9M  1.6G   1% /run
rpool/ROOT/pbs-1   24G  994M   23G   5% /
tmpfs             7.9G     0  7.9G   0% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
tmpfs             7.9G     0  7.9G   0% /sys/fs/cgroup
rpool              23G  128K   23G   1% /rpool
rpool/ROOT         23G  128K   23G   1% /rpool/ROOT
bpool             7.8T  7.2G  7.8T   1% /bpool
bpool/shares      7.8T  256K  7.8T   1% /bpool/shares
tmpfs             1.6G     0  1.6G   0% /run/user/0
 
Sounds like the dataset bpool/pbs is not mounted, so you are actually backing up into the 'pbs' directory of the 'bpool' dataset.
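One way to check and fix it (a sketch, assuming the default mountpoints from your zfs list output; note that PBS keeps its chunks in a hidden .chunks directory, so hidden files must be copied too):
Bash:
# confirm the dataset is not mounted
zfs get mounted bpool/pbs
# move the stray directory aside - zfs mount refuses a non-empty mountpoint
mv /bpool/pbs /bpool/pbs.old
zfs mount bpool/pbs
# copy everything back, including hidden files like .chunks
cp -a /bpool/pbs.old/. /bpool/pbs/
rm -rf /bpool/pbs.old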
 
You guessed correctly, although I have no clue why this dataset got unmounted. I could have known, though, since a few minutes earlier PBS complained about a missing chunks directory, right before I recreated the datastore (this time in bpool directly) ...
Works fine now, thanks!
 
