ZFS size / allocated difference?

Elleni

I have created a raidz1 pool for the PBS backup datastore (pbs-backup). On PBS, under Storage / ZFS, the size shown is 1.77 TB, but on the summary page of the datastore the size is only 1.24 TB. Why is there such a difference, and will ZFS allocate more space / expand the pool when it gets full (929 GB used at the moment)? Or can I assign all the space / enlarge the datastore to those 1.77 TB manually?

Code:
zpool list
NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
backup5-srv  1.81T   928G   928G        -         -     1%    49%  1.00x    ONLINE  -
pbs-backup   1.77T  1.25T   529G        -         -    12%    70%  1.00x    ONLINE  -
rpool        45.5G  3.61G  41.9G        -         -     8%     7%  1.00x    ONLINE  -
Code:
zfs list
NAME               USED  AVAIL     REFER  MOUNTPOINT
backup5-srv        928G   870G      928G  /backup5-srv
pbs-backup         929G   343G      929G  /pbs-backup
rpool             1.75G  19.6G      151K  /rpool
rpool/ROOT        1.74G  19.6G      140K  /rpool/ROOT
rpool/ROOT/pve-1  1.74G  19.6G     1.74G  /
rpool/data         140K  19.6G      140K  /rpool/data

Also, the raidz2 pool used as the root pool is apparently too big. Is it possible to shrink the rpool and expand the pbs-backup pool? What is the recommended minimum size for a root pool for PBS?
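For reference, a rough way to compare the two numbers (just a sketch, referring to the pools above): zpool list reports raw capacity including the raidz1 parity, while zfs list and the datastore summary show usable space after parity and ZFS's internal reservation. Very roughly, 1.77 TiB raw x 3/4 is about 1.33 TiB before that reservation and raidz allocation overhead, which lands in the ballpark of the ~1.24 TB the datastore summary reports.

Code:
# raw view (includes raidz parity) vs. usable view of the same pool
zpool list -v pbs-backup
zfs list -o space pbs-backup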
 
Hi,

Please use the forum search in combination with Proxmox VE. The answer is the same.
 
Hi Wolfgang, although I have much to do, I already did a quick search before posting, but maybe my search terms were not adequate. Would you mind sharing a link? I would be very thankful.
 
It is padding overhead. I bet you didn't calculate the best volblocksize and kept the preconfigured value of 8k. In that case it is not uncommon that 50% more space is required to store the data. It's not easy to change later because it can only be set at creation time of your virtual hard disks. So you need to back up everything, destroy the old virtual hard disks and create new ones after optimizing the volblocksize of the pool.

For more information, have a look at the links above.
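If you want to check what is currently in use (a quick sketch with the pool names from this thread; volblocksize only exists on zvols, plain datasets use recordsize instead):

Code:
# block size properties currently in effect
zfs get -r -t volume volblocksize pbs-backup   # any virtual disks (zvols) on the pool
zfs get recordsize,compression pbs-backup      # the dataset itself
# a different volblocksize can only be given to a newly created zvol, e.g. (volume name is just an example):
zfs create -s -o volblocksize=64k -V 100G pbs-backup/testvol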
 
Thanks for your posts, guys. How can I check whether the raidz data pool was created with good settings? I would like to optimize the data pool, even if that means recreating it, so that as little space as possible is wasted. The PBS server has 4 x 500 GB disks. I created a small partition for the PBS installation (root pool) as raidz2, since I wanted the system to stay available even if two disks fail. The rest of the disk space is on a second, large partition.

With those larger partitions I created a raidz1 data pool for backups with the following command:

zpool create -f -o ashift=12 -O compression=on -O encryption=on -O keyformat=passphrase -O keylocation=prompt pbs-backup raidz disk-id1 disk-id2 disk-id3 disk-id4
Code:
root@pbs:~# zpool status
  pool: pbs-backup
state: ONLINE
  scan: none requested
config:

        NAME                                                  STATE     READ WRITE CKSUM
        pbs-backup                                            ONLINE       0     0     0
          raidz1-0                                            ONLINE       0     0     0
            disk-id1-part4                                    ONLINE       0     0     0
            disk-id2-part4                                    ONLINE       0     0     0
            disk-id3-part4                                    ONLINE       0     0     0
            disk-id4-part4                                    ONLINE       0     0     0

errors: No known data errors

  pool: rpool
state: ONLINE
  scan: scrub repaired 0B in 0 days 00:02:34 with 0 errors on Sun Nov  8 00:26:52 2020
config:

        NAME                                                STATE     READ WRITE CKSUM
        rpool                                              ONLINE       0     0     0
          raidz2-0                                         ONLINE       0     0     0
            disk-id1-part3                                 ONLINE       0     0     0
            disk-id2-part3                                 ONLINE       0     0     0
            disk-id3-part3                                 ONLINE       0     0     0
            disk-id4-part3                                 ONLINE       0     0     0

errors: No known data errors
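A quick way to verify what the pools were actually created with (a sketch using the pool names above; ashift is fixed at pool creation, and the -O properties show up on the top-level dataset):

Code:
zpool get ashift pbs-backup rpool
zfs get compression,encryption,keyformat pbs-backup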
 
I'm not sure where you can set the volblocksize in PBS. In PVE it is under "datacenter -> storage -> YourPool -> Volblocksize".

Looking at that table, for 4 drives with raidz1 a good value would be 64k at ashift 12 (you only lose 27% instead of 50% of your total raw capacity to parity/padding), and 64k at ashift 12 for 4 drives with raidz2 (only 52% lost instead of 67% to parity/padding).
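For what it's worth, those percentages can be reproduced by hand (rough raidz allocation math, assuming ashift=12, i.e. 4K sectors, and that each block's allocation is padded up to a multiple of parity+1):

Code:
raidz1, 4 disks:  8K block =  2 data + 1 parity = 3 sectors, padded to 4    -> 50% of raw space lost
                 64K block = 16 data + 6 parity = 22 sectors (no padding)   -> 6/22  = ~27% lost
raidz2, 4 disks:  8K block =  2 data + 2 parity = 4 sectors, padded to 6    -> 67% lost
                 64K block = 16 data + 16 parity = 32 sectors, padded to 33 -> 17/33 = ~52% lost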
 
It is likely, as you said, that the default blocksize of 8k is used, as I don't even know how to change or set it. So is that the answer to the initial question?

Is it recommended to leave my raidz1 data pool for PBS as it is, or is there a recommendation to re-create the pool with a different volblocksize - and if so, how?

Looking at my 2 PVE nodes, I also see an 8k volblocksize on a 2 x 2 TB NVMe disk mirror. Any reason to change it, or is this setting OK?
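If you want to see what the existing VM disks use and what new ones would get (just a way to list it; if I'm not mistaken, the blocksize option in the storage config is what newly created disks receive):

Code:
zfs list -t volume -o name,volblocksize   # volblocksize of every existing zvol
cat /etc/pve/storage.cfg                  # check the zfspool entry for a "blocksize" line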
 
