ZFS vol size on backup server bigger than on node??

andy77

Hello @all,

I have now tested pve-zsync to make some backups on my new backup server, but I have problems understanding the used space of the backed-up volume.
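
For context, the sync job I set up is roughly of this form (the host name and options here are placeholders, not my exact command):
Code:
pve-zsync create --source 101 --dest backupserver:storage/backups/node --name node --maxsnap 7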

Here is the config of the node:
Code:
NAME   PROPERTY                       VALUE                          SOURCE
rpool  size                           6.94T                          -
rpool  capacity                       53%                            -
rpool  altroot                        -                              default
rpool  health                         ONLINE                         -
rpool  guid                           10070505588150836828           -
rpool  version                        -                              default
rpool  bootfs                         rpool/ROOT/pve-1               local
rpool  delegation                     on                             default
rpool  autoreplace                    off                            default
rpool  cachefile                      -                              default
rpool  failmode                       wait                           default
rpool  listsnapshots                  off                            default
rpool  autoexpand                     off                            default
rpool  dedupditto                     0                              default
rpool  dedupratio                     1.00x                          -
rpool  free                           3.25T                          -
rpool  allocated                      3.69T                          -
rpool  readonly                       off                            -
rpool  ashift                         12                             local
rpool  comment                        -                              default
rpool  expandsize                     -                              -
rpool  freeing                        0                              -
rpool  fragmentation                  21%                            -
rpool  leaked                         0                              -
rpool  multihost                      off                            default
rpool  feature@async_destroy          enabled                        local
rpool  feature@empty_bpobj            active                         local
rpool  feature@lz4_compress           active                         local
rpool  feature@multi_vdev_crash_dump  enabled                        local
rpool  feature@spacemap_histogram     active                         local
rpool  feature@enabled_txg            active                         local
rpool  feature@hole_birth             active                         local
rpool  feature@extensible_dataset     active                         local
rpool  feature@embedded_data          active                         local
rpool  feature@bookmarks              enabled                        local
rpool  feature@filesystem_limits      enabled                        local
rpool  feature@large_blocks           enabled                        local
rpool  feature@large_dnode            enabled                        local
rpool  feature@sha512                 enabled                        local
rpool  feature@skein                  enabled                        local
rpool  feature@edonr                  enabled                        local
rpool  feature@userobj_accounting     active                         local
Code:
pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda3    ONLINE       0     0     0
            sdb3    ONLINE       0     0     0
            sdc3    ONLINE       0     0     0
            sdd3    ONLINE       0     0     0
            sde3    ONLINE       0     0     0
            sdf3    ONLINE       0     0     0
            sdg3    ONLINE       0     0     0
            sdh3    ONLINE       0     0     0


And here is the config of the backup server:
Code:
storage  size                           72.5T                          -
storage  capacity                       30%                            -
storage  altroot                        -                              default
storage  health                         ONLINE                         -
storage  guid                           11321045570636972644           -
storage  version                        -                              default
storage  bootfs                         -                              default
storage  delegation                     on                             default
storage  autoreplace                    off                            default
storage  cachefile                      -                              default
storage  failmode                       wait                           default
storage  listsnapshots                  off                            default
storage  autoexpand                     off                            default
storage  dedupditto                     0                              default
storage  dedupratio                     1.00x                          -
storage  free                           50.1T                          -
storage  allocated                      22.4T                          -
storage  readonly                       off                            -
storage  ashift                         12                             local
storage  comment                        -                              default
storage  expandsize                     -                              -
storage  freeing                        0                              -
storage  fragmentation                  0%                             -
storage  leaked                         0                              -
storage  multihost                      off                            default
storage  feature@async_destroy          enabled                        local
storage  feature@empty_bpobj            active                         local
storage  feature@lz4_compress           active                         local
storage  feature@multi_vdev_crash_dump  enabled                        local
storage  feature@spacemap_histogram     active                         local
storage  feature@enabled_txg            active                         local
storage  feature@hole_birth             active                         local
storage  feature@extensible_dataset     active                         local
storage  feature@embedded_data          active                         local
storage  feature@bookmarks              enabled                        local
storage  feature@filesystem_limits      enabled                        local
storage  feature@large_blocks           enabled                        local
storage  feature@large_dnode            enabled                        local
storage  feature@sha512                 enabled                        local
storage  feature@skein                  enabled                        local
storage  feature@edonr                  enabled                        local
storage  feature@userobj_accounting     active                         local
Code:
pool: storage
 state: ONLINE
  scan: scrub repaired 0B in 0h0m with 0 errors on Sun Jan 13 00:24:03 2019
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
            sdh     ONLINE       0     0     0
        cache
          nvme0n1   ONLINE       0     0     0

Here is the volume info on the node:
Code:
root@node:/etc/cron.d# zfs get all rpool/data/vm-101-disk-0
NAME                      PROPERTY              VALUE                  SOURCE
rpool/data/vm-101-disk-0  type                  volume                 -
rpool/data/vm-101-disk-0  creation              Fri Jan 25 18:22 2019  -
rpool/data/vm-101-disk-0  used                  59.3G                  -
rpool/data/vm-101-disk-0  available             2.55T                  -
rpool/data/vm-101-disk-0  referenced            58.9G                  -
rpool/data/vm-101-disk-0  compressratio         1.13x                  -
rpool/data/vm-101-disk-0  reservation           none                   default
rpool/data/vm-101-disk-0  volsize               100G                   local
rpool/data/vm-101-disk-0  volblocksize          8K                     default
rpool/data/vm-101-disk-0  checksum              on                     default
rpool/data/vm-101-disk-0  compression           on                     inherited from rpool
rpool/data/vm-101-disk-0  readonly              off                    default
rpool/data/vm-101-disk-0  createtxg             18685                  -
rpool/data/vm-101-disk-0  copies                1                      default
rpool/data/vm-101-disk-0  refreservation        none                   default
rpool/data/vm-101-disk-0  guid                  1003074226894478923    -
rpool/data/vm-101-disk-0  primarycache          all                    default
rpool/data/vm-101-disk-0  secondarycache        all                    default
rpool/data/vm-101-disk-0  usedbysnapshots       437M                   -
rpool/data/vm-101-disk-0  usedbydataset         58.9G                  -
rpool/data/vm-101-disk-0  usedbychildren        0B                     -
rpool/data/vm-101-disk-0  usedbyrefreservation  0B                     -
rpool/data/vm-101-disk-0  logbias               latency                default
rpool/data/vm-101-disk-0  dedup                 off                    default
rpool/data/vm-101-disk-0  mlslabel              none                   default
rpool/data/vm-101-disk-0  sync                  standard               inherited from rpool
rpool/data/vm-101-disk-0  refcompressratio      1.13x                  -
rpool/data/vm-101-disk-0  written               381M                   -
rpool/data/vm-101-disk-0  logicalused           39.9G                  -
rpool/data/vm-101-disk-0  logicalreferenced     39.5G                  -
rpool/data/vm-101-disk-0  volmode               default                default
rpool/data/vm-101-disk-0  snapshot_limit        none                   default
rpool/data/vm-101-disk-0  snapshot_count        none                   default
rpool/data/vm-101-disk-0  snapdev               hidden                 default
rpool/data/vm-101-disk-0  context               none                   default
rpool/data/vm-101-disk-0  fscontext             none                   default
rpool/data/vm-101-disk-0  defcontext            none                   default
rpool/data/vm-101-disk-0  rootcontext           none                   default
rpool/data/vm-101-disk-0  redundant_metadata    all                    default

And here is the volume info on the backup server:
Code:
root@storage:~# zfs get all storage/backups/node/vm-101-disk-0
NAME                                       PROPERTY              VALUE                  SOURCE
storage/backups/node/vm-101-disk-0  type                  volume                 -
storage/backups/node/vm-101-disk-0  creation              Fri Feb  1  1:00 2019  -
storage/backups/node/vm-101-disk-0  used                  74.5G                  -
storage/backups/node/vm-101-disk-0  available             34.0T                  -
storage/backups/node/vm-101-disk-0  referenced            74.4G                  -
storage/backups/node/vm-101-disk-0  compressratio         1.13x                  -
storage/backups/node/vm-101-disk-0  reservation           none                   default
storage/backups/node/vm-101-disk-0  volsize               100G                   local
storage/backups/node/vm-101-disk-0  volblocksize          8K                     default
storage/backups/node/vm-101-disk-0  checksum              on                     default
storage/backups/node/vm-101-disk-0  compression           lz4                    inherited from storage
storage/backups/node/vm-101-disk-0  readonly              off                    default
storage/backups/node/vm-101-disk-0  createtxg             382515                 -
storage/backups/node/vm-101-disk-0  copies                1                      default
storage/backups/node/vm-101-disk-0  refreservation        none                   default
storage/backups/node/vm-101-disk-0  guid                  13724468318431351622   -
storage/backups/node/vm-101-disk-0  primarycache          all                    default
storage/backups/node/vm-101-disk-0  secondarycache        all                    default
storage/backups/node/vm-101-disk-0  usedbysnapshots       103M                   -
storage/backups/node/vm-101-disk-0  usedbydataset         74.4G                  -
storage/backups/node/vm-101-disk-0  usedbychildren        0B                     -
storage/backups/node/vm-101-disk-0  usedbyrefreservation  0B                     -
storage/backups/node/vm-101-disk-0  logbias               latency                default
storage/backups/node/vm-101-disk-0  dedup                 off                    default
storage/backups/node/vm-101-disk-0  mlslabel              none                   default
storage/backups/node/vm-101-disk-0  sync                  standard               default
storage/backups/node/vm-101-disk-0  refcompressratio      1.13x                  -
storage/backups/node/vm-101-disk-0  written               0                      -
storage/backups/node/vm-101-disk-0  logicalused           39.5G                  -
storage/backups/node/vm-101-disk-0  logicalreferenced     39.4G                  -
storage/backups/node/vm-101-disk-0  volmode               default                default
storage/backups/node/vm-101-disk-0  snapshot_limit        none                   default
storage/backups/node/vm-101-disk-0  snapshot_count        none                   default
storage/backups/node/vm-101-disk-0  snapdev               hidden                 default
storage/backups/node/vm-101-disk-0  context               none                   default
storage/backups/node/vm-101-disk-0  fscontext             none                   default
storage/backups/node/vm-101-disk-0  defcontext            none                   default
storage/backups/node/vm-101-disk-0  rootcontext           none                   default
storage/backups/node/vm-101-disk-0  redundant_metadata    all                    default


As you can see, the used space is considerably higher on the backup server (74.5G vs. 59.3G).
Any ideas about the reason for this?


Thanks a lot in advance
 
Hi,

this is because you are comparing RaidZ1 with RaidZ2.
RaidZ2 has two-disk parity, so it is normal that it needs more space than RaidZ1.
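As a rough sketch of where the extra space goes, assuming ashift=12 and the default 8K volblocksize as in your outputs (compression changes the exact numbers, so treat this only as an approximation):
Code:
# 8K volblocksize with ashift=12 (4K sectors) = 2 data sectors per block
# raidz1: 2 data + 1 parity = 3 sectors, padded to 4 -> ~16K on disk per 8K block
# raidz2: 2 data + 2 parity = 4 sectors, padded to 6 -> ~24K on disk per 8K block
# (raidz pads each allocation to a multiple of parity+1 sectors)
zfs get volblocksize,used,logicalused rpool/data/vm-101-disk-0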
 
Hi, thx for the answer. That's what I first thought too. But shouldn't that be reflected in the "ZFS usable storage capacity" instead of the "zpool storage capacity"?

Code:
NAME                                 AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
storage                              24.1T  25.9T        0B    256K             0B      25.9T
storage/backups/node/vm-101-disk-0   24.1T  85.6G     1.22G   84.4G             0B         0B


As we can see in this output, the "AVAIL" and "USED" space for "storage" together are around 50TB. So this is already the calculated "ZFS usable storage capacity", because all the disks together, without parity deducted, are around 72TB.
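
In other words, I am comparing these two views (the numbers are taken from the outputs above):
Code:
# raw pool view, parity included: SIZE 72.5T
zpool list -o name,size,allocated,free storage
# filesystem view, parity already deducted: 24.1T AVAIL + 25.9T USED ~ 50T
zfs list -o name,used,available storage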

So if the capacity of "storage" itself is shown as "ZFS usable storage", why should the volume usage of vm-101-disk-0 be shown as "zpool storage capacity"?

THX very much for your help and effort!
 
Hi @wolfgang,
Hi @ all,

I am still trying to understand this, because somehow it makes no sense to me.

It really seems that the sizes are different because of parity bits.

raidz1 = 59.3G
raidz2 = 74.5G
raid10 = 35G

But why then can I not use the 72.5TB raw disk space of the storage server, but only 49.90TB? Shouldn't these 49.90TB already have the parity deducted?

What I also do not get is why raid10 shows me the "real size". It is mirrored, so shouldn't it show double the size as well?

I would really appreciate any help in understanding this.

THX
 
What I also do not get is why raid10 shows me the "real size". It is mirrored, so shouldn't it show double the size as well?
Raid10 is a straightforward allocation model.
RaidZ with dynamic block sizes cannot tell, before a block is written, how much space it will take.
That's why raid10 shows you the usable space and RaidZ shows the raw space.
 
Sorry @wolfgang to bother you so much, but I don't understand this, because you wrote:
That's why raid10 shows you the usable space and RaidZ shows the raw space.

But zfs list on raidz (1 and 2) is showing me the usable space too, not the raw space:
Code:
NAME                                         USED  AVAIL  REFER
storage                                     44.4T  5.52T   256K

zpool get all:
Code:
NAME     PROPERTY                       VALUE
storage  size                           72.5T
storage  allocated                      62.5T
 
See for yourself:
Create 4 disks of 10G each:

Code:
qemu-img create -f raw  disk1.raw 10G
qemu-img create -f raw  disk2.raw 10G
qemu-img create -f raw  disk3.raw 10G
qemu-img create -f raw  disk4.raw 10G

Create a Raid10
Code:
zpool create raid10 mirror ~/testzfs/disk1.raw ~/testzfs/disk2.raw mirror ~/testzfs/disk3.raw ~/testzfs/disk4.raw -o ashift=12

With the raid10 you see 19.9GB free with zpool (the storage manager) and 19.3GB available with zfs (the filesystem level).

Delete the pool and create a RaidZ2:
Code:
zpool create raidZ2 raidz2 ~/testzfs/disk1.raw ~/testzfs/disk2.raw ~/testzfs/disk3.raw ~/testzfs/disk4.raw -o ashift=12 -f

With the raidZ2 you see 39.7GB free with zpool (the storage manager) and 18.7GB available with zfs (the filesystem level).

Delete the pool and create a RaidZ3:
Code:
zpool create raidZ3 raidz3 ~/testzfs/disk1.raw ~/testzfs/disk2.raw ~/testzfs/disk3.raw ~/testzfs/disk4.raw -o ashift=12 -f

With the raidZ3 you see 39.7GB free with zpool (the storage manager) and 9.63GB available with zfs (the filesystem level).

zfs list shows you an approximate available space.
In the case of raid10 it is exactly the available space.
With raidZx the value will always vary.

When you look at zpool you see the free raw space, i.e. the space that is not yet allocated, but this is not the usable space.
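The values above come from comparing the two views after each create, roughly like this (shown for the raid10 example):
Code:
# storage manager view: raw space, parity/mirror overhead included
zpool list -o name,size,free raid10
# filesystem view: estimated usable space
zfs list -o name,used,available raid10
# clean up before the next test
zpool destroy raid10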
 
zfs list shows you an approximate available space.
In the case of raid10 it is exactly the available space.
With raidZx the value will always vary.

This is fully clear to me. And that is also what I assumed until now.

zfs list = showing the approximate available (usable) pool space.

The thing I do not understand is the size of the volume reported by the same command (zfs list). How can "zfs list" show the approximate (usable) space for the available pool space, and on the other hand show the full size including parity for the volumes?


raidz1:
Code:
NAME                        USED  AVAIL  REFER
rpool/data/vm-101-disk-0   81.7G  2.30T  75.5G

raidz2:
Code:
NAME                                         USED  AVAIL  REFER
storage/backups/c1lxnode8px/vm-101-disk-0   96.3G

raid10:
Code:
NAME                                        USED  AVAIL  REFER
rpool/data/vm-101-disk-0                   45.0G  7.12T  45.0G

vm-101 filesystem:
Code:
root@vm101:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        99G   46G   49G  49% /
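
For completeness, these are the accounting properties I am comparing on the raidz2 copy (the same properties as in the full "zfs get all" output earlier):
Code:
zfs get used,logicalused,usedbysnapshots,volblocksize storage/backups/c1lxnode8px/vm-101-disk-0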

Even if "zfs list" shows the volume's space including parity, how can it be that on raidz2 (with 8 disks) the parity is more than 100% of the really used space?
 
Hi @andy77

How "zfs list" is showing at "available pool space" the approximate space

See here:

http://schalwad.blogspot.com/2015/09/understanding-how-zfs-calculates-used.html

A RAIDZ-2 storage pool created with three 136GB disks reports SIZE as 408GB and initial FREE values as 408GB. This reporting is referred to as the inflated disk space value, which includes redundancy overhead, such as parity information. The initial AVAIL space reported by the zfs list command is 133GB, due to the pool redundancy overhead.
Code:
# zpool create tank raidz2 c0t6d0 c0t7d0 c0t8d0
# zpool list tank
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
tank   408G   286K  408G   0%  1.00x  ONLINE  -
# zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  73.2K   133G  20.9K  /tank

Another major reason of different used space reported by zpool list and zfs list is refreservation by a zvol. Once a zvol is created, zfs list immediately reports the zvol size (and metadata) as USED. But zpool list does not report the zvol size as USED until the zvol is actually used.
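The refreservation part you can check directly on your zvol (your output above already shows refreservation=none, so that one is not the cause here):
Code:
zfs get refreservation,volsize,used storage/backups/node/vm-101-disk-0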
 
@guletz
thx for your answer.

Please consider the whole question, because the part which you quoted is totally clear.

How "zfs list" is showing at "available pool space" the approximate space, and on the other hand for volumes it is showing the full size including parity
 
Please consider the whole question, because the part which you quoted is totally clear.


Another major reason of different used space reported by zpool list and zfs list is refreservation by a zvol. Once a zvol is created, zfs list immediately reports the zvol size (and metadata) as USED. But zpool list does not report the zvol size as USED until the zvol is actually used.
 
