ZFS Space Inflation

Hi ZFS lovers,

I'm playing around with best practices for backing up a VM. The VM stores ISO files and has a 768 GB data disk. That is a lot to back up, and I'd like to do it more efficiently.

I moved the data from an ext4-based 768 GB disk to a 768 GB ZFS-based one inside the VM and excluded that disk image from the regular backup. The backup itself should be done from "outside" via ZFS send/receive.

I built a new backup server with ZFS to store my Proxmox backups (a Proxmox installation itself), and it will be used to receive the data. The system uses a RAID-Z2 for the data (6x 3 TB), no L2ARC/ZIL, and 16 GB of RAM.

All pools have compression enabled (on) and a 4K recordsize. The send/receive synchronization has just finished, and I do not understand how the numbers add up:

Source system:

Code:
$ zfs list -t all -r -o space isodump/samba
NAME                               AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
isodump/samba                       311G   429G      264K    429G              0          0
isodump/samba@2015_12_16-13_35_53      -      0         -       -              -          -
isodump/samba@2015_12_17-10_57_26      -      0         -       -              -          -
isodump/samba@2015_12_18-06_38_25      -    43K         -       -              -          -

Actual transfer (initial sync):

Code:
$ ssh -C isodump zfs send -R isodump/samba@2015_12_16-13_35_53 | zfs receive -Fduv rpool/proxmox/2007
receiving full stream of isodump/samba@2015_12_16-13_35_53 into rpool/proxmox/2007/samba@2015_12_16-13_35_53
received 459GB stream in 34387 seconds (13,7MB/sec)

So, we received 30 GB more, due to (I think) uncompressed data and also metadata. The transfer was really slow, but a complete cluster backup from 5 nodes was running alongside the send/receive, so no surprise there.
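
One way to check this: a plain zfs send stream carries the data uncompressed, so its size should be close to the source's logicalused rather than to its (compressed) used. A quick comparison, assuming the OpenZFS property names that also show up in the property listings later in this thread:

Code:
ssh isodump zfs get -o name,property,value used,logicalused,compressratio isodump/samba
zfs get -o name,property,value used,logicalused,compressratio rpool/proxmox/2007/samba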

But on the backup server, the received filesystem is really large, too large in my opinion.

Code:
$ zfs list -r -t all rpool/proxmox/2007
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool/proxmox/2007                            1,86T  2,85T  2,64G  /rpool/proxmox/2007
rpool/proxmox/2007@2015_11_15-23_01_15         346G      -   346G  -
rpool/proxmox/2007@2015_11_20-18_16_54         189M      -   553G  -
rpool/proxmox/2007@2015_11_27-18_17_10         114M      -   630G  -
rpool/proxmox/2007@2015_12_04-18_28_18         114M      -   671G  -
rpool/proxmox/2007@2015_12_14-09_45_23        5,00G      -   676G  -
rpool/proxmox/2007@2015_12_17-12_55_34            0      -  2,64G  -
rpool/proxmox/2007/samba                       877G  2,85T   877G  /rpool/proxmox/2007/samba
rpool/proxmox/2007/samba@2015_12_16-13_35_53      0      -   877G  -

One can see that until yesterday the data lived inside the 2007 filesystem itself, and it was not as much as it is now. I do not know why the filesystem now uses 220 GB more than before, or than on the original system.

Can anyone explain this?

Best,
LnxBil
 
For sending snapshots I do "zfs send -i sending/pool@snap1 sending/pool@snap2 | zfs receive receiving/pool" and it is fast. Sending with -R and other attributes is useful for the first transfer, I think.
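
For completeness, a typical sequence is one full send followed by incremental ones; a sketch using the snapshot names from the first post (the ssh pipe from the first post is omitted here):

Code:
# one-time full send of the oldest snapshot
zfs send isodump/samba@2015_12_16-13_35_53 | zfs receive rpool/proxmox/2007/samba
# later runs only transfer the differences between two snapshots
zfs send -i isodump/samba@2015_12_16-13_35_53 isodump/samba@2015_12_17-10_57_26 \
    | zfs receive rpool/proxmox/2007/samba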

The size can differ depending on the sending pool's settings and the receiving pool's settings.

Example

Desktop pool settings

Code:
rpool/ROOT/ubuntu-1  type                  filesystem             -
rpool/ROOT/ubuntu-1  creation              Wed Nov 11 18:39 2015  -
rpool/ROOT/ubuntu-1  used                  13.0G                  -
rpool/ROOT/ubuntu-1  available             1.36G                  -
rpool/ROOT/ubuntu-1  referenced            11.5G                  -
rpool/ROOT/ubuntu-1  compressratio         1.76x                  -
rpool/ROOT/ubuntu-1  mounted               yes                    -
rpool/ROOT/ubuntu-1  quota                 none                   default
rpool/ROOT/ubuntu-1  reservation           none                   default
rpool/ROOT/ubuntu-1  recordsize            128K                   default
rpool/ROOT/ubuntu-1  mountpoint            /                      local
rpool/ROOT/ubuntu-1  sharenfs              off                    default
rpool/ROOT/ubuntu-1  checksum              on                     default
rpool/ROOT/ubuntu-1  compression           lz4                    inherited from rpool
rpool/ROOT/ubuntu-1  atime                 off                    inherited from rpool
rpool/ROOT/ubuntu-1  devices               on                     default
rpool/ROOT/ubuntu-1  exec                  on                     default
rpool/ROOT/ubuntu-1  setuid                on                     default
rpool/ROOT/ubuntu-1  readonly              off                    default
rpool/ROOT/ubuntu-1  zoned                 off                    default
rpool/ROOT/ubuntu-1  snapdir               hidden                 default
rpool/ROOT/ubuntu-1  aclinherit            restricted             default
rpool/ROOT/ubuntu-1  canmount              on                     default
rpool/ROOT/ubuntu-1  xattr                 on                     default
rpool/ROOT/ubuntu-1  copies                2                      inherited from rpool
rpool/ROOT/ubuntu-1  version               5                      -
rpool/ROOT/ubuntu-1  utf8only              off                    -
rpool/ROOT/ubuntu-1  normalization         none                   -
rpool/ROOT/ubuntu-1  casesensitivity       sensitive              -
rpool/ROOT/ubuntu-1  vscan                 off                    default
rpool/ROOT/ubuntu-1  nbmand                off                    default
rpool/ROOT/ubuntu-1  sharesmb              off                    default
rpool/ROOT/ubuntu-1  refquota              none                   default
rpool/ROOT/ubuntu-1  refreservation        none                   default
rpool/ROOT/ubuntu-1  primarycache          all                    default
rpool/ROOT/ubuntu-1  secondarycache        all                    default
rpool/ROOT/ubuntu-1  usedbysnapshots       1.50G                  -
rpool/ROOT/ubuntu-1  usedbydataset         11.5G                  -
rpool/ROOT/ubuntu-1  usedbychildren        0                      -
rpool/ROOT/ubuntu-1  usedbyrefreservation  0                      -
rpool/ROOT/ubuntu-1  logbias               latency                default
rpool/ROOT/ubuntu-1  dedup                 off                    default
rpool/ROOT/ubuntu-1  mlslabel              none                   default
rpool/ROOT/ubuntu-1  sync                  disabled               inherited from rpool
rpool/ROOT/ubuntu-1  refcompressratio      1.73x                  -
rpool/ROOT/ubuntu-1  written               780K                   -
rpool/ROOT/ubuntu-1  logicalused           9.83G                  -
rpool/ROOT/ubuntu-1  logicalreferenced     8.47G                  -
rpool/ROOT/ubuntu-1  snapdev               hidden                 default
rpool/ROOT/ubuntu-1  acltype               off                    default
rpool/ROOT/ubuntu-1  context               none                   default
rpool/ROOT/ubuntu-1  fscontext             none                   default
rpool/ROOT/ubuntu-1  defcontext            none                   default
rpool/ROOT/ubuntu-1  rootcontext           none                   default
rpool/ROOT/ubuntu-1  relatime              off                    default
rpool/ROOT/ubuntu-1  redundant_metadata    all                    default
rpool/ROOT/ubuntu-1  overlay               off                    default

Backup system settings

Code:
NAME                   PROPERTY              VALUE                  SOURCE
zfs_mirror/linux/root  type                  filesystem             -
zfs_mirror/linux/root  creation              Fri Nov 27 20:45 2015  -
zfs_mirror/linux/root  used                  6.67G                  -
zfs_mirror/linux/root  available             730G                   -
zfs_mirror/linux/root  referenced            5.91G                  -
zfs_mirror/linux/root  compressratio         1.72x                  -
zfs_mirror/linux/root  mounted               no                     -
zfs_mirror/linux/root  quota                 none                   default
zfs_mirror/linux/root  reservation           none                   default
zfs_mirror/linux/root  recordsize            128K                   default
zfs_mirror/linux/root  mountpoint            none                   local
zfs_mirror/linux/root  sharenfs              off                    default
zfs_mirror/linux/root  checksum              on                     default
zfs_mirror/linux/root  compression           lz4                    inherited from zfs_mirror
zfs_mirror/linux/root  atime                 off                    inherited from zfs_mirror
zfs_mirror/linux/root  devices               on                     default
zfs_mirror/linux/root  exec                  on                     default
zfs_mirror/linux/root  setuid                on                     default
zfs_mirror/linux/root  readonly              off                    default
zfs_mirror/linux/root  zoned                 off                    default
zfs_mirror/linux/root  snapdir               hidden                 default
zfs_mirror/linux/root  aclinherit            restricted             default
zfs_mirror/linux/root  canmount              on                     default
zfs_mirror/linux/root  xattr                 on                     default
zfs_mirror/linux/root  copies                1                      default
zfs_mirror/linux/root  version               5                      -
zfs_mirror/linux/root  utf8only              off                    -
zfs_mirror/linux/root  normalization         none                   -
zfs_mirror/linux/root  casesensitivity       sensitive              -
zfs_mirror/linux/root  vscan                 off                    default
zfs_mirror/linux/root  nbmand                off                    default
zfs_mirror/linux/root  sharesmb              off                    default
zfs_mirror/linux/root  refquota              none                   default
zfs_mirror/linux/root  refreservation        none                   default
zfs_mirror/linux/root  primarycache          all                    default
zfs_mirror/linux/root  secondarycache        all                    default
zfs_mirror/linux/root  usedbysnapshots       778M                   -
zfs_mirror/linux/root  usedbydataset         5.91G                  -
zfs_mirror/linux/root  usedbychildren        0                      -
zfs_mirror/linux/root  usedbyrefreservation  0                      -
zfs_mirror/linux/root  logbias               latency                default
zfs_mirror/linux/root  dedup                 off                    default
zfs_mirror/linux/root  mlslabel              none                   default
zfs_mirror/linux/root  sync                  disabled               inherited from zfs_mirror
zfs_mirror/linux/root  refcompressratio      1.70x                  -
zfs_mirror/linux/root  written               0                      -
zfs_mirror/linux/root  logicalused           9.86G                  -
zfs_mirror/linux/root  logicalreferenced     8.50G                  -
zfs_mirror/linux/root  filesystem_limit      none                   default
zfs_mirror/linux/root  snapshot_limit        none                   default
zfs_mirror/linux/root  filesystem_count      none                   default
zfs_mirror/linux/root  snapshot_count        none                   default
zfs_mirror/linux/root  snapdev               hidden                 default
zfs_mirror/linux/root  acltype               off                    default
zfs_mirror/linux/root  context               none                   default
zfs_mirror/linux/root  fscontext             none                   default
zfs_mirror/linux/root  defcontext            none                   default
zfs_mirror/linux/root  rootcontext           none                   default
zfs_mirror/linux/root  relatime              off                    default
zfs_mirror/linux/root  redundant_metadata    all                    default
zfs_mirror/linux/root  overlay               off                    default

Desktop snapshots

Code:
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/ubuntu-1@20151210  1.50G      -  11.4G  -
rpool/ROOT/ubuntu-1@20151219   756K      -  11.5G  -

Backup system snapshots

Code:
NAME                             USED  AVAIL  REFER  MOUNTPOINT
zfs_mirror/linux/root@20151210   778M      -  5.83G  -
zfs_mirror/linux/root@20151219      0      -  5.91G  -
 
Maybe I'm still too tired, but I cannot see a big difference besides using
Code:
(filesystem|snapshot)_(count|limit)
and having fewer copies on the backup side (see the comparison sketch below).
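
For what it's worth, copies=2 on the desktop side alone roughly doubles its on-disk usage compared to the single-copy backup dataset; a quick way to put the relevant properties side by side (dataset names taken from the listings above):

Code:
# run each command on its respective host
zfs get -o name,property,value copies,compressratio,logicalused,used rpool/ROOT/ubuntu-1
zfs get -o name,property,value copies,compressratio,logicalused,used zfs_mirror/linux/root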

I also do second-level backups onto encrypted, external ZFS pools with a higher compression setting, and there I have always seen a decrease in space usage, never an increase:

Code:
$ zfs list -r -t all rpool/proxmox/2007
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool/proxmox/2007                            1,86T  2,82T  3,00G  /rpool/proxmox/2007
rpool/proxmox/2007@2015_11_15-23_01_15         346G      -   346G  -
rpool/proxmox/2007@2015_11_20-18_16_54         189M      -   553G  -
rpool/proxmox/2007@2015_11_27-18_17_10         114M      -   630G  -
rpool/proxmox/2007@2015_12_04-18_28_18         114M      -   671G  -
rpool/proxmox/2007@2015_12_14-09_45_23        5,00G      -   676G  -
rpool/proxmox/2007@2015_12_17-12_55_34        66,6M      -  2,64G  -
rpool/proxmox/2007@2015_12_18-20_47_37            0      -  3,00G  -
rpool/proxmox/2007/samba                       877G  2,82T   877G  /rpool/proxmox/2007/samba
rpool/proxmox/2007/samba@2015_12_16-13_35_53      0      -   877G  -

$ zfs list -r -t all externes-backup-02/proxmox/2007
NAME                                                  USED  AVAIL  REFER  MOUNTPOINT
externes-backup-02/proxmox/2007                      1014G   639G  1,32G  /externes-backup-02/proxmox/2007
externes-backup-02/proxmox/2007@2015_11_15-23_01_15   342G      -   342G  -
externes-backup-02/proxmox/2007@2015_11_20-18_16_54   118M      -   547G  -
externes-backup-02/proxmox/2007@2015_11_27-18_17_10  75,2M      -   624G  -
externes-backup-02/proxmox/2007@2015_12_04-18_28_18  72,6M      -   665G  -
externes-backup-02/proxmox/2007@2015_12_14-09_45_23  4,85G      -   670G  -
externes-backup-02/proxmox/2007@2015_12_17-12_55_34      0      -  1,32G  -
 
@LnxBil,

Are you sure you need the -F on the receiving side, even for the first transfer? If I understand correctly, it causes the receiver to roll back your proxmox/2007 dataset before actually storing the stream. Also, it seems your snapshots on proxmox/2007 still reference the data that now lives in proxmox/2007/samba, so your space usage is roughly doubled.
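
A quick way to see where that space is actually attributed (snapshots vs. the dataset itself vs. children) is the space-oriented listing already used on the source side:

Code:
zfs list -o space -r rpool/proxmox/2007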
 
If you have free space and time, can you do one test?

Code:
zfs send isodump/samba@2015_12_16-13_35_53 | zfs receive another/pool/samba
zfs send -I isodump/samba@2015_12_16-13_35_53 isodump/samba@2015_12_18-06_38_25 | zfs receive another/pool/samba
 
@Nemesiz

I just did an external backup, and there the dataset is now at the correct size, but I do not understand why:

Code:
$ zfs list -r -t all rpool/proxmox/2007
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool/proxmox/2007                            1,86T  4,28T  3,00G  /rpool/proxmox/2007
rpool/proxmox/2007@2015_11_15-23_01_15         346G      -   346G  -
rpool/proxmox/2007@2015_11_20-18_16_54         189M      -   553G  -
rpool/proxmox/2007@2015_11_27-18_17_10         114M      -   630G  -
rpool/proxmox/2007@2015_12_04-18_28_18         114M      -   671G  -
rpool/proxmox/2007@2015_12_14-09_45_23        5,00G      -   676G  -
rpool/proxmox/2007@2015_12_17-12_55_34        66,6M      -  2,64G  -
rpool/proxmox/2007@2015_12_18-20_47_37            0      -  3,00G  -
rpool/proxmox/2007/samba                       877G  4,28T   877G  /rpool/proxmox/2007/samba
rpool/proxmox/2007/samba@2015_12_16-13_35_53      0      -   877G  -

$ zfs list -r -t all externes-backup-02/proxmox/2007
NAME                                                        USED  AVAIL  REFER  MOUNTPOINT
externes-backup-02/proxmox/2007                             440G  1,17T  1,32G  none
externes-backup-02/proxmox/2007@2015_12_17-12_55_34            0      -  1,32G  -
externes-backup-02/proxmox/2007/samba                       439G  1,17T   439G  none
externes-backup-02/proxmox/2007/samba@2015_12_16-13_35_53      0      -   439G  -

The dataset is still larger, but resides on a pool with higher compression (gzip-9).

I also thought about reimporting the pool, maybe I'll give it a try.
 
@LnxBil,

Are you sure you need the -F on the receiving side, even for the first transfer?

No, I don't, yet I have had it on every first send/receive so far, so I stick with it for consistency.

If I understand correctly, it causes the receiver to roll back your proxmox/2007 dataset before actually storing the stream. Also, it seems your snapshots on proxmox/2007 still reference the data that now lives in proxmox/2007/samba, so your space usage is roughly doubled.

Hmm, I'm not sure I understand what you mean. Of course the parent filesystem includes its children, but that does not explain why the lowest-level snapshot refers to double the space. Please see my answer to Nemesiz and the result after another send/receive.
 
@Nemesiz
The dataset is still larger, but resides on a pool with higher compression (gzip-9).

If the ashift differs between pools, the sizes will not match. ZFS with ashift=9 will use less space than with ashift=12, no matter whether the HDD sector size is 512 B or 4K.
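
To see which ashift each pool actually uses, zdb can be asked; a minimal sketch with the pool names from this thread (zdb usually needs root, and on some systems the pool name or cachefile has to be given explicitly):

Code:
# run on each host against its own pool
zdb -C rpool   | grep ashift
zdb -C isodump | grep ashift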

Can you compare the ZFS file systems with this function?

Code:
function lsdu() (
    # Sum apparent file sizes vs. actual disk usage below a path and report the ratio.
    export SEARCH_PATH=$*
    if [ ! -e "$SEARCH_PATH" ]; then
        echo "ERROR: Invalid file or directory ($SEARCH_PATH)"
        return 1
    fi
    # `find -ls`: field 2 is allocated blocks (1 KiB units with GNU find), field 7 is the size in bytes.
    find "$SEARCH_PATH" -ls | gawk --lint --posix '
        BEGIN {
            split("B KB MB GB TB PB",type)
            ls=hls=du=hdu=0;
            out_fmt="Path: %s \n  Total Size: %.2f %s \n  Disk Usage: %.2f %s \n  Compress Ratio: %.4f \n"
        }
        NF >= 7 {
            ls += $7    # logical (apparent) size in bytes
            du += $2    # allocated blocks
        }
        END {
            du *= 1024  # convert 1 KiB blocks to bytes
            # scale both totals down to a human-readable unit
            for(i=5; hls<1; i--) hls = ls / (2^(10*i))
            for(j=5; hdu<1; j--) hdu = du / (2^(10*j))
            printf out_fmt, ENVIRON["SEARCH_PATH"], hls, type[i+2], hdu, type[j+2], ls/du
        }
    '
)

Put it in .bashrc or .bash_aliases (or another file) and source it, then compare the production file system with the backup file system.

Example

Code:
# lsdu zfs_mirror/cloud/mk/
Path: zfs_mirror/cloud/mk/
  Total Size: 941.76 GB
  Disk Usage: 626.24 GB
  Compress Ratio: 1.5038

# zfs list zfs_mirror/cloud/mk
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zfs_mirror/cloud/mk  626G  1.39T   626G  /media/zfs_mirror/cloud/mk
 
Hi Nemesiz,

Oh boy, that's not good:

Code:
$ lsdu /rpool/proxmox/2007/samba
Path: /rpool/proxmox/2007/samba
  Total Size: 532,14 GB
  Disk Usage: 1,05 TB
  Compress Ratio: 0,4963

The ashift parameter is indeed different: on the backup server it is set to 12 on all pools, because the disks are all "real" 3 TB disks. On the source it is not set (so ashift is 0, i.e. auto-detected), since the source is virtualized (Proxmox) and resides on a clustered LVM (SAN-backed).

The second-level backup volume, which received the data from the primary backup, also has a compression ratio below 1, but very close to it:

Code:
$ lsdu /externes-backup-02/proxmox/2007/samba
Path: .
  Total Size: 532,14 GB
  Disk Usage: 536,62 GB
  Compress Ratio: 0,9917

Still strange. I thought that ashift would only align to bigger block sizes and that the "waste" would be at most the size of the difference.

So, would changing the ashift on the source solve my problem? And should a restore then be done at the file level or at the ZFS level? I think a send/receive onto the machine would lead to the same problem, so a file-level restore would be better, wouldn't it?

Best,
LnxBil
 
If the ashift differs between pools, the sizes will not match. ZFS with ashift=9 will use less space than with ashift=12, no matter whether the HDD sector size is 512 B or 4K.
Just to clarify:
"On 07/26/11 10:14, Andrew Gabriel wrote:
Does anyone know if it's OK to do zfs send/receive between zpools with different ashift values?
The ZFS send stream is at the DMU layer; at this layer the data is uncompressed and decrypted, i.e. exactly how the application wants it.
The ashift is a vdev layer concept - ie below the DMU layer.
There is nothing in the send stream format that knows what an ashift actually is.
-- Darren J Moffat" (http://markmail.org/message/gpkqoq67otmu6e5z)
 
The second-level backup volume, which received the data from the primary backup, also has a compression ratio below 1, but very close to it:

Code:
$ lsdu /externes-backup-02/proxmox/2007/samba
Path: .
  Total Size: 532,14 GB
  Disk Usage: 536,62 GB
  Compress Ratio: 0,9917

Those stats look OK. This happens when the ZFS pool has ashift=12 and there are a lot of not fully used blocks.

Still strange. I thought that ashift would only align to bigger block sizes and that the "waste" would be at most the size of the difference.

So, would changing the ashift on the source solve my problem? And should a restore then be done at the file level or at the ZFS level? I think a send/receive onto the machine would lead to the same problem, so a file-level restore would be better, wouldn't it?

No, changing the ashift will not solve your problem. To check the current ashift, run zdb as root.

To 'fix' /rpool/proxmox/2007/samba I recommend that you (a command sketch follows the list):
1. send the snapshot to a new file system, for example rpool/proxmox/2007/samba2
2. remove the old file system /rpool/proxmox/2007/samba
3. rename /rpool/proxmox/2007/samba2 to /rpool/proxmox/2007/samba
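
A minimal sketch of those three steps, assuming the snapshot name from earlier in the thread; note that zfs destroy -r also removes the old dataset's snapshots, so verify the new copy first:

Code:
zfs send rpool/proxmox/2007/samba@2015_12_16-13_35_53 | zfs receive rpool/proxmox/2007/samba2
zfs destroy -r rpool/proxmox/2007/samba
zfs rename rpool/proxmox/2007/samba2 rpool/proxmox/2007/samba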
 
Today, I expanded the pool by another 6x 3 TB and finally got enough space for send/receive, which is running right now. I hope it'll be finished in the morning.
 
So, the filesystem has been received, and there is no change in the usage:

Code:
root@backup ~ > zfs list rpool/proxmox/2007/samba rpool/proxmox/2007/samba-neu
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool/proxmox/2007/samba       894G  13,5T   866G  /rpool/proxmox/2007/samba
rpool/proxmox/2007/samba-neu   894G  13,5T   894G  /rpool/proxmox/2007/samba

Maybe this is related to the RAID-Z2? I see, for example, that 'zpool list' shows the raw disk space of my drives, not the usable RAID-Z2 space. Perhaps this also applies to 'zfs list' for send/received filesystems? It does not apply to other, non-received filesystems :-(
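
For reference, the two commands really do report different things on a RAID-Z pool; a quick side-by-side look (pool name from this thread, output omitted):

Code:
zpool list rpool                          # SIZE/ALLOC/FREE are raw capacity, parity included
zfs list -o name,used,avail,refer rpool   # USED/AVAIL are usable space, after parity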

I'm still in the dark on this.
 
Here are the lsdu examples:

Code:
root@backup ~ > lsdu /rpool/proxmox/1007/
Path: /rpool/proxmox/1007/
  Total Size: 770,00 GB
  Disk Usage: 532,01 GB
  Compress Ratio: 1,4473

root@backup ~ > lsdu /rpool/proxmox/2007/samba
Path: /rpool/proxmox/2007/samba
  Total Size: 522,13 GB
  Disk Usage: 1,03 TB
  Compress Ratio: 0,4965

root@backup ~ > lsdu /rpool/proxmox/2007/samba-neu
Path: /rpool/proxmox/2007/samba-neu
  Total Size: 522,13 GB
  Disk Usage: 1,03 TB
  Compress Ratio: 0,4965
 
I don't know why this happens to you, but can you do another test? Create a new ZFS file system and rsync to it from /rpool/proxmox/2007/samba-neu (a sketch is below).
I'll wait for the results.
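
A minimal sketch of that test, assuming the new dataset is named rpool/proxmox/2007/samba-rsync (the name that shows up later in the thread); the rsync flags preserve permissions, hard links, ACLs and xattrs:

Code:
zfs create rpool/proxmox/2007/samba-rsync
rsync -aHAX /rpool/proxmox/2007/samba-neu/ /rpool/proxmox/2007/samba-rsync/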
 
Still the same (minus the snapshots):

Code:
root@backup /tmp > zfs list rpool/proxmox/2007/samba rpool/proxmox/2007/samba-neu rpool/proxmox/2007/samba-rsync
NAME                             USED  AVAIL  REFER  MOUNTPOINT
rpool/proxmox/2007/samba         894G  12,7T   866G  /rpool/proxmox/2007/samba
rpool/proxmox/2007/samba-neu     894G  12,7T   866G  /rpool/proxmox/2007/samba-neu
rpool/proxmox/2007/samba-rsync   866G  12,7T   866G  /rpool/proxmox/2007/samba-rsync

I think I'm going to live with it as it is and not question it further. Thanks for your help!
 
Sorry for the late answer, but I had already dropped the copies and had to re-rsync it:

Code:
root@backup /rpool/proxmox/2007/samba-rsync > lsdu .
Path: .
  Total Size: 522,13 GB
  Disk Usage: 1,03 TB
  Compress Ratio: 0,4965

Exactly the same, and I still have no clue.
 
I ran into a real ashift problem today, and the difference is really huge:

Code:
[    6.546981] ZFS: Loaded module v0.6.5.2-47_g7c033da, ZFS pool version 5000, ZFS filesystem version 5
root@proxmox4 ~ > lvcreate -aly --size 64G --name zfs-test-ashift-9 san-slow
Logical volume "zfs-test-ashift-9" created
root@proxmox4 ~ > lvcreate -aly --size 64G --name zfs-test-ashift-12 san-slow
Logical volume "zfs-test-ashift-12" created
root@proxmox4 ~ > zpool create -o ashift=9 zfs-test-ashift-9 /dev/san-slow/zfs-test-ashift-9
root@proxmox4 ~ > zpool create -o ashift=12 zfs-test-ashift-12 /dev/san-slow/zfs-test-ashift-12
root@proxmox4 ~ > zfs set compression=gzip-9 zfs-test-ashift-9
root@proxmox4 ~ > zfs set compression=gzip-9 zfs-test-ashift-12

then send/receive
Code:
root@proxmox4 ~ > zfs send -R local-zfs/root-gzip9@copy | zfs receive -v zfs-test-ashift-9/root-gzip9
receiving full stream of local-zfs/root-gzip9@copy into zfs-test-ashift-9/root-gzip9@copy received 1,62GB stream in 23 seconds (72,3MB/sec)

root@proxmox4 ~ > zfs send -R local-zfs/root-gzip9@copy | zfs receive -v  zfs-test-ashift-12/root-gzip9
receiving full stream of local-zfs/root-gzip9@copy into zfs-test-ashift-12/root-gzip9@copy received 1,62GB stream in 23 seconds (72,3MB/sec)

yields

Code:
root@proxmox4 ~ > zpool list zfs-test-ashift-9 zfs-test-ashift-12
NAME                 SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP   HEALTH  ALTROOT
zfs-test-ashift-12  63,5G  1,68G  61,8G         -     1%     2%  1.00x   ONLINE  -
zfs-test-ashift-9   63,5G   848M  62,7G         -     0%     1%  1.00x   ONLINE  -

root@proxmox4 ~ > zfs list -o name,avail,used,refer,lused,lrefer,compressratio -r zfs-test-ashift-9  zfs-test-ashift-12
NAME                           AVAIL   USED  REFER  LUSED  LREFER  RATIO
zfs-test-ashift-12             59,8G  1,68G    96K  1,57G     40K  1.01x
zfs-test-ashift-12/root-gzip9  59,8G  1,68G  1,68G  1,57G   1,57G  1.01x
zfs-test-ashift-9              60,7G   848M    19K  1,51G   9,50K  1.88x
zfs-test-ashift-9/root-gzip9   60,7G   848M   848M  1,51G   1,51G  1.88x

The difference is really huge!
 
The difference is really huge!
If you are referring to the difference in used space, this is expected, since the minimum allocation unit with ashift=12 is 4K, which is 8 times larger than with ashift=9 (512 B). If your pool is populated with a lot of small files, the wasted storage can be orders of magnitude larger with ashift=12 than with ashift=9.
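
A small worked illustration of the rounding, with assumed numbers and ignoring RAID-Z parity and padding: a 4 KiB record that gzip-9 compresses to roughly 600 bytes still occupies a whole 4 KiB sector with ashift=12, but only two 512 B sectors with ashift=9:

Code:
# allocation = compressed block size rounded up to the sector size (2^ashift)
echo $(( (600 + 4096 - 1) / 4096 * 4096 ))   # ashift=12: 4096 bytes on disk
echo $(( (600 +  512 - 1) /  512 *  512 ))   # ashift=9:  1024 bytes on disk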
 
