ZFS Rewrite doesn’t fully move all metadata/small files to Special VDEV

BICEPS

I should mention up front that my setup is a little unusual: the same three SSDs double as the boot drive and as a 3-way mirrored special vdev, with a 200G partition on each SSD set aside just for the special vdev. I am running a fresh install of Proxmox 9.1 with the pool on the host itself. I have set special_small_blocks to 64K, which in my case, combined with metadata, should put roughly 75G on the special vdev (8.67G + 66.3G), yet my sVDEV is only filled to 23G.

Code:
Blocks  LSIZE   PSIZE   ASIZE     avg    comp   %Total  Type
  332K  41.5G   4.33G   8.67G   26.7K    9.58     0.09      L1 Total

  block   psize                lsize                asize
   size   Count   Size   Cum.  Count   Size   Cum.  Count   Size   Cum.
    512:  32.7K  16.4M  16.4M  32.7K  16.4M  16.4M      0      0      0
     1K:  49.6K  59.4M  75.7M  49.6K  59.4M  75.7M      0      0      0
     2K:  15.5K  41.1M   117M  15.5K  41.1M   117M      0      0      0
     4K:   483K  1.89G  2.01G  15.1K  82.4M   199M   113K   452M   452M
     8K:  63.8K   642M  2.64G  18.0K   208M   407M   513K  4.11G  4.55G
    16K:  83.0K  1.82G  4.46G   208K  3.38G  3.78G  93.4K  1.98G  6.53G
    32K:   293K  13.4G  17.8G  58.7K  2.86G  6.64G   218K  10.1G  16.6G
    64K:   486K  43.0G  60.9G  67.5K  5.73G  12.4G   566K  49.7G  66.3G
   128K:  79.1M  9.88T  9.94T  80.1M  10.0T  10.0T  79.1M  9.88T  9.95T
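
For anyone wanting to reproduce the numbers: the histogram above is from zdb's block statistics, roughly the invocation below (from memory, so the exact flags may differ), and the ~75G figure is just my back-of-the-envelope sum from the ASIZE column.

Code:
# Block statistics / block-size histogram for the pool (-L skips leak checking):
zdb -Lbbbs tank

# Rough expectation for the special vdev, taken from the ASIZE column above:
#   L1 (metadata) total:                  ~8.67G
#   cumulative ASIZE of blocks <= 64K:    ~66.3G
#   expected on the special vdev:         ~75G   (vs ~23G actually allocated)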

Code:
root@pve:~# zpool list -v
NAME                                                      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool                                                    28.5G  12.6G  15.9G        -         -    47%    44%  1.00x    ONLINE  -
  mirror-0                                               28.5G  12.6G  15.9G        -         -    47%  44.2%      -    ONLINE
    ata-SAMSUNG_MZ7KM480HAHP-00005_S2HSNX0H508033-part3  29.0G      -      -        -         -      -      -      -    ONLINE
    ata-SAMSUNG_MZ7KM480HAHP-00005_S2HSNX0H508401-part3  29.0G      -      -        -         -      -      -      -    ONLINE
    ata-SAMSUNG_MZ7KM480HAHP-00005_S2HSNX0H508422-part3  29.0G      -      -        -         -      -      -      -    ONLINE
tank                                                     25.6T  9.95T  15.7T        -         -     9%    38%  1.00x    ONLINE  -
  mirror-0                                               10.9T  4.35T  6.56T        -         -    23%  39.9%      -    ONLINE
    wwn-0x5000cca253c8e637-part1                         10.9T      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca253c744ae-part1                         10.9T      -      -        -         -      -      -      -    ONLINE
  mirror-1                                               14.5T  5.59T  8.96T        -         -     0%  38.4%      -    ONLINE
    ata-WDC_WUH721816ALE6L4_2CGRLEZP                     14.6T      -      -        -         -      -      -      -    ONLINE
    ata-WUH721816ALE6L4_2BJMBDBN                         14.6T      -      -        -         -      -      -      -    ONLINE
special                                                      -      -      -        -         -      -      -      -         -
  mirror-2                                                199G  19.9G   179G        -         -    14%  10.0%      -    ONLINE
    wwn-0x5002538c402f3ace-part4                          200G      -      -        -         -      -      -      -    ONLINE
    wwn-0x5002538c402f3afc-part4                          200G      -      -        -         -      -      -      -    ONLINE
    wwn-0x5002538c402f3823-part4                          200G      -      -        -         -      -      -      -    ONLINE


Any ideas why there is such a large discrepancy? I made sure to turn off the SMB and NFS shares while zfs rewrite -rv /tank was running.
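
(In case it helps, this is roughly how I have been checking whether the rewrite actually lands on the special mirror; these are standard zpool commands, nothing exotic.)

Code:
# Per-vdev allocation; the mirror-2 line under "special" should grow toward ~75G:
zpool list -v tank

# Per-vdev I/O while the rewrite runs (in another terminal); new writes
# should show up on mirror-2 under "special":
zpool iostat -v tank 5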

Posted on L1Techs Forum as well.


EDIT: clarified missing pool information
 
Hello, in my setup a ZFS special device is part of a vdev (vdevN) together with some HDDs.
Where can I see the ZFS data pool?
 
zpool list -v

You list only a single pool named "special" consisting of a single (triple!) mirrored vdev! There is no "Special Device" involved!

Not sure what you want to achieve and what to recommend... sorry.
 
Hi, on a second read I can see it's confusing that I left out details of the whole pool. Here is the full layout:

Code:
root@pve:~/useful-scripts# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 6.30G in 00:00:15 with 0 errors on Fri Dec  5 11:44:13 2025
config:

        NAME                                                     STATE     READ WRITE CKSUM
        rpool                                                    ONLINE       0     0     0
          mirror-0                                               ONLINE       0     0     0
            ata-SAMSUNG_MZ7KM480HAHP-00005_S2HSNX0H508033-part3  ONLINE       0     0     0
            ata-SAMSUNG_MZ7KM480HAHP-00005_S2HSNX0H508401-part3  ONLINE       0     0     0
            ata-SAMSUNG_MZ7KM480HAHP-00005_S2HSNX0H508422-part3  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
  scan: resilvered 32.1G in 00:02:09 with 0 errors on Fri Dec  5 11:46:25 2025
config:

        NAME                                  STATE     READ WRITE CKSUM
        tank                                  ONLINE       0     0     0
          mirror-0                            ONLINE       0     0     0
            wwn-0x5000cca253c8e637-part1      ONLINE       0     0     0
            wwn-0x5000cca253c744ae-part1      ONLINE       0     0     0
          mirror-1                            ONLINE       0     0     0
            ata-WDC_WUH721816ALE6L4_2CGRLEZP  ONLINE       0     0     0
            ata-WUH721816ALE6L4_2BJMBDBN      ONLINE       0     0     0
        special
          mirror-2                            ONLINE       0     0     0
            wwn-0x5002538c402f3ace-part4      ONLINE       0     0     0
            wwn-0x5002538c402f3afc-part4      ONLINE       0     0     0
            wwn-0x5002538c402f3823-part4      ONLINE       0     0     0

Code:
$ zfs get special_small_blocks tank
NAME  PROPERTY              VALUE                 SOURCE
tank  special_small_blocks  64K                   local

$ zfs get special_small_blocks rpool
NAME   PROPERTY              VALUE                 SOURCE
rpool  special_small_blocks  0                     default
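
(I have not pasted the recursive view, but since special_small_blocks is a per-dataset, inheritable property, checking that no child dataset overrides the 64K value back to 0 would look something like this:)

Code:
# Recursive check across all datasets in the pool; any line with VALUE 0 and
# SOURCE "local" would keep that dataset's small blocks off the special vdev:
zfs get -r -t filesystem,volume -o name,value,source special_small_blocks tank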

I want to reiterate that the special vdev lives on the same SSDs as the boot drive (rpool), just on separate partitions, if that matters.

Also, it seems like metadata isn't even on the sVDEV? Running find /tank -type f -print0 | xargs .... taxes the HDD pool with zero activity on the sVDEV, and iostat shows ~80% utilization on the HDDs, but my understanding is that this kind of metadata-heavy walk should be accelerated by the special vdev?
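
(If anyone wants to reproduce what I am seeing: I watch per-vdev I/O while the find runs, and the zdb step for inspecting a single file's block pointers is my best guess at the right invocation; the path, dataset and object number below are placeholders.)

Code:
# Per-vdev reads while the find/xargs walk runs; if metadata had been moved,
# the reads would show up on the special mirror (mirror-2), not the HDDs:
zpool iostat -v tank 2

# Spot-check one file: get its object (inode) number, then dump its block
# pointers and check which vdev the DVAs point at:
ls -i /tank/some/file          # placeholder path
zdb -ddddd tank/<dataset> <object-number>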