"Hybrid" pool understanding

pamudi12

New Member
Feb 2, 2026
Hello everyone.

I have been a happy Proxmox user since 2021 and use it for my home server. I run only VMs, no LXCs. Over time I have added SSDs and replaced SSDs with bigger ones, but now I am at the point where I need much more disk space, so I considered creating a "hybrid" pool consisting of HDDs (for media files) and SSDs (for OS disks, databases, etc.). I could have created separate HDD and SSD pools, but I thought it would be nice if the HDDs were accelerated by also storing the ZFS metadata on the SSDs.

This was my pool layout before the change:
- ZFS Pool 1 for OS:
Mirror VDEV of 2x Intel D3-S4510 (240 GB S-ATA SSD)​
- ZFS Pool 2 for VMs (dozer):
Mirror VDEV of 2x Intel D3-S4520 (3.84 TB S-ATA SSD)

This is my pool layout after the change:
- ZFS Pool 1 for OS:
Mirror VDEV of 2x Intel D3-S4510 (240 GB S-ATA SSD)​
- ZFS Pool 2 for VMs (dozer2):
Mirror VDEV of 2x Western Digital WD120EFAX (12 TB S-ATA HDD)
Special Mirror VDEV of 2x Intel D3-S4520 (3.84 TB S-ATA SSD)

I backed up all of my VMs, destroyed the VM pool dozer, added the HDDs, and created a new pool dozer2.

Bash:
zpool create -o ashift=12 dozer2 mirror /dev/disk/by-id/ata-WDC_WD120EFAX-68UNTN0_XXXXXXXX /dev/disk/by-id/ata-WDC_WD120EFAX-68UNTN0_YYYYYYYY
zfs set compression=on dozer2
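As a side note on the create command: ashift is the base-2 logarithm of the sector size ZFS will use for the vdev, and it cannot be changed after creation, so it is worth getting right up front. A minimal sketch of what ashift=12 means:

```shell
# ashift is log2 of the sector size; ashift=12 means 2^12 = 4096-byte sectors,
# matching the 4K physical sectors of most modern drives. It is fixed per vdev
# at creation time and cannot be changed afterwards.
ashift=12
echo "sector size: $((2 ** ashift)) bytes"
```

With these WD and Intel drives, 4 KiB sectors are the safe choice.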

I created new datasets:

Bash:
root@pve:~# mkdir /etc/zfs/keys
root@pve:~# chmod 700 /etc/zfs/keys
root@pve:~# openssl rand -hex -out /etc/zfs/keys/dozer2encrypted.key 32
root@pve:~# zfs create -o encryption=aes-256-gcm -o keyformat=hex -o keylocation=file:///etc/zfs/keys/dozer2encrypted.key dozer2/encrypted
root@pve:~# zfs create dozer2/encrypted/hybrid-016k
root@pve:~# zfs create dozer2/encrypted/hybrid-ssd-016k
root@pve:~# zfs create dozer2/encrypted/hybrid-128k
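One thing worth double-checking in the key setup above: with keyformat=hex, ZFS expects a 256-bit key written as exactly 64 hex characters, which is what `openssl rand -hex 32` produces (32 random bytes, hex-encoded). A self-contained sanity check, using a throwaway key rather than the real key file:

```shell
# keyformat=hex with aes-256-gcm expects a 32-byte (256-bit) key encoded as
# 64 hex digits. openssl rand -hex 32 generates 32 random bytes, hex-encoded.
key=$(openssl rand -hex 32)
echo "key length: ${#key} characters"
[ "${#key}" -eq 64 ] && echo "key format OK for keyformat=hex"
```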

I added the SSDs as Special VDev:

Bash:
root@pve:~# zpool add dozer2 -o ashift=12 special mirror /dev/disk/by-id/ata-SSDSC2KB038TZR_XXXXXXXXXXXXXXXXXX /dev/disk/by-id/ata-SSDSC2KB038TZR_YYYYYYYYYYYYYYYYYY

I set the special_small_blocks property on the dataset hybrid-ssd-016k, as I want all disks (ZVols) underneath it to be stored purely on the SSDs:

Bash:
root@pve:~# zfs set special_small_blocks=16K dozer2/encrypted/hybrid-ssd-016k
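The intent here: with volblocksize=16K and special_small_blocks=16K, every zvol block is less than or equal to the threshold and should therefore land on the special vdev. A tiny sketch of that decision rule, as I understand it from zfsprops(7) (sizes are illustrative):

```shell
# special_small_blocks decision rule: a block is placed on the special
# allocation class iff its size is <= the configured threshold.
volblocksize=$((16 * 1024))            # 16K zvol blocks
special_small_blocks=$((16 * 1024))    # threshold set on hybrid-ssd-016k

if [ "$volblocksize" -le "$special_small_blocks" ]; then
    echo "16K blocks -> special vdev (SSD)"
else
    echo "16K blocks -> regular vdev (HDD)"
fi
```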

I added the datasets hybrid-016k, hybrid-ssd-016k and hybrid-128k to the Proxmox GUI using different block sizes.

Then I restored all of my VMs to the new pool using different datasets as restore target.

Bash:
root@pve:~# zfs list -o name,type,used,special_small_blocks -r dozer2
NAME                                               TYPE         USED  SPECIAL_SMALL_BLOCKS
dozer2                                             filesystem  2.19T                     0
dozer2/encrypted                                   filesystem  2.19T                     0
dozer2/encrypted/hybrid-016k                       filesystem   463G                     0
dozer2/encrypted/hybrid-016k/vm-206-cloudinit      volume         6M                     -
dozer2/encrypted/hybrid-016k/vm-206-disk-0         volume      12.2G                     -
dozer2/encrypted/hybrid-016k/vm-206-disk-1         volume      5.08G                     -
dozer2/encrypted/hybrid-016k/vm-240-cloudinit      volume         6M                     -
dozer2/encrypted/hybrid-016k/vm-240-disk-0         volume      12.2G                     -
dozer2/encrypted/hybrid-016k/vm-240-disk-1         volume      2.03G                     -
dozer2/encrypted/hybrid-016k/vm-260-cloudinit      volume         6M                     -
dozer2/encrypted/hybrid-016k/vm-260-disk-0         volume      20.3G                     -
dozer2/encrypted/hybrid-016k/vm-260-disk-1         volume       355G                     -
dozer2/encrypted/hybrid-016k/vm-270-disk-0         volume      20.3G                     -
dozer2/encrypted/hybrid-016k/vm-300-cloudinit      volume         6M                     -
dozer2/encrypted/hybrid-016k/vm-300-disk-0         volume      10.2G                     -
dozer2/encrypted/hybrid-016k/vm-301-cloudinit      volume         6M                     -
dozer2/encrypted/hybrid-016k/vm-301-disk-0         volume      10.2G                     -
dozer2/encrypted/hybrid-016k/vm-320-cloudinit      volume         6M                     -
dozer2/encrypted/hybrid-016k/vm-320-disk-0         volume      10.2G                     -
dozer2/encrypted/hybrid-016k/vm-320-disk-1         volume      5.08G                     -
dozer2/encrypted/hybrid-128k                       filesystem  1.20T                     0
dozer2/encrypted/hybrid-128k/vm-270-cloudinit      volume         6M                     -
dozer2/encrypted/hybrid-128k/vm-270-disk-1         volume      30.1G                     -
dozer2/encrypted/hybrid-128k/vm-270-disk-2         volume      1.17T                     -
dozer2/encrypted/hybrid-ssd-016k                   filesystem   517G                   16K
dozer2/encrypted/hybrid-ssd-016k/base-900-disk-0   volume      13.6G                     -
dozer2/encrypted/hybrid-ssd-016k/base-910-disk-0   volume      12.0G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-103-disk-0     volume       129G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-200-cloudinit  volume         6M                     -
dozer2/encrypted/hybrid-ssd-016k/vm-200-disk-0     volume      15.2G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-200-disk-1     volume      5.08G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-210-cloudinit  volume         6M                     -
dozer2/encrypted/hybrid-ssd-016k/vm-210-disk-0     volume      18.3G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-210-disk-1     volume      18.3G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-220-cloudinit  volume         6M                     -
dozer2/encrypted/hybrid-ssd-016k/vm-220-disk-0     volume      12.2G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-220-disk-1     volume       102G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-221-cloudinit  volume         6M                     -
dozer2/encrypted/hybrid-ssd-016k/vm-221-disk-0     volume      12.2G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-221-disk-1     volume      81.3G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-230-cloudinit  volume         6M                     -
dozer2/encrypted/hybrid-ssd-016k/vm-230-disk-0     volume      12.2G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-230-disk-1     volume      15.2G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-251-cloudinit  volume         6M                     -
dozer2/encrypted/hybrid-ssd-016k/vm-251-disk-0     volume      10.2G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-251-disk-1     volume      10.2G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-280-cloudinit  volume         6M                     -
dozer2/encrypted/hybrid-ssd-016k/vm-280-disk-0     volume      15.2G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-280-disk-1     volume      10.2G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-290-cloudinit  volume         6M                     -
dozer2/encrypted/hybrid-ssd-016k/vm-290-disk-0     volume      15.2G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-295-cloudinit  volume         6M                     -
dozer2/encrypted/hybrid-ssd-016k/vm-295-disk-0     volume      10.2G                     -
dozer2/encrypted/hybrid-ssd-016k/vm-900-cloudinit  volume         6M                     -
dozer2/encrypted/hybrid-ssd-016k/vm-910-cloudinit  volume         6M                     -

I expected all ZVols underneath hybrid-ssd-016k to be stored on the SSDs, but that does not happen: the disk usage of the SSDs is very low.

Bash:
root@pve:~# zpool list -v dozer2
NAME                                        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
dozer2                                     14.4T  1.72T  12.7T        -         -     0%    11%  1.00x    ONLINE  -
  mirror-0                                 10.9T  1.72T  9.19T        -         -     0%  15.7%      -    ONLINE
    ata-WDC_WD120EFAX-68UNTN0_XXXXXXXX     10.9T      -      -        -         -      -      -      -    ONLINE
    ata-WDC_WD120EFAX-68UNTN0_YYYYYYYY     10.9T      -      -        -         -      -      -      -    ONLINE
special                                        -      -      -        -         -      -      -      -         -
  mirror-1                                 3.48T  6.23G  3.48T        -         -     0%  0.17%      -    ONLINE
    ata-SSDSC2KB038TZR_XXXXXXXXXXXXXXXXXX  3.49T      -      -        -         -      -      -      -    ONLINE
    ata-SSDSC2KB038TZR_YYYYYYYYYYYYYYYYYY  3.49T      -      -        -         -      -      -      -    ONLINE
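For scripting this check, the special vdev's ALLOC column can be pulled out of `zpool list -v` with awk. The snippet below runs against a captured sample of the output above so it is self-contained; on a live host you would feed it the real command instead of the here-doc:

```shell
# Extract the ALLOC value of the first mirror following the "special" marker
# line in `zpool list -v` output. Fed from a captured sample here; replace the
# here-doc with:  zpool list -v dozer2
special_alloc=$(awk '/^special/ {found=1; next} found && $1 ~ /^mirror/ {print $3; exit}' <<'EOF'
dozer2                                     14.4T  1.72T  12.7T
  mirror-0                                 10.9T  1.72T  9.19T
special                                        -      -      -
  mirror-1                                 3.48T  6.23G  3.48T
EOF
)
echo "special vdev allocated: $special_alloc"
```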

Did I do something wrong? Or does Proxmox VE not support this kind of pool layout?

Thank you in advance.
pamudi12
 
Thank you for the reference.

I just tried setting special_small_blocks to 1M for this particular dataset and added volblocksize to the output.

Bash:
root@pve:~# zfs set special_small_blocks=1M dozer2/encrypted/hybrid-ssd-016k
root@pve:~# zfs list -o name,type,volblocksize,special_small_blocks -r dozer2
NAME                                               TYPE        VOLBLOCK  SPECIAL_SMALL_BLOCKS
dozer2                                             filesystem         -                     0
dozer2/encrypted                                   filesystem         -                     0
dozer2/encrypted/hybrid-016k                       filesystem         -                     0
dozer2/encrypted/hybrid-016k/vm-206-cloudinit      volume           16K                     -
dozer2/encrypted/hybrid-016k/vm-206-disk-0         volume           16K                     -
dozer2/encrypted/hybrid-016k/vm-206-disk-1         volume           16K                     -
..
dozer2/encrypted/hybrid-128k                       filesystem         -                     0
dozer2/encrypted/hybrid-128k/vm-270-cloudinit      volume          128K                     -
dozer2/encrypted/hybrid-128k/vm-270-disk-1         volume          128K                     -
dozer2/encrypted/hybrid-128k/vm-270-disk-2         volume          128K                     -
dozer2/encrypted/hybrid-ssd-016k                   filesystem         -                    1M
dozer2/encrypted/hybrid-ssd-016k/base-900-disk-0   volume           16K                     -
dozer2/encrypted/hybrid-ssd-016k/base-910-disk-0   volume           16K                     -
dozer2/encrypted/hybrid-ssd-016k/vm-103-disk-0     volume           16K                     -
dozer2/encrypted/hybrid-ssd-016k/vm-200-cloudinit  volume           16K                     -
dozer2/encrypted/hybrid-ssd-016k/vm-200-disk-0     volume           16K                     -
dozer2/encrypted/hybrid-ssd-016k/vm-200-disk-1     volume           16K                     -
...

I moved disks from hybrid-ssd-016k to hybrid-016k and back, but the data is still not stored on the SSDs.

Proxmox VE 9.1.4 uses ZFS 2.3.4.

Bash:
root@pve:~# zfs version
zfs-2.3.4-pve1
zfs-kmod-2.3.4-pve1

I think this feature requires ZFS version >= 2.4 to work for ZVols. The zfsprops(7) documentation changed between v2.3 and v2.4:

https://openzfs.github.io/openzfs-docs/man/v2.3/7/zfsprops.7.html#special_small_blocks
This value represents the threshold block size for including small file blocks into the special allocation class. Blocks smaller than or equal to this value will be assigned to the special allocation class while greater blocks will be assigned to the regular class. Valid values are zero or a power of two from 512 up to 1048576 (1 MiB). The default size is 0 which means no small file blocks will be allocated in the special class.
https://openzfs.github.io/openzfs-docs/man/v2.4/7/zfsprops.7.html#special_small_blocks
This value represents the threshold block size for including small file or zvol blocks into the special allocation class. Blocks smaller than or equal to this value after compression and encryption will be assigned to the special allocation class, while greater blocks will be assigned to the regular class. Valid values are from 0 to maximum block size ( 16 MiB ). The default size is 0 which means no small file or zvol blocks will be allocated in the special class.
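The relaxed value range is easy to see in code. A sketch of the v2.3 rule quoted above (0, or a power of two from 512 up to 1M), which v2.4 loosens to any value from 0 up to 16M:

```shell
# v2.3 rule: special_small_blocks must be 0, or a power of two in [512, 1M].
# (v2.4 accepts any value in [0, 16M], including non-powers of two.)
is_valid_v23() {
    local v=$1
    [ "$v" -eq 0 ] && return 0
    [ "$v" -ge 512 ] && [ "$v" -le $((1024 * 1024)) ] && [ $(( v & (v - 1) )) -eq 0 ]
}
is_valid_v23 16384 && echo "16K: valid under 2.3"
is_valid_v23 3000  || echo "3000: invalid under 2.3 (not a power of two)"
is_valid_v23 $((2 * 1024 * 1024)) || echo "2M: invalid under 2.3 (above 1M)"
```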
There is also a post on Phoronix: https://www.phoronix.com/news/OpenZFS_2.4-rc1-Released
Key Features in OpenZFS 2.4.0:
..
- Allow ZIL on special vdevs when available
- Extend special_small_blocks to land ZVOL writes on special vdevs (#14876), and allow non-power of two values
This seems to be the issue on GitHub: https://github.com/openzfs/zfs/pull/14876