Hello everyone.
I have been a happy Proxmox user since 2021 and use it for my home server. I run only VMs, no LXCs. Over time, I have added SSDs and replaced SSDs with bigger ones. But now I am at the point where I need much more disk space, so I considered creating a "hybrid" pool consisting of HDDs (for media files) and SSDs (for OS disks, databases, etc.). I could have created separate HDD and SSD pools, but I thought it would be nice if the HDDs could be accelerated by also storing the ZFS metadata on the SSDs.
This was my pool layout before the change:
- ZFS Pool 1 for OS:
Mirror VDEV of 2x Intel D3-S4510 (240 GB SATA SSD)
- ZFS Pool 2 for VMs (dozer): Mirror VDEV of 2x Intel D3-S4520 (3.84 TB SATA SSD)
This is my pool layout after the change:
- ZFS Pool 1 for OS:
Mirror VDEV of 2x Intel D3-S4510 (240 GB SATA SSD)
- ZFS Pool 2 for VMs (dozer2): Mirror VDEV of 2x Western Digital WD120EFAX (12 TB SATA HDD)
Special Mirror VDEV of 2x Intel D3-S4520 (3.84 TB SATA SSD)
I backed up all of my VMs, deleted the VM pool dozer, added the HDDs, and created a new pool dozer2:
Bash:
zpool create -o ashift=12 dozer2 mirror /dev/disk/by-id/ata-WDC_WD120EFAX-68UNTN0_XXXXXXXX /dev/disk/by-id/ata-WDC_WD120EFAX-68UNTN0_YYYYYYYY
zfs set compression=on dozer2
I created new datasets:
Bash:
root@pve:~# mkdir /etc/zfs/keys
root@pve:~# chmod 700 /etc/zfs/keys
root@pve:~# openssl rand -hex -out /etc/zfs/keys/dozer2encrypted.key 32
root@pve:~# zfs create -o encryption=aes-256-gcm -o keyformat=hex -o keylocation=file:///etc/zfs/keys/dozer2encrypted.key dozer2/encrypted
root@pve:~# zfs create dozer2/encrypted/hybrid-016k
root@pve:~# zfs create dozer2/encrypted/hybrid-ssd-016k
root@pve:~# zfs create dozer2/encrypted/hybrid-128k
I added the SSDs as a special VDEV:
Bash:
root@pve:~# zpool add dozer2 -o ashift=12 special mirror /dev/disk/by-id/ata-SSDSC2KB038TZR_XXXXXXXXXXXXXXXXXX /dev/disk/by-id/ata-SSDSC2KB038TZR_YYYYYYYYYYYYYYYYYY
I set the special_small_blocks property on the dataset hybrid-ssd-016k because I want all disks (ZVols) beneath it to be stored purely on the SSDs:
Bash:
root@pve:~# zfs set special_small_blocks=16K dozer2/encrypted/hybrid-ssd-016k
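As I understand the documented behavior, a level-0 data block is routed to the special VDEV when its size is at most special_small_blocks, and for a ZVol the relevant block size is its volblocksize. Here is a toy model of that decision — an illustration of the rule as I understand it, not the real OpenZFS code:

```python
# Toy model of OpenZFS's special-allocation decision for level-0 data
# blocks. Illustration only: a data block lands on the special vdev when
# its size is <= the dataset's special_small_blocks (metadata always
# goes to the special vdev regardless of this property).

def goes_to_special(block_size: int, special_small_blocks: int) -> bool:
    """Return True if a data block of block_size bytes would be routed
    to the special vdev under the modeled rule."""
    return special_small_blocks > 0 and block_size <= special_small_blocks

K = 1024
# A zvol writes blocks of exactly volblocksize bytes, so with
# volblocksize=16K and special_small_blocks=16K every data block qualifies:
print(goes_to_special(16 * K, 16 * K))   # True
# ...but with a larger volblocksize (e.g. 128K) no data block qualifies,
# and only metadata would end up on the SSDs:
print(goes_to_special(128 * K, 16 * K))  # False
```

So if this model is right, the effective volblocksize of the restored ZVols (zfs get volblocksize) is what decides whether their data can land on the special VDEV at all.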
I added the datasets hybrid-016k, hybrid-ssd-016k and hybrid-128k as ZFS storages in the Proxmox GUI, using different block sizes.
Then I restored all of my VMs to the new pool using different datasets as restore target.
Bash:
root@pve:~# zfs list -o name,type,used,special_small_blocks -r dozer2
NAME                                               TYPE          USED  SPECIAL_SMALL_BLOCKS
dozer2                                             filesystem   2.19T  0
dozer2/encrypted                                   filesystem   2.19T  0
dozer2/encrypted/hybrid-016k                       filesystem    463G  0
dozer2/encrypted/hybrid-016k/vm-206-cloudinit      volume          6M  -
dozer2/encrypted/hybrid-016k/vm-206-disk-0         volume       12.2G  -
dozer2/encrypted/hybrid-016k/vm-206-disk-1         volume       5.08G  -
dozer2/encrypted/hybrid-016k/vm-240-cloudinit      volume          6M  -
dozer2/encrypted/hybrid-016k/vm-240-disk-0         volume       12.2G  -
dozer2/encrypted/hybrid-016k/vm-240-disk-1         volume       2.03G  -
dozer2/encrypted/hybrid-016k/vm-260-cloudinit      volume          6M  -
dozer2/encrypted/hybrid-016k/vm-260-disk-0         volume       20.3G  -
dozer2/encrypted/hybrid-016k/vm-260-disk-1         volume        355G  -
dozer2/encrypted/hybrid-016k/vm-270-disk-0         volume       20.3G  -
dozer2/encrypted/hybrid-016k/vm-300-cloudinit      volume          6M  -
dozer2/encrypted/hybrid-016k/vm-300-disk-0         volume       10.2G  -
dozer2/encrypted/hybrid-016k/vm-301-cloudinit      volume          6M  -
dozer2/encrypted/hybrid-016k/vm-301-disk-0         volume       10.2G  -
dozer2/encrypted/hybrid-016k/vm-320-cloudinit      volume          6M  -
dozer2/encrypted/hybrid-016k/vm-320-disk-0         volume       10.2G  -
dozer2/encrypted/hybrid-016k/vm-320-disk-1         volume       5.08G  -
dozer2/encrypted/hybrid-128k                       filesystem   1.20T  0
dozer2/encrypted/hybrid-128k/vm-270-cloudinit      volume          6M  -
dozer2/encrypted/hybrid-128k/vm-270-disk-1         volume       30.1G  -
dozer2/encrypted/hybrid-128k/vm-270-disk-2         volume       1.17T  -
dozer2/encrypted/hybrid-ssd-016k                   filesystem    517G  16K
dozer2/encrypted/hybrid-ssd-016k/base-900-disk-0   volume       13.6G  -
dozer2/encrypted/hybrid-ssd-016k/base-910-disk-0   volume       12.0G  -
dozer2/encrypted/hybrid-ssd-016k/vm-103-disk-0     volume        129G  -
dozer2/encrypted/hybrid-ssd-016k/vm-200-cloudinit  volume          6M  -
dozer2/encrypted/hybrid-ssd-016k/vm-200-disk-0     volume       15.2G  -
dozer2/encrypted/hybrid-ssd-016k/vm-200-disk-1     volume       5.08G  -
dozer2/encrypted/hybrid-ssd-016k/vm-210-cloudinit  volume          6M  -
dozer2/encrypted/hybrid-ssd-016k/vm-210-disk-0     volume       18.3G  -
dozer2/encrypted/hybrid-ssd-016k/vm-210-disk-1     volume       18.3G  -
dozer2/encrypted/hybrid-ssd-016k/vm-220-cloudinit  volume          6M  -
dozer2/encrypted/hybrid-ssd-016k/vm-220-disk-0     volume       12.2G  -
dozer2/encrypted/hybrid-ssd-016k/vm-220-disk-1     volume        102G  -
dozer2/encrypted/hybrid-ssd-016k/vm-221-cloudinit  volume          6M  -
dozer2/encrypted/hybrid-ssd-016k/vm-221-disk-0     volume       12.2G  -
dozer2/encrypted/hybrid-ssd-016k/vm-221-disk-1     volume       81.3G  -
dozer2/encrypted/hybrid-ssd-016k/vm-230-cloudinit  volume          6M  -
dozer2/encrypted/hybrid-ssd-016k/vm-230-disk-0     volume       12.2G  -
dozer2/encrypted/hybrid-ssd-016k/vm-230-disk-1     volume       15.2G  -
dozer2/encrypted/hybrid-ssd-016k/vm-251-cloudinit  volume          6M  -
dozer2/encrypted/hybrid-ssd-016k/vm-251-disk-0     volume       10.2G  -
dozer2/encrypted/hybrid-ssd-016k/vm-251-disk-1     volume       10.2G  -
dozer2/encrypted/hybrid-ssd-016k/vm-280-cloudinit  volume          6M  -
dozer2/encrypted/hybrid-ssd-016k/vm-280-disk-0     volume       15.2G  -
dozer2/encrypted/hybrid-ssd-016k/vm-280-disk-1     volume       10.2G  -
dozer2/encrypted/hybrid-ssd-016k/vm-290-cloudinit  volume          6M  -
dozer2/encrypted/hybrid-ssd-016k/vm-290-disk-0     volume       15.2G  -
dozer2/encrypted/hybrid-ssd-016k/vm-295-cloudinit  volume          6M  -
dozer2/encrypted/hybrid-ssd-016k/vm-295-disk-0     volume       10.2G  -
dozer2/encrypted/hybrid-ssd-016k/vm-900-cloudinit  volume          6M  -
dozer2/encrypted/hybrid-ssd-016k/vm-910-cloudinit  volume          6M  -
I expected all ZVols underneath hybrid-ssd-016k to be stored on the SSDs, but that did not happen; the disk usage of the SSDs is very low:
Bash:
root@pve:~# zpool list -v dozer2
NAME                                        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
dozer2                                     14.4T  1.72T  12.7T        -         -     0%    11%  1.00x  ONLINE  -
  mirror-0                                 10.9T  1.72T  9.19T        -         -     0%  15.7%      -  ONLINE
    ata-WDC_WD120EFAX-68UNTN0_XXXXXXXX     10.9T      -      -        -         -      -      -      -  ONLINE
    ata-WDC_WD120EFAX-68UNTN0_YYYYYYYY     10.9T      -      -        -         -      -      -      -  ONLINE
special                                        -      -      -        -         -      -      -      -  -
  mirror-1                                 3.48T  6.23G  3.48T        -         -     0%  0.17%      -  ONLINE
    ata-SSDSC2KB038TZR_XXXXXXXXXXXXXXXXXX  3.49T      -      -        -         -      -      -      -  ONLINE
    ata-SSDSC2KB038TZR_YYYYYYYYYYYYYYYYYY  3.49T      -      -        -         -      -      -      -  ONLINE
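For what it's worth, a quick back-of-the-envelope check of the numbers above (values copied from the zfs/zpool output; the G/T figures are rounded, so this is only approximate):

```python
# Sanity check: if the ~517G used under hybrid-ssd-016k had actually
# landed on the special mirror, its ALLOC would be hundreds of
# gigabytes, not 6.23G. Values taken from the command output above.

used_hybrid_ssd_g = 517    # USED of dozer2/encrypted/hybrid-ssd-016k (in GB)
special_alloc_g = 6.23     # ALLOC of mirror-1, the special vdev (in GB)

fraction_on_ssd = special_alloc_g / used_hybrid_ssd_g
print(f"{fraction_on_ssd:.1%}")  # prints 1.2%
```

Only about 1% of the data under hybrid-ssd-016k sits on the SSDs, which looks to me like only metadata (and perhaps a few small blocks) ended up there, not the ZVol data itself.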
Did I do something wrong? Or does Proxmox VE not support this kind of pool layout?
Thank you in advance.
pamudi12