Limit size of ZFS partition with installer

mailinglists

Renowned Member
Mar 14, 2012
Hi,

I have 2 SSD disks which I would like to install ZFS to. I would also like to use them for SLOG and L2ARC.
I have additional >2TB HDDs which will hold the VM data and use the SLOG and L2ARC partitions on the smaller SSDs.
The server will not boot from the HDDs, because they are too big for its firmware to boot from. That is why I am installing to the SSDs. I do not think it is possible to shrink a ZFS pool afterwards either.

As far as I can see, the installer has no option, like it does with LVM, to keep the ZFS partitions below a certain size.

How can one install Proxmox to the SSD disks while still leaving a bit of space for SLOG and L2ARC partitions?
 
Hmm... here is a stupid idea, or a workaround, if there is no official option.
Use smaller disks for the install, so ZFS creates smaller partitions.
Then replace them with the actual disks: copy the partition schema, install GRUB, do a zpool replace, and voila.
Now there is room to grow the existing partitions or add new ones for SLOG and L2ARC.
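Roughly, and assuming the installer pool is called rpool and the disks show up as /dev/sdX (small) and /dev/sdY (big), which are only placeholder names, the swap could look like this:
Code:
# copy the GPT from the small disk to the big one and give the copy unique GUIDs
sgdisk --replicate=/dev/sdY /dev/sdX
sgdisk --randomize-guids /dev/sdY
# move the ZFS data over to partition 2 of the big disk, then make it bootable
zpool replace rpool sdX2 sdY2
grub-install /dev/sdY
# repeat for the second disk; let the resilver finish before pulling the small disks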

But hopefully there is an official and elegant solution. Let me know, please.
 
How can one install Proxmox to the SSD disks while still leaving a bit of space for SLOG and L2ARC partitions?

IMHO such a setup does not make any sense. You need to put them on a separate disk, or you will not gain anything.
 
IMHO such a setup does not make any sense. You need to put them on a separate disk, or you will not gain anything.

If the system disks are not used for anything else, it still makes sense to use them as a read cache though. Otherwise they idle a lot.
 
I did it using manual workarounds, which would be unnecessary if you added such an option to the installer (just limit the maximum space used for the partitions).
The result in my case is a 26.6x increase in sync write IOPS (FSYNCS/SECOND): from 80 to 2138.
@dietmar, would you now say that it makes sense?

Proof:
Code:
root@XYZ:~# zpool list vmpool -v
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
vmpool  5.44T  1.00G  5.44T         -     0%     0%  1.00x  ONLINE  -
  mirror  2.72T   510M  2.72T         -     0%     0%
    sdc      -      -      -         -      -      -
    sdd      -      -      -         -      -      -
  mirror  2.72T   516M  2.72T         -     0%     0%
    sde      -      -      -         -      -      -
    sdf      -      -      -         -      -      -
root@XYZ:~# pveperf /vmpool/
CPU BOGOMIPS:      128009.88
REGEX/SECOND:      1483773
HD SIZE:           5394.00 GB (vmpool)
FSYNCS/SECOND:     80.39
DNS EXT:           50.17 ms
DNS INT:           1.50 ms (XYZ.si)
root@XYZ:~# zpool add vmpool log /dev/sda4 /dev/sdb4
root@XYZ:~# zpool list vmpool -v
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
vmpool  5.44T  1.00G  5.44T         -     0%     0%  1.00x  ONLINE  -
  mirror  2.72T   510M  2.72T         -     0%     0%
    sdc      -      -      -         -      -      -
    sdd      -      -      -         -      -      -
  mirror  2.72T   516M  2.72T         -     0%     0%
    sde      -      -      -         -      -      -
    sdf      -      -      -         -      -      -
log      -      -      -         -      -      -
  sda4  9.94G      0  9.94G         -     0%     0%
  sdb4  9.94G      0  9.94G         -     0%     0%
root@XYZ:~# pveperf /vmpool/
CPU BOGOMIPS:      128009.88
REGEX/SECOND:      1545261
HD SIZE:           5394.00 GB (vmpool)
FSYNCS/SECOND:     2138.55
DNS EXT:           37.66 ms
DNS INT:           1.40 ms (XYZ)
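
If anyone wants to verify that the log devices really absorb the synchronous writes, watching the per-vdev I/O during a sync-heavy workload is enough (standard ZFS command, device names as in the listing above):
Code:
zpool iostat -v vmpool 5
# while guests do sync writes, the "log" vdevs should show write ops
# and the HDD mirrors should stay comparatively quiet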

I also added an L2ARC, but that's another story:
Code:
root@XYZ:~# zpool add vmpool cache /dev/sda5 /dev/sdb5
root@XYZ:~# zpool list vmpool -v
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
vmpool  5.44T  1.00G  5.44T         -     0%     0%  1.00x  ONLINE  -
  mirror  2.72T   510M  2.72T         -     0%     0%
    sdc      -      -      -         -      -      -
    sdd      -      -      -         -      -      -
  mirror  2.72T   516M  2.72T         -     0%     0%
    sde      -      -      -         -      -      -
    sdf      -      -      -         -      -      -
log      -      -      -         -      -      -
  sda4  9.94G    12K  9.94G         -     0%     0%
  sdb4  9.94G      0  9.94G         -     0%     0%
cache      -      -      -         -      -      -
  sda5  11.1G     4K  11.1G         -     0%     0%
  sdb5  11.1G     4K  11.1G         -     0%     0%
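
The L2ARC hit rate takes a while to build up; the counters can be read from the kstat file that ZFS on Linux exposes (the grep pattern below only picks the obvious counters, there are more in that file):
Code:
# l2_hits / l2_misses / l2_size from the ARC statistics
grep -E '^l2_(hits|misses|size)' /proc/spl/kstat/zfs/arcstats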

Here is my partition table from sda and sdb, in case anyone wonders. And yes, I have 351.5 GB of free space between partitions 2 and 4.
Code:
root@XYZ:~# fdisk -l /dev/sda
Disk /dev/sda: 447.1 GiB, 480103981056 bytes, 937703088 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 2B009841-424C-4347-AA74-BA90804CE51A

Device         Start       End   Sectors  Size Type
/dev/sda1         34      2047      2014 1007K BIOS boot
/dev/sda2       2048 156285069 156283022 74.5G Solaris /usr & Apple ZFS
/dev/sda4  893427712 914399231  20971520   10G FreeBSD ZFS
/dev/sda5  914399232 937703054  23303823 11.1G FreeBSD ZFS
/dev/sda9  156285070 156301454     16385    8M Solaris reserved 1
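
For anyone who wants to reproduce the layout: partitions 4 and 5 were added by hand in the free space after the ZFS partition. A rough sketch with sgdisk follows; the sizes, labels and the a504 type code only mirror my table above (ZFS does not care about the GPT type code), and by default sgdisk places new partitions at the start of the free space rather than at the end of the disk, so adjust the start sectors to taste.
Code:
sgdisk -n 4:0:+10G -t 4:a504 -c 4:slog  /dev/sda
sgdisk -n 5:0:+11G -t 5:a504 -c 5:l2arc /dev/sda
partprobe /dev/sda   # or reboot, so the kernel picks up the new partitions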
 