Search results for query: raidz1 padding

  1. Dunuin

    ZFS pool layout and limiting zfs cache size

    The problem is that there are so many factors you need to take into account, because everyone has a different workload, that such a tool would be hard to use correctly. And all the numbers I calculated above are just theoretical, based on formulas for how ZFS should behave...
  2. F

    ZFS pool layout and limiting zfs cache size

    WOWZA!!!! I wish there was a tool that would offer recommended ZFS pool options, e.g. get user input: Do you prefer performance or storage? How many drives do you have? Are they SSDs or spindles? Will you be running databases or just hosting files? Are you hosting VMs or not? And then offer...
  3. Dunuin

    ZFS pool layout and limiting zfs cache size

    With just 5 drives your options would be: Raw storage: Parity loss: Padding loss: Keep 20% free: Real usable space: 8K random write IOPS: 8K random read IOPS: big sequential write throughput: big sequential read throughput: 5x 800 GB raidz1 @ 8K volblocksize: 4000 GB - 800 GB - 1200 GB - 400...
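The parity and padding figures in tables like this can be sketched in a few lines. The following is a hypothetical reconstruction, assuming the standard OpenZFS raidz allocation rules (one parity sector per stripe of data sectors, allocations rounded up to a multiple of parity + 1) and ashift=12, i.e. 4K sectors; it is not the exact calculation the poster used:

```python
import math

def raidz_overhead(volblocksize, ashift, ndisks, parity):
    """Sectors consumed by one zvol block on a raidz vdev,
    returned as (data, parity, padding). Parity is one sector per
    stripe of (ndisks - parity) data sectors; the allocation is
    then rounded up to a multiple of (parity + 1)."""
    data = math.ceil(volblocksize / (1 << ashift))
    par = math.ceil(data / (ndisks - parity)) * parity
    pad = -(data + par) % (parity + 1)
    return data, par, pad

# 5-disk raidz1, ashift=12 (4K sectors), as in the table above:
for vb in (8, 32):
    data, par, pad = raidz_overhead(vb * 1024, 12, 5, 1)
    usable = data / (data + par + pad)
    print(f"{vb}K volblocksize: {usable:.0%} of raw capacity usable")
# 8K volblocksize: 50% of raw capacity usable
# 32K volblocksize: 80% of raw capacity usable
```

At the default 8K volblocksize, half of the 4000 GB raw capacity goes to parity plus padding, which lines up with the 800 GB and 1200 GB deductions visible in the truncated table.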
  4. F

    ZFS pool layout and limiting zfs cache size

    Ah! This makes sense, no wonder I was experiencing lower read/write speeds, thanks for the explanation! OK, I will need to recreate the ZFS pool in a mirror config. So which option to choose? Mirror or RAID10? So the default 8K would work perfectly for both MySQL and Postgres? Got...
  5. Dunuin

    ZFS pool layout and limiting zfs cache size

    Raidz IOPS doesn't scale with the number of drives. So no matter if you use 5, 10 or 100 drives in a raidz, that pool isn't faster than a single drive alone... at least for IOPS. But with raidz you are forced to choose a bigger volblocksize compared to a single drive, so possibly more overhead when...
  6. F

    ZFS pool layout and limiting zfs cache size

    Sorry, I didn't understand what you mean by having IOPS slower than a single SSD? These SSDs are 12Gbps and a bit expensive ones to run in a homelab, so I wanted to squeeze out as much usable storage as I possibly could. I guess I could try them in a mirror config as suggested because I currently...
  7. Dunuin

    ZFS pool layout and limiting zfs cache size

    Keep in mind that using raidz you basically only get the IOPS performance of nearly a single SSD. So in terms of IOPS your pool will be slower than just using a single 800GB SSD. A VM storage likes IOPS, so a striped mirror (raid10) would be preferable. Also keep in mind that with a raidz1 of 5...
  8. Dunuin

    Create ZFS fails on GUI with "unknown" - on commandline "raidz contains devices of different sizes"

    Not sure about PBS, but if you use PVE's webUI for creating ZFS pools it won't optimize anything. It will just use OpenZFS default values, whether they make sense or not, no matter what your pool looks like. The volblocksize for example will always be 8K, but as soon as you use any kind of raidz1/2/3...
  9. A

    Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    With ashift=12 the same - when the FIO param "size" > 2G with "bs"=4k, ZFS performance drops: # fio --time_based --name=benchmark --size=1800M --runtime=30 --filename=/mnt/zfs/g-fio.test --ioengine=libaio --randrepeat=0 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0...
  10. Dunuin

    Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    You created your pool "zfs-p1" with an ashift of 13, so an 8K blocksize is used. Then you got a raidz1 with 4 disks, so you want a volblocksize of at least 4 times your ashift, so 32K volblocksize (4x 8K), to only lose 33% of your raw capacity instead of 50%. So each time you do a 4K read/write to a...
  11. Dunuin

    Cannot allocate 2TB disk space while over 3TB is free

    Don't use df, use zfs list to see how much space is left. And I guess you are using a raidz1/2/3 and didn't increase the volblocksize? In that case you get a lot of padding overhead and it might be possible that storing a 2 TB virtual disk will consume 4 TB on your pool, so you are running out of...
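A quick back-of-the-envelope check of the 2 TB consuming 4 TB claim. The layout here is an assumption for illustration (a 5-disk raidz1 with ashift=12 and the default 8K volblocksize); the thread excerpt doesn't state the poster's actual disk count:

```python
# Assumed layout: 5-disk raidz1, ashift=12 (4K sectors), 8K volblocksize.
SECTOR_SHIFT, NDISKS, PARITY = 12, 5, 1
volblock = 8 * 1024

data = volblock >> SECTOR_SHIFT               # 2 data sectors per block
stripes = -(-data // (NDISKS - PARITY))       # ceil division -> 1 stripe
total = data + stripes * PARITY               # 2 data + 1 parity = 3 sectors
total += (-total) % (PARITY + 1)              # round up to multiple of 2 -> 4
multiplier = total / data                      # 2.0: every byte costs two
print(f"a 2 TB zvol consumes about {2 * multiplier:.0f} TB of raw pool space")
```

So with the default 8K volblocksize every 8K of zvol data allocates 16K on the pool, and a 2 TB virtual disk can indeed eat roughly 4 TB of raw space.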
  12. Dunuin

    [SOLVED] ZFS RAID-Z2 - Zu viel Speicherplatz?

    Exactly. This is a bit tricky with ZFS. First of all, you should know that all parity disks drop out. If you have a raidz2 with 8 disks, 2 disks are for parity and 6 disks for data. So in theory you could use at most 6x 4TB = 24TB. Then the question is how you look at the size. The...
  13. Dunuin

    Noob: lvm-thin not mounted

    LVM-thin is not your usual partition. By default it should be set up as a VM/LXC storage only, and every virtual disk will create its own block device as a new LV. So it's not meant to be mounted somewhere to store other data like backups. For your 3x 2TB raidz1 you want to increase the...
  14. Dunuin

    [SOLVED] Hilfe beim erstellen eines verschlüsselten Laufwerk

    LXCs use datasets, VMs use zvols. Both are quite easy to encrypt. ZFS always passes its properties down to all child elements, including the encryption. The best approach is to create an encrypted dataset (e.g. zfs create -o encryption=aes-256-gcm -o...
  15. Dunuin

    [SOLVED] Welche Optionen zur Schonung von SSDs hat ZFS ?

    ZFS is always pretty rough on SSDs. You get quite a high write amplification because ZFS is so complex. I moved 3 DB-heavy VMs on my home server from a ZFS pool to LVM, and that now saves me around 200-300GB of writes per day. With ZFS, depending on...
  16. Dunuin

    Im ZFS-Pool (raidz1) fehlt ca. 1/3 des freien Speicherplatzes

    Yes, that will be the padding overhead. With 4 disks as raidz1 you have to raise the volblocksize from 8K to at least 32K, otherwise everything is 50% bigger because every 2 data blocks get one padding block. And volblocksize can only be set when a zvol is created, so you will have to...
  17. Dunuin

    All zfs disk degraded, is it possible to recover?

    Yes, you basically got more failed drives than the pool can handle. Maybe for the future it would make sense to... 1.) use a raidz2 if you got 6 disks, so it can handle another failing disk (and it should be less overhead because you can use a blocksize of 16K instead of 32K. If you are limited to max...
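The raidz2 remark can be sanity-checked with the same allocation arithmetic. This is a sketch assuming the OpenZFS raidz allocation rules and ashift=12 (4K sectors): with 6 disks in raidz2, a 16K volblocksize already reaches the ideal 4-data-to-2-parity ratio with zero padding, whereas the default 8K loses a third of the allocation to padding alone:

```python
import math

def efficiency(volblocksize, ashift, ndisks, parity):
    """Fraction of a raidz block allocation that is actual data."""
    data = math.ceil(volblocksize / (1 << ashift))
    par = math.ceil(data / (ndisks - parity)) * parity
    pad = -(data + par) % (parity + 1)   # round up to multiple of parity+1
    return data / (data + par + pad)

# 6-disk raidz2, ashift=12:
for vb in (8, 16):
    print(f"{vb}K: {efficiency(vb * 1024, 12, 6, 2):.1%} usable")
# 8K: 33.3% usable   (2 data, 2 parity, 2 padding sectors)
# 16K: 66.7% usable  (4 data, 2 parity, 0 padding -- the parity-only ideal)
```

That is why a 6-disk raidz2 can stop at 16K while a 5-disk raidz1 needs 32K to shed its padding overhead: the smaller volblocksize also means less read-modify-write amplification for small random I/O.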
  18. Dunuin

    how to best benchmark SSDs?

    Padding is no problem if you use the right volblocksize. My 5-disk raidz1 with 32K volblocksize should only lose 20% of raw capacity to parity and 0% to padding. A striped mirror would lose 50% to parity and so needs to write everything twice. My raidz1 doesn't need to write it twice, it only...
  19. Dunuin

    how to best benchmark SSDs?

    Yes, for write amplification it really made a difference. I had a really hard time deciding how I should set up my storage. I've got only 10 SATA ports or 9 SATA + 1 M.2 and I have no free PCIe slots to add any HBAs. For boot, root and swap I got two S3700 100GB SSDs running as a LUKS-encrypted...