Search results for query: raidz padding

  1. Dunuin

    Adding new disks to Raidz

    Since last year it has been possible to expand a raidz, but if you want the maximum possible capacity you still need to rewrite all the existing data on that pool, because otherwise the old data still uses the old data-to-parity ratio. And if you run VMs on that pool you might need to recreate...
  2. I

    VMs disk growing beyond the allocated size

    My RAIDZ2 consists of 8 disks + logs and cache: root@san01[~]# zpool status pool01 pool: pool01 state: ONLINE scan: scrub repaired 0B in 6 days 06:19:08 with 0 errors on Fri Jan 21 06:28:27 2022 config: NAME STATE READ WRITE CKSUM...
  3. Dunuin

    VMs disk growing beyond the allocated size

    What's your "zpool status pool01", i.e. how many disks does your raidz2 consist of? There is a good explanation of padding overhead and volblocksize: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz With a 4K volblocksize you will...
  4. Dunuin

    (7.1) ZFS Performance issue

    You want your pool's block size (in other words, your zvol's volblocksize) way bigger than the sector size (in other words, the ashift you have chosen) of your disks, or you will lose most of your capacity to padding overhead. You can't directly see the padding overhead because it's...
  5. Dunuin

    Create a 100% sized disk for a VM

    With a raidz you always get padding overhead when using too small a volblocksize. Using the default 8K volblocksize and ashift=12 with 3x 3TB disks in a raidz1, you basically lose 33% of your raw storage to parity and an additional 17% of your raw capacity to padding overhead. You can't directly see...
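The parity+padding numbers quoted above can be reproduced with a small sketch of the raidz allocation rule described in the Delphix post (my own illustration, not from the thread; it assumes ashift=12, i.e. 4K sectors):

```python
import math

def raidz_alloc_sectors(data_sectors: int, ndisks: int, nparity: int) -> int:
    """Sectors a raidz vdev allocates for one block (sketch of the
    allocation rule from the linked Delphix blog post)."""
    # nparity parity sectors per stripe row of (ndisks - nparity) data sectors
    rows = math.ceil(data_sectors / (ndisks - nparity))
    total = data_sectors + rows * nparity
    # raidz rounds every allocation up to a multiple of (nparity + 1)
    unit = nparity + 1
    return math.ceil(total / unit) * unit

# 3x 3TB raidz1, ashift=12 (4K sectors), 8K volblocksize -> 2 data sectors
alloc = raidz_alloc_sectors(2, ndisks=3, nparity=1)
print(alloc, f"{2 / alloc:.0%} usable")  # 4 sectors, 50% usable
```

With an 8K block the allocation is 4 sectors: 2 data, 1 parity, 1 padding, which matches the 33% parity + 17% padding = 50% loss quoted in the snippet.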
  6. Dunuin

    3x6to in RAIDZ1 and proxmox 100% full with a 4.7To copy file

    Google for 'volblocksize' and padding overhead. If you have 3x 6TB as raidz1 with volblocksize=8k and ashift=12, you only get 9TB usable capacity for zvols, of which only 7.2TB should be used, because 10-20% of a ZFS pool should always be kept free or ZFS will get slow and finally stop...
  7. Dunuin

    Question regarding LVM/ZFS

    Basically, with 4x 2TB disks in a raidz1 and an ashift of 12 you get this: 8TB raw storage, where you will lose 2TB to parity, so you only get 6TB. "zpool" always shows the raw storage (so the full 8TB even if 8TB aren't usable). The "zfs" command will always show the raw storage minus parity, so...
  8. Dunuin

    ZFS Raid 10 mirror and stripe or the opposite

    Yes, I still need to summarize that in a less confusing way in a blog post. Jep. But as you said, it's not that important if you just run the system on those SSDs. Then the SSDs are always idling anyway. The correct benchmark would be to use the same fio test with just "--bs=16K" and run that on 4...
  9. Dunuin

    ZFS pool layout and limiting zfs cache size

    Sorry, I did that in my head, so there is no spreadsheet I could share. For datasets the usable capacity is: (TotalNumberOfDrives - ParityDrives) * DiskSize * 0.8 The "* 0.8" is because 20 percent of the pool should be kept free. So an 8x 1.8TB disk raidz1 would be: (8 - 1) * 1.8TB * 0.8 = 10.08 TB And an 8x...
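That rule of thumb is easy to turn into a one-liner (a sketch of the formula quoted above; the 20%-free fraction is the poster's guideline, not a hard ZFS limit):

```python
def usable_tb(ndisks: int, nparity: int, disk_tb: float,
              free_frac: float = 0.2) -> float:
    """Usable dataset capacity: (drives - parity) * size, minus the
    fraction of the pool that should be kept free."""
    return (ndisks - nparity) * disk_tb * (1 - free_frac)

# 8x 1.8TB raidz1: (8 - 1) * 1.8 * 0.8
print(round(usable_tb(8, 1, 1.8), 2))  # -> 10.08
```

Note this applies to datasets; zvols additionally pay the padding overhead discussed elsewhere in these threads.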
  10. Dunuin

    ZFS pool layout and limiting zfs cache size

    The problem is that there are so many factors you need to take into account, because everyone has a different workload, that such a tool would be hard to use correctly. And all the numbers I calculated above are just theoretical, based on formulas for how ZFS should behave...
  11. F

    ZFS pool layout and limiting zfs cache size

    WOWZA!!!! I wish there was a tool that would offer recommended ZFS pool options, e.g. get user input: Do you prefer performance or storage? How many drives do you have? Are they SSDs or spindles? Will you be running databases or just hosting files? Are you hosting VMs or not? And then offer...
  12. Dunuin

    ZFS pool layout and limiting zfs cache size

    With just 5 drives your options would be (per layout: raw storage / parity loss / padding loss / keep 20% free / real usable space / 8K random write IOPS / 8K random read IOPS / big sequential write throughput / big sequential read throughput). 5x 800 GB raidz1 @ 8K volblocksize: 4000 GB - 800 GB - 1200 GB - 400...
  13. F

    ZFS pool layout and limiting zfs cache size

    Ah! This makes sense, no wonder I was experiencing lower read/write speeds, thanks for the explanation! OK, I will need to recreate the ZFS pool in a mirror config. So which option to choose? Mirror or RAID10? So the default 8K would work perfectly for both MySQL and Postgres? Got...
  14. Dunuin

    ZFS pool layout and limiting zfs cache size

    Raidz IOPS doesn't scale with the number of drives. So no matter if you use 5, 10 or 100 drives in a raidz, that pool isn't faster than a single drive alone...at least for IOPS. But with raidz you are forced to choose a bigger volblocksize compared to a single drive, so possibly more overhead when...
  15. F

    ZFS pool layout and limiting zfs cache size

    Sorry, I didn't understand what you mean by the IOPS being slower than a single SSD? These SSDs are 12Gbps and a bit expensive to run in a homelab, so I wanted to squeeze out as much usable storage as I possibly could. I guess I could try them in a mirror config as suggested, because I currently...
  16. Dunuin

    ZFS pool layout and limiting zfs cache size

    Keep in mind that using raidz you basically only get the IOPS performance of roughly a single SSD. So in terms of IOPS your pool will be slower than just using a single 800GB SSD. VM storage likes IOPS, so a striped mirror (raid10) would be preferable. Also keep in mind that with a raidz1 of 5...
  17. Dunuin

    Create ZFS fails on GUI with "unknown" - on commandline "raidz contains devices of different sizes"

    Volblocksize only affects zvols (so for example all virtual disks of VMs run by PVE). For datasets (what LXCs on PVE and your datastore on PBS should use) the "recordsize" is used instead. For volblocksize and padding overhead I can recommend this blog post by one of the ZFS engineers...
  18. A

    Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    With ashift=12 it's the same - when the fio param "size" > 2G with "bs"=4k, ZFS performance drops: # fio --time_based --name=benchmark --size=1800M --runtime=30 --filename=/mnt/zfs/g-fio.test --ioengine=libaio --randrepeat=0 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0...
  19. Dunuin

    Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    You created your pool "zfs-p1" with an ashift of 13, so an 8K blocksize is used. Then you have a raidz1 with 4 disks, so you want a volblocksize of at least 4 times your sector size, i.e. a 32K volblocksize (4x 8K), to only lose 33% of your raw capacity instead of 50%. So each time you do a 4K read/write to a...
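The sizing rule in this snippet can be sketched as "cover at least one full data row, rounded up to a power of two" (my own rule-of-thumb illustration, not code from the thread; larger volblocksizes reduce padding further at the cost of read/write amplification):

```python
def suggested_volblocksize(ndisks: int, nparity: int, ashift: int) -> int:
    """Smallest power-of-two volblocksize that covers one full data row
    of the raidz, keeping parity overhead near the nominal ratio."""
    sector = 1 << ashift                 # ashift=13 -> 8K sectors
    need = (ndisks - nparity) * sector   # one sector per data disk
    vbs = sector
    while vbs < need:                    # round up to a power of two,
        vbs *= 2                         # since volblocksize must be one
    return vbs

# 4-disk raidz1 at ashift=13: 3 data disks * 8K = 24K -> round up to 32K
print(suggested_volblocksize(4, 1, 13) // 1024, "K")  # -> 32 K
```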
  20. Dunuin

    Cannot allocate 2TB disk space while over 3TB is free

    Ok, so you have two pools with ashift 12. One is a raidz2 with 8 disks and one a raidz2 with 10 disks. I guess both pools use the default 8K volblocksize. tank1 will lose 67% of the raw capacity to parity+padding, so only 33% will be usable. And because a pool shouldn't be filled more than 80%...
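The 67% figure can be checked with a quick back-of-the-envelope calculation (a sketch under the snippet's assumptions: ashift=12, i.e. 4K sectors, and the default 8K volblocksize on an 8-disk raidz2):

```python
import math

data = 2                        # 8K block / 4K sectors = 2 data sectors
ndisks, nparity = 8, 2          # 8-disk raidz2
rows = math.ceil(data / (ndisks - nparity))
total = data + rows * nparity   # 2 data + 2 parity = 4 sectors
# raidz rounds each allocation up to a multiple of (nparity + 1) = 3
total = math.ceil(total / (nparity + 1)) * (nparity + 1)  # -> 6 sectors
print(f"usable fraction: {data / total:.0%}")  # -> 33%
```

Two of every six allocated sectors hold data, so parity plus padding eats the quoted 67% of raw capacity.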