Search results for query: raidz padding

  1. Dunuin

    ZFS pool size full with incorrect actual usage.

    I explained it at least 100 times in this forum. Search this forum for "padding overhead". You probably need to destroy and recreate all zvols with a bigger volblocksize so your raidz1/2/3 isn't wasting too much space because of padding overhead. If you want to understand the padding overhead...
  2. Dunuin

    Confused ...

    When using raidz there is padding overhead if zvols use too small a volblocksize. With the default volblocksize of 8K you will lose 50% of your raw capacity, so only 4 of the 8 drives' capacity will be usable. 12TB you will lose to parity, 36TB you will lose to padding overhead and only...
  3. L

    Best Two Node Setup Sharing A ZFS Pool

    Awesome, thanks for the detailed response! I think I will use VMs for the seedbox and media server, and for most of the other internet-facing services as well, to make them more isolated and to allow me to directly mount SMB/NFS. That being said, in my case the media server is sent through a...
  4. Dunuin

    About ZFS, RAIDZ1 and disk space

    That's because of padding overhead. See this article that describes it: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz With a 6 disk raidz1 using ashift=12 (ZFS will use 4K as the smallest block/sector size it can write to a...
  5. Dunuin

    Best Two Node Setup Sharing A ZFS Pool

    There is also a 4th solution. You use VMs that can directly mount NFS/SMB shares. For everything that is reachable from the internet or is very important I personally would use VMs anyway, because of the better isolation, therefore better security and fewer problems. For LXCs with just access from...
  6. I

    Adding ZFS pool with ashift=12; which block size option?

    OK, that would be sufficient in the first place. So I was trying to make everything (at least theoretically) 1:1. With an OS doing reads/writes at 4K (possibly you could change that somehow, but I don't care to go in that direction) we need the underlying storage to use 4K blocks as well. With...
  7. Dunuin

    Adding ZFS pool with ashift=12; which block size option?

    Writing from a higher to a lower blocksize is always fine, just not the other way round. You shouldn't lose any performance or capacity when using ashift 12 with a 512B/512B sector disk. If you read https://www.delphix.com/blog/delphi...or-how-i-learned-stop-worrying-and-love-raidz by now you...
  8. I

    Adding ZFS pool with ashift=12; which block size option?

    Why would I want to do that in the first place? One thing that is not debatable (at least I am under that impression) is the disk-to-ashift relationship: 512/512 = ashift 9 (2^9 = 512), and 512/4096 or 4096/4096 (not quite sure if this exists) = ashift 12 (2^12 = 4096). If...
  9. I

    Adding ZFS pool with ashift=12; which block size option?

    Thanks for the link (I'll read it later on). I think it's not when, it is why you use ashift=12. For 512e drives (4K physical -> 512B logical) you use an ashift of 12 in order to avoid padding/shifting, or however that is called. Is it <<you are using>> or do your drives use 4K sectors physically? For...
  10. Dunuin

    Adding ZFS pool with ashift=12; which block size option?

    For raidz1/2/3 this blog post basically describes it perfectly: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz And for a mirror/stripe/striped mirror you just use the formula: "sectorsize * number of striped vdevs" So when...
  11. Dunuin

    Disk Performance Probleme bei KVM Server mit ZFS

    One thing you can do, for example, is adjust the volblocksize to your workload. Especially when using raidz1/2/3 you should always do that, because otherwise you get massive padding overhead. See here for raidz1/2/3...
  12. Dunuin

    RAIDZ Calculator

    You should read up on volblocksize and padding overhead with raidz: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz In short: if you don't pick a rather high volblocksize, which the performance of DBs like MySQL and...
  13. Dunuin

    [SOLVED] RaidZ2 terrible high IO delay

    Don't choose too small a blocksize or you get a lot of padding overhead: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz And don't fill your pool more than 80% or it will get slow too. So 20% should always be kept free.
  14. Dunuin

    Newb, confused at raidz2 available space

    Read this to understand volblocksize and padding overhead: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz I guess you kept the default 8K volblocksize, and with a 4 disk raidz2 you will lose 66.6% of the raw capacity to...
  15. Dunuin

    Newbie question on ZFS - using multiple devices as a single logical unit

    The L2ARC is just a cache and can be lost without a problem. But ZFS knows more layers than shown in your pyramid. There are some vdevs like the "special" for storing metadata (and optionally small data blocks), the "dedup" for storing deduplication tables, "spare" for hot spares...and then...
  16. Dunuin

    Reduce ram requirement/usage due to having ZFS

    Also read this: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz I guess you didn't increase your volblocksize, so you are losing 50% of your raw capacity. 25% loss of the raw capacity to parity and 25% loss to padding...
  17. Dunuin

    Installation: root + swap on SSD, data on ZFS

    First install PVE just to the SSD. You can create a ZFS pool with the HDDs later using CLI or WebUI. Root just needs about 16-32GB in case you don't want to store ISOs/templates/backups there, so there might be plenty of space left. In case you don't change the defaults in the installer (see...
  18. Dunuin

    Zfs summary usage - wrong?!

    And don't forget the overhead. Especially when using raidz1/2/3 you often get a lot of padding overhead. Let's say you write 1TB to a zvol and that zvol will then need 1.66TB on the pool. That additional 0.66TB comes from the padding overhead when you use raidz1/2/3 without increasing the...
  19. Dunuin

    [SOLVED] zfs storage full but i don't understand why

    The problem should be padding overhead caused by using raidz with too small a volblocksize. Read this: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz You probably want a volblocksize of at least 8 times your ashift sector size to only...
  20. Dunuin

    VM Disk not adding up in ZFS gui

    Most of the time, when a virtual disk uses way more space than expected, it's one of these three things: 1.) You got old snapshots which prevent data from being removed or space from being freed up. Check it with zfs list -o space -r YourPoolName. If "USEDSNAP" is very high, remove old...
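Several of the results above (the 6-disk raidz1, the 4-disk raidz2, the "only 4 of 8 drives usable" case) all follow from the allocation rule described in the linked Delphix article: a raidz allocation is data plus parity, rounded up to a multiple of (parity + 1) sectors. A minimal sketch of that math in Python (the function name and structure are my own, not from any ZFS tool):

```python
import math

def raidz_allocated_sectors(data_sectors, ndisks, parity):
    """Sectors a raidz vdev consumes for one block of `data_sectors`.

    `parity` parity sectors are written per stripe row, and each row
    holds at most (ndisks - parity) data sectors.  The total is then
    padded up to a multiple of (parity + 1) so freed space is always
    reusable for the smallest possible allocation.
    """
    rows = math.ceil(data_sectors / (ndisks - parity))
    total = data_sectors + rows * parity
    multiple = parity + 1
    return math.ceil(total / multiple) * multiple

# An 8K volblocksize with ashift=12 is 2 data sectors of 4K each:
# 6-disk raidz1: 2 data + 1 parity = 3, padded to 4 sectors
#   -> only half of the consumed raw space is data
print(raidz_allocated_sectors(2, 6, 1))   # 4
# 4-disk raidz2: 2 data + 2 parity = 4, padded to 6 sectors
#   -> 66.6% of the raw space lost, as quoted above
print(raidz_allocated_sectors(2, 4, 2))   # 6
# A bigger 64K volblocksize (16 sectors) on the same raidz1:
print(raidz_allocated_sectors(16, 6, 1))  # 20 (80% is data)
```

This is why the advice above is always "increase the volblocksize": the fixed per-block parity and padding cost shrinks relative to the data as the block grows.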
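The ashift arithmetic from the "which block size option?" thread, and the mirror/stripe rule of thumb quoted there ("sectorsize * number of striped vdevs"), are simple powers of two. A sketch under those stated rules (helper names are mine, hypothetical, not ZFS commands):

```python
def sector_size(ashift):
    # ashift is the base-2 exponent of the sector size ZFS uses:
    # ashift 9 -> 512B, ashift 12 -> 4096B (4K / 512e disks)
    return 2 ** ashift

def striped_mirror_volblocksize(ashift, striped_vdevs):
    # Rule of thumb quoted above for mirror/stripe/striped-mirror
    # pools: sectorsize * number of striped vdevs
    return sector_size(ashift) * striped_vdevs

print(sector_size(9))                      # 512
print(sector_size(12))                     # 4096
print(striped_mirror_volblocksize(12, 2))  # 8192 -> 8K for two mirror vdevs
```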