Search results for query: raidz1 padding

  1. Dunuin

    Turning on ZFS compression on pool

    Raidz1 is still not a great option because you either...: 1.) use the default 8K volblocksize, where you lose 50% of your raw capacity even if you don't see it. It will show you everywhere that you got 75% of your raw capacity as usable space, but that is wrong, as everything you write to a...
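The 50% figure at the default 8K volblocksize can be reproduced with a small calculator that follows the raidz allocation rule from the Delphix "stripe width" article cited elsewhere in these results: parity sectors are added per stripe, then the whole allocation is padded up to a multiple of nparity + 1. A sketch for illustration, not actual ZFS code:

```python
import math

def raidz_alloc(volblocksize, ashift, ndisks, nparity):
    """Sectors allocated on a raidz vdev for one zvol block (sketch)."""
    sector = 1 << ashift                      # ashift=12 -> 4K sectors
    data = math.ceil(volblocksize / sector)   # data sectors per block
    # nparity parity sectors per stripe of (ndisks - nparity) data sectors
    parity = math.ceil(data / (ndisks - nparity)) * nparity
    total = data + parity
    # pad the allocation up to a multiple of (nparity + 1) sectors
    pad_to = nparity + 1
    return data, math.ceil(total / pad_to) * pad_to

# Default 8K volblocksize on a raidz1 (any width >= 3) with ashift=12:
data, total = raidz_alloc(8 * 1024, 12, 4, 1)
print(f"usable fraction: {data / total:.0%}")  # usable fraction: 50%
```

An 8K block is only 2 data sectors at ashift=12, so one parity sector plus one padding sector doubles the allocation, regardless of how many disks the raidz1 has.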
  2. Dunuin

    Turning on ZFS compression on pool

    So you are talking about ZFS as a storage for VMs/LXCs using replication between two PVE nodes, and not as a storage for a PBS datastore synced between two PBS servers? A 4 disk raidz1 is bad for MySQL as you would need to use a volblocksize of at least 32K (in case of ashift=12) to not lose a lot...
  3. Dunuin

    ZFS space consumption

    First I would run zfs list -o space to see how much of your pool is used up by snapshots and refreservation. Then you should check what your pool's ashift and the volblocksize of your zvols are: zpool get ashift datastore and zfs get volblocksize. I guess you use defaults, so your ashift is 12 and...
  4. Dunuin

    ZFS pool size full with incorrect actual usage.

    I explained it at least 100 times in this forum. Search this forum for "padding overhead". You probably need to destroy and recreate all zvols with a bigger volblocksize so your raidz1/2/3 isn't wasting too much space because of padding overhead. If you want to understand the padding overhead...
  5. Dunuin

    [SOLVED] Huge backup size when backup but small used disk on VM

    And besides non-working discard/trim, the padding overhead of a raidz1/2/3 could also increase the size when using a too low volblocksize. But padding overhead alone shouldn't be more than 200%.
  6. L

    Best Two Node Setup Sharing A ZFS Pool

    Awesome, thanks for the detailed response! I think I will use VMs for the seedbox and media server, and for most of the other internet-facing services as well, to make them more isolated and allow me to directly mount SMB/NFS. That being said, in my case the media server is sent through a...
  7. Dunuin

    About ZFS, RAIDZ1 and disk space

    That's because of padding overhead. See this article that describes it: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz With a 6 disk raidz1 using ashift=12 (ZFS will use 4K as the smallest block/sector size it can write to a...
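For the 6-disk raidz1 with ashift=12 that this snippet describes, the allocation rule from the linked article gives the usable fractions below at different volblocksizes. The function name and defaults are mine, added for illustration:

```python
import math

# Usable fraction of raw space for one zvol block on a 6-disk raidz1
# with ashift=12 (4K sectors), per the Delphix allocation rule:
# parity per stripe, then pad the total to a multiple of nparity + 1.
def usable_fraction(volblocksize_k, ndisks=6, nparity=1, sector=4096):
    data = math.ceil(volblocksize_k * 1024 / sector)
    parity = math.ceil(data / (ndisks - nparity)) * nparity
    total = data + parity
    total = math.ceil(total / (nparity + 1)) * (nparity + 1)  # padding
    return data / total

for k in (8, 16, 32, 64):
    print(f"{k:>3}K volblocksize -> {usable_fraction(k):.0%} usable")
# 8K -> 50%, 16K -> 67%, 32K -> 80%, 64K -> 80% usable
```

The ideal for 6 disks with single parity would be 5/6 ≈ 83%, so raising the volblocksize to 32K or more gets close but never all the way there.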
  8. Dunuin

    Best Two Node Setup Sharing A ZFS Pool

    There is also a 4th solution: use VMs that can directly mount NFS/SMB shares. For everything that is reachable from the internet or is very important I would personally use VMs anyway, because of the better isolation and therefore better security and fewer problems. For LXCs with just access from...
  9. I

    Adding ZFS pool with ashift=12; which block size option?

    OK, that would be sufficient in the first place. So I was trying to make everything (at least theoretically) 1:1. With an OS doing reads/writes at 4K (possibly you could change that somehow, but I don't care to go in that direction) we need the underlying storage to use 4K blocks as well. With...
  10. Dunuin

    Adding ZFS pool with ashift=12; which block size option?

    Writing from a higher to a lower blocksize is always fine, just not the other way round. You shouldn't lose any performance or capacity when using ashift 12 with a 512B/512B sector disk. If you read https://www.delphix.com/blog/delphi...or-how-i-learned-stop-worrying-and-love-raidz by now you...
  11. I

    Adding ZFS pool with ashift=12; which block size option?

    Thanks for the link (I'll read it later on). I think it's not when but why you use ashift=12. For 512e drives (4K physical -> 512B logical) you use an ashift of 12 in order to avoid padding/shifting, or whatever that is called. Is it <<you are using>> or do your drives use 4K sectors physically? For...
  12. Dunuin

    Adding ZFS pool with ashift=12; which block size option?

    For raidz1/2/3 this blog post basically describes it perfectly: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz And for a mirror/stripe/striped mirror you just use the formula: "sectorsize * number of striped vdevs" So when...
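The quoted mirror/stripe formula is simple enough to spell out. Assuming ashift=12 (4K sectors), a 4-disk striped mirror (two mirror vdevs striped) lands exactly on the 8K default:

```python
# "sectorsize * number of striped vdevs" -- the formula quoted above.
def ideal_volblocksize(ashift, striped_vdevs):
    return (1 << ashift) * striped_vdevs

# 4 disks as a striped mirror = 2 mirror vdevs striped, ashift=12:
print(ideal_volblocksize(12, 2))  # 8192 (8K)
# 6 disks = 3 striped mirror vdevs:
print(ideal_volblocksize(12, 3))  # 12288 (12K)
```

Unlike raidz, there is no padding rule here; the only goal is a block size that spreads evenly over all striped vdevs.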
  13. I

    Adding ZFS pool with ashift=12; which block size option?

    Since I am in the process of deciding the most suitable block size for my VMs as well (90% of them will be Win Servers; I also have a Raid 10 created with 4 drives), I was convinced I had to use 4K in order to avoid the padding issue. Now I noticed that the raid type also comes into the equation...
  14. Dunuin

    Disk Performance Probleme bei KVM Server mit ZFS

    One thing you can do, for example, is tune the volblocksize for your workload. Especially when using a raidz1/2/3 you should always do that, as otherwise you get massive padding overhead. See here for raidz1/2/3...
  15. Dunuin

    RAIDZ Calculator

    Padding overhead only exists with raidz1/2/3, I believe. With 4x 2TB SSDs in Raid10 (called a "striped mirror" in ZFS) you would have: (8 TB raw capacity - 4 TB parity) * 80% = 3.2 TB usable. And with 6x 2 TB SSDs in Raid10 it would be 4.8 TB. Today I would no longer use HDDs as storage for VMs/LXCs...
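The capacity arithmetic above (half the raw space goes to the mirror copies, then 80% of the rest, i.e. keeping 20% free) is easy to check. The function name and the 20%-free default are mine, matching the post's assumptions:

```python
# Usable space of a ZFS striped mirror (RAID10), per the estimate above:
# half of raw capacity goes to mirror copies, and 20% is kept free.
def usable_raid10_tb(n_disks, disk_tb, keep_free=0.20):
    raw_tb = n_disks * disk_tb
    after_mirror = raw_tb / 2       # every block is stored twice
    return after_mirror * (1 - keep_free)

print(f"{usable_raid10_tb(4, 2):.1f} TB")  # 3.2 TB
print(f"{usable_raid10_tb(6, 2):.1f} TB")  # 4.8 TB
```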
  16. Dunuin

    RAIDZ Calculator

    You should read up on volblocksize and padding overhead with raidz: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz In short: if you don't use a fairly high volblocksize, which hurts the performance of DBs like MySQL and...
  17. Dunuin

    Installation: root + swap on SSD, data on ZFS

    First install PVE just to the SSD. You can create a ZFS pool with the HDDs later using CLI or WebUI. Root just needs about 16-32GB in case you don't want to store ISOs/templates/backups there, so there might be plenty of space left. In case you don't change the defaults in the installer (see...
  18. Dunuin

    Zfs summary usage - wrong?!

    And don't forget the overhead. Especially when using a raidz1/2/3 you often get a lot of padding overhead. Let's say you write 1 TB to a zvol; that zvol will then need 1.66 TB on the pool. That additional 0.66 TB comes from the padding overhead when you use raidz1/2/3 without increasing the...
  19. Dunuin

    VM Disk not adding up in ZFS gui

    With a 5 disk raidz1 you usually want a volblocksize of 32K to only lose 20% of your raw storage (to parity/padding). With a volblocksize of 16K you will lose 33%, and with the default volblocksize of 8K even 50% of your raw capacity. Padding overhead only affects block devices and not file...
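The three percentages quoted for a 5-disk raidz1 (ashift=12) follow from the same allocation rule described in the Delphix article; this sketch reproduces them (illustrative code, not ZFS internals):

```python
import math

# Fraction of raw space lost to parity + padding for one zvol block
# on a 5-disk raidz1 with ashift=12 (4K sectors).
def lost_fraction(volblocksize, ndisks=5, nparity=1, sector=4096):
    data = math.ceil(volblocksize / sector)
    parity = math.ceil(data / (ndisks - nparity)) * nparity
    total = math.ceil((data + parity) / (nparity + 1)) * (nparity + 1)
    return 1 - data / total

print(f"32K -> {lost_fraction(32 * 1024):.0%} lost")  # 32K -> 20% lost
print(f"16K -> {lost_fraction(16 * 1024):.0%} lost")  # 16K -> 33% lost
print(f" 8K -> {lost_fraction(8 * 1024):.0%} lost")   #  8K -> 50% lost
```

At 32K, a block is 8 data sectors plus 2 parity sectors, already a multiple of 2, so no padding is needed and only the 20% parity share is lost.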
  20. Dunuin

    VM Disk not adding up in ZFS gui

    Most of the time when a virtual disk uses way more space than expected, it's one of these three things: 1.) You got old snapshots which prevent data from being removed or space from being freed up. Check it with zfs list -o space -r YourPoolName. If "USEDSNAP" is very high, remove old...