Search results for query: raidz padding

  1. Dunuin

    [SOLVED] ZFS RAID-Z2 - Zu viel Speicherplatz?

    Exactly. That is a bit tricky with ZFS. First of all, you should know that all parity disks drop out of the usable capacity. If you have raidz2 with 8 disks, 2 disks go to parity and 6 disks hold data, so in theory you could use at most 6x 4TB = 24TB. Then the question is how you are measuring the size. The...
  2. Dunuin

    Noob: lvm-thin not mounted

    LVM-thin is not your usual partition. By default it should be set up as a VM/LXC storage only, and every virtual disk will create its own blockdevice as a new LV. So it's not meant to be mounted somewhere to store other data like backups. For your 3x 2TB raidz1 you want to increase the...
  3. Dunuin

    [SOLVED] Single VM volume filling 100% of ZFS pool

    You can look at the table I already linked above. It shows the parity+padding losses for all drive and volblocksize combinations. If you really want to calculate the padding overhead yourself, here is a great blog post explaining it.
  4. Dunuin

    [SOLVED] Issue with adding vm to zpool

    What's your output of zpool status and zfs list? I would guess you are using raidz, didn't change the volblocksize, so you get a lot of padding overhead, and you are trying to create a big zvol that is bigger than your pool because of that padding overhead.
  5. Dunuin

    Tuning ZFS 4+2 RAIDZ2 parameters to avoid size multiplication

    That's not correct. Padding blocks are only added to fill the allocation up to a multiple of parity + 1. So for raidz2 it has to be 3/6/9/12/... blocks. With a 4K volblocksize zvol, each write will always result in 1x 4K of data + 2x 4K of parity and no padding, no matter what the number of drives is...
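The allocation rule described in this post can be sketched in a few lines. This is a simplified model (not the actual OpenZFS allocator, and `raidz_overhead` is a hypothetical helper name): stripe the data sectors across the data-bearing disks, add `parity` parity sectors per stripe row, then pad the total up to a multiple of parity + 1.

```python
import math

def raidz_overhead(ndisks, parity, volblocksize, ashift=12):
    """Estimate (parity, padding) sectors for one zvol block on raidz.

    Simplified model of the rule from the post above: each stripe row
    gets `parity` parity sectors, and the whole allocation is padded
    up to a multiple of parity + 1 sectors.
    """
    sector = 2 ** ashift
    data = volblocksize // sector               # data sectors per block
    rows = math.ceil(data / (ndisks - parity))  # stripe rows needed
    par = rows * parity                         # parity sectors
    total = data + par
    alloc = math.ceil(total / (parity + 1)) * (parity + 1)  # pad up
    return par, alloc - total

# raidz2 with a 4K volblocksize: 1 data + 2 parity = 3 sectors, already
# a multiple of 3, so padding stays 0 regardless of the drive count.
for n in (4, 6, 8, 10):
    print(n, raidz_overhead(n, parity=2, volblocksize=4096))
```

The same function reproduces the other numbers quoted in these threads, e.g. a 5-disk raidz1 with a 32K volblocksize allocates 8 data + 2 parity + 0 padding sectors (20% parity, 0% padding).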
  6. Dunuin

    how to best benchmark SSDs?

    Padding is no problem if you use the right volblocksize. My 5-disk raidz1 with a 32K volblocksize should only lose 20% of raw capacity to parity and 0% to padding. A striped mirror would lose 50% to parity and so needs to write everything twice. My raidz1 doesn't need to write it twice, it only...
  7. C

    how to best benchmark SSDs?

    Interesting that mirror has higher amplification, especially when taking into account the padding on zvols with small blocks on raidz. As for the gap to LVM-thin, I wonder if the impact of things like writing checksums and other ZFS-only metadata is quite big with a 4K ashift.
  8. Dunuin

    zfs used size vs volsize

    You won't only lose the redundancy, you will also lose features like bit rot protection, which can't work without some form of parity. With a stripe your ZFS can still detect data corruption and tell you "your data got corrupted", but it can't repair it anymore because there is no parity. With...
  9. Dunuin

    zfs used size vs volsize

    If you use a volblocksize of 8K with raidz1 you will lose 25% of your raw capacity to parity and 25% of your raw capacity to padding overhead (in other words, for every 2 blocks of data you get 1 block of padding, so everything is 50% bigger). If you want to minimize padding overhead you need to...
  10. Dunuin

    Missing Storage in ZFS pool?

    If you want to use ZFS you need to learn how it works first. Just using default values will often result in bad performance, lost capacity or even total data loss. That everything will be 33% bigger (so you are wasting an additional 17% of raw capacity) is totally expected, and you can read here...
  11. Dunuin

    ZFS striped/mirror - volblocksize

    I'm doing a lot of benchmarks right now. I already did the 4-disk striped mirror benchmarks. I would go with an 8K volblocksize (or a 16K volblocksize if you plan to extend the pool later to 6 or 8 SSDs). Write and read amplification gets really terrible as soon as you try to write/read something that...
  12. F

    ZFS striped/mirror - volblocksize

    Hi, I'm new to Proxmox VE and need your advice about "ZFS striped/mirror - volblocksize". I already read a lot in this forum about guest blocksize and volblocksize, comparing performance, IOPS, write amplification, and padding/parity overhead. But it's all related to RAIDZ setups; I will use 1 SSD for Proxmox VE...
  13. Dunuin

    ZFS unable to import on boot/unable to add VD on empty zfs pool...

    You don't use ZFS just because you want raid. ZFS should be used if you want raid and want to be sure that the data won't corrupt, and because you want CoW, a self-healing filesystem, compression at the block level, deduplication, replication, snapshots and so on. All stuff that a HW raid controller...
  14. H

    z_wr_iss high CPU usage and high CPU load

    Ok, thanks for your help. Will this also adjust if I just migrate the machine, rather than doing a backup/restore?
  15. Dunuin

    z_wr_iss high CPU usage and high CPU load

    For striped pools you want the blocksize of your pool (so 4K if an ashift of 12 is used) multiplied by the number of data-bearing disks. So if you have two mirrors of 2 SSDs each striped together using ashift=12, that is 2 * 4K = 8K. For raidz it is more complex. Here is a table that shows...
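The rule for striped/mirrored pools is simple enough to write down directly. A minimal sketch, assuming the rule quoted above (`stripe_volblocksize` is a made-up helper name, not a ZFS tool):

```python
def stripe_volblocksize(ashift, data_bearing_disks):
    """Suggested volblocksize for a striped/mirrored pool, per the rule
    in the post above: pool sector size (2^ashift) times the number of
    data-bearing top-level vdevs."""
    return (2 ** ashift) * data_bearing_disks

# Two 2-way mirrors striped together (4 SSDs, ashift=12):
# 2 data-bearing vdevs * 4K sectors = 8K
print(stripe_volblocksize(12, 2))  # 8192
```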
  16. H

    z_wr_iss high CPU usage and high CPU load

    No, I have not. What do you suggest for a combination of Windows/Linux VMs on SAMSUNG MZQLB1T9HAJR-00007 drives with RAIDZ (3 drives)? Also the same, but for RAID10 (4 drives)?
  17. Dunuin

    z_wr_iss high CPU usage and high CPU load

    That's normal. Raid10 just writes stuff to multiple disks without any big computations. For raidz it is way more complex: you need to compute parity data and so on, so a raidz will always be more CPU heavy. Did you optimize your volblocksize? If you are just using the default 8K volblocksize...
  18. Dunuin

    Terrible Raid-Z2 Performance on Sequential Writes

    Did you change the volblocksize? The default is 8K for zvols and that is very bad for any raidz2. Look here for a table that shows padding+parity overhead vs volblocksize. Here is the blog post explaining how raidz2 works in detail at the block level. In short: for a raidz2 of 6 drives using a 4K blocksize...
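Why the default 8K is so bad for this layout can be shown with a small calculation. This is a sketch of the allocation rule described in these threads (pad each block's allocation up to a multiple of parity + 1), not the real OpenZFS code, and `raidz_alloc` is a hypothetical helper:

```python
import math

def raidz_alloc(ndisks, parity, volblocksize, ashift=12):
    # Sectors allocated for one zvol block: data + per-row parity,
    # padded up to a multiple of parity + 1.
    data = volblocksize // 2 ** ashift
    par = math.ceil(data / (ndisks - parity)) * parity
    total = data + par
    return math.ceil(total / (parity + 1)) * (parity + 1)

# raidz2 of 6 drives, ashift=12: total allocation vs logical size.
# With 8K the factor is 3.0x (2 data + 2 parity + 2 padding sectors);
# a larger volblocksize brings it down to the ideal 1.5x (6/4).
for vbs in (8192, 16384, 65536):
    data = vbs // 4096
    print(f"{vbs // 1024}K: {raidz_alloc(6, 2, vbs) / data:.2f}x")
```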
  19. Dunuin

    ashift, volblocksize, clustersize, blocksize

    That's why I see at least one person every week starting a new thread like "my storage is too small". They don't get that just creating VMs without optimizing the volblocksize first is, most of the time, wasting TBs or dozens of TBs of storage space. Especially if some kind of raidz is used, where...
  20. Dunuin

    ZFS-2 Showing total usable storage at half of what it should be

    That's the padding overhead. When using raidz1 with 4 drives, ashift=12 and a volblocksize of 8K, you will lose 25% of the raw capacity to parity and another 25% of the raw capacity to padding. So ZFS will tell you that you have 1.5TB of usable capacity, but everything is 150% in size again, so after using...
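The 25%/25% split for this exact layout can be checked against the padding rule quoted in these threads. A minimal sketch under that model (`raidz1_fractions` is a made-up name; this is not the OpenZFS allocator):

```python
import math

def raidz1_fractions(ndisks, volblocksize, ashift=12):
    # Fractions of one block's raidz1 allocation spent on
    # data / parity / padding, per the rule from the posts above.
    data = volblocksize // 2 ** ashift
    parity = math.ceil(data / (ndisks - 1))   # 1 parity sector per row
    total = data + parity
    alloc = math.ceil(total / 2) * 2          # raidz1 pads to multiple of 2
    return data / alloc, parity / alloc, (alloc - total) / alloc

# 4 drives, ashift=12, 8K volblocksize:
# 2 data + 1 parity + 1 padding sectors -> 50% / 25% / 25%
print(raidz1_fractions(4, 8192))  # (0.5, 0.25, 0.25)
```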