Search results for query: raidz padding

  1. Dunuin

    ZFS newbie question

    VMs don't use filesystems. VMs use zvols = block devices. LXCs use datasets = filesystems. With zvols on raidz1/2/3 you get padding overhead when the volblocksize is too low. Datasets use a dynamic recordsize. So it matters what you store on it. There is an explanation of the padding...
  2. Dunuin

    ZFS with three drives

    Run zpool status to see how the pool is organized. If you see a "raidz-0" or "raidz1-0" then it is a raidz1. It would be totally normal for zpool list to show 5.45T for a 3x 2TB disk raidz1. The zpool command shows the raw capacity (so data+parity). The zfs command shows the capacity usable for...
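The 5.45T figure follows from the TB-vs-TiB unit difference: drive vendors label disks in decimal TB while `zpool list` reports raw capacity (data + parity) in binary units. A quick sketch of the arithmetic:

```python
# zpool list shows raw capacity (data + parity) in binary units (TiB),
# while a "2TB" drive holds 2e12 decimal bytes. For a 3x 2TB raidz1:
disks = 3
drive_bytes = 2 * 10**12            # one "2TB" drive
raw_tib = disks * drive_bytes / 2**40

print(f"{raw_tib:.2f} TiB")         # close to the 5.45T that zpool list shows
```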
  3. R

    Proxmox VE PreInstall Sanity Check

    That link seems to be broken but I did some research into the topic. What would be a good volblocksize for our application? Is there any downside to going larger and larger? The 8TB I placed above was the total raw capacity, so there are only 8x 1TB drives in that server. Same with the others. Only...
  4. Dunuin

    Proxmox VE PreInstall Sanity Check

    Did you read about padding overhead when using any raidz? You will most likely need to increase the volblocksize to minimize the padding overhead stealing capacity, and this will make small random reads and all small writes very slow. So not great for running stuff like postgresql or MySQL DBs...
  5. M

    ZFS share

    Thank you for sharing this Neobin. Although it was still not enough information, that answer suggests it's something called "padding overhead"; no confirmation, no digging. It seems that with ZFS in RaidZ you will lose space not only to the parity data but also to storage overhead (another 20%...
  6. Dunuin

    Replacing disks with larger ones...

    You can't turn a Raid0 or a mirror into a raidz1. For that you would again have to destroy the pool first and rebuild it. What would be possible, if you want to keep a raid0 but make it bigger, is turning the Raid0 into a striped mirror (i.e. raid10). Although you would then need two...
  7. Dunuin

    can not add hard disk: out of space(500)

    There is padding overhead. Of your 32TB of raw capacity you lose 25% because of parity data. Of the remaining 24TB you lose 33% because of padding overhead (when using 4 disk raidz1 with ashift=12 and default 8K volblocksize) so only 16TB left. And a ZFS pool should always have 20% of free...
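The 33% padding figure can be reproduced with the allocation rule from the Delphix article linked elsewhere in these results: a raidz allocation (data plus parity sectors) is padded up to a multiple of parity+1 sectors. A rough sketch, assuming ashift=12 (4K sectors):

```python
import math

def raidz_alloc_sectors(data_sectors, ndisks, parity):
    """Sectors a raidz vdev allocates for one block: data + one parity
    sector per stripe row, padded up to a multiple of (parity + 1)."""
    rows = math.ceil(data_sectors / (ndisks - parity))
    total = data_sectors + rows * parity
    pad_unit = parity + 1
    return math.ceil(total / pad_unit) * pad_unit

# 4-disk raidz1, ashift=12 (4K sectors), 8K volblocksize -> 2 data sectors.
alloc = raidz_alloc_sectors(2, ndisks=4, parity=1)  # 2 data + 1 parity -> padded to 4
efficiency = 2 / alloc                              # fraction of raw sectors holding data
print(alloc, efficiency)                            # 4 sectors (16K raw), 0.5
# 32TB raw * 50% = 16TB usable, matching "25% parity, then 33% of the
# remaining 24TB lost to padding" in the post above.
```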
  8. Dunuin

    ZFS Disk config help - New to proxmox from vmware

    Best would be to add 2 small mirrored SSDs as "special metadata devices". That way the HDDs would only need to store data and all metadata would be stored on the small SSDs. This would increase IOPS for reads + async writes + sync writes. I think it made my HDDs about 2-3 times faster, because the HDDs...
  9. Dunuin

    HowTo: Proxmox VE 7 With Software RAID-1

    You need to create different datasets and add them as different ZFS storages. One storage for each different volblocksize you want to use. Otherwise PVE will use the wrong volblocksize when doing a backup restore or a migration between nodes. For my 5 disk raidz1 with an ashift of 12 it's 32K...
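The 32K figure for a 5-disk raidz1 with ashift=12 can be checked by scanning candidate volblocksizes against the raidz1 padding rule (pad each allocation to a multiple of 2 sectors). A sketch, not an authoritative calculator:

```python
import math

def raidz1_efficiency(volblocksize, ndisks, ashift=12):
    """Fraction of allocated raw sectors that actually hold data (raidz1)."""
    sector = 1 << ashift
    data = volblocksize // sector
    rows = math.ceil(data / (ndisks - 1))
    alloc = data + rows                  # one parity sector per stripe row
    alloc = math.ceil(alloc / 2) * 2     # pad to a multiple of parity+1 = 2
    return data / alloc

for vbs in (8192, 16384, 32768, 65536):
    print(f"{vbs // 1024:>3}K: {raidz1_efficiency(vbs, ndisks=5):.0%}")
# 8K stays at 50%, while 32K already reaches the ideal 4/5 = 80%
# for a 5-disk raidz1, i.e. no padding waste beyond parity.
```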
  10. Dunuin

    ZFS + PVE Disk usage makes no sense

    I can recommend this article on why there is padding overhead and how to calculate the optimal volblocksize: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz
  11. T

    Where did my disk space go?

    "1.) check that you don't use raidz1, raidz2 or raidz3 with a too low volblocksize. Using raidz1/2/3 with the default 8k volblocksize will always waste a lot of space when using zvols (or using VMs) because of padding overhead. Use the forum's search function for more information, I explained that...
  12. Dunuin

    Choosing ZFS volblocksize for a container's storage: Same logic as for VMs?

    You can change the recordsize to optimize it for a specific workload. For example a 16k recordsize should be great for a dataset storing a mysql DB that only writes 16k blocks. Especially when using deduplication, where deduplication with 16k records should be more efficient than with way bigger...
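Setting recordsize per dataset is a one-liner; note it only affects records written after the change, and the dataset name below is a hypothetical placeholder:

```shell
# rpool/data/mysql is a hypothetical dataset name; existing data keeps
# its old record layout, only newly written records use the new size.
zfs set recordsize=16K rpool/data/mysql
zfs get recordsize rpool/data/mysql
```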
  13. Dunuin

    Which NAS OS for Proxmox?

    I don't fully trust disk passthrough anymore after a few problems. Virtual disks are an option, but through the GUI PVE only supports HW raid or ZFS. With ZFS, don't forget that with raidz you also lose capacity to padding overhead if you don't raise the volblocksize. Then...
  14. Dunuin

    Need to recover a file system on a QEMU harddisk stored on a ZFS pool

    Problem is padding overhead when using zvols on raidz. See here to learn how to calculate the usable size and how to minimize the capacity loss: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz Basically with an 8 disk raidz1 with...
  15. Dunuin

    Differing storage capacity readings

    It always depends on how you look at the capacity and which command you use. The "zpool" command, for example, always shows the raw capacity (including the capacity used for parity), so just under 6TB. The "zfs" command shows the "usable" capacity with the parity capacity subtracted, so...
  16. Dunuin

    raidz out of space

    That's because of padding overhead. You need to increase the volblocksize from 8K to something like 32K. Otherwise you will lose 50% of your raw capacity (20% parity overhead loss + 30% padding overhead loss). In other words...everything written to a zvol will consume 160% space. You can change...
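The 160% figure can be reconstructed for a 5-disk raidz1 (20% parity) at the default 8K volblocksize, assuming ashift=12; this is a sketch of the accounting, not pool output:

```python
# 5-disk raidz1, ashift=12: an 8K block = 2 data sectors + 1 parity sector,
# padded to 4 sectors = 16K raw. zfs accounting subtracts the ideal parity
# share (1/5), so the 16K raw is charged as 16K * 4/5 = 12.8K "usable" space.
raw_alloc_kib = 16
usable_charged_kib = raw_alloc_kib * 4 / 5   # 12.8 KiB charged for 8 KiB of data
print(usable_charged_kib / 8)                # 1.6 -> every write consumes 160%
```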
  17. Dunuin

    [ZFS] Pool not showing in PVE

    With a raidz1 your IOPS performance will only be as fast as the single slowest disk. Your A400s are horribly slow as soon as the cache gets full. So your whole pool of 3 disks wouldn't be faster than a single A400 when it comes to IOPS performance. For throughput the performance will...
  18. Dunuin

    ZFS space consumption

    First I would run zfs list -o space to see how much of your pool is used up by snapshots and refreservation. Then you should check what your pool's ashift and the volblocksizes of your zvols are: zpool get ashift datastore and zfs get volblocksize. I guess you use defaults, so your ashift is 12 and...
  19. leesteken

    ZFS space consumption

    This post might be relevant to your raidz-1 pool. @Dunuin is celebrated on this forum for lots of testing and research on ZFS padding (and write amplification and performance).
  20. Dunuin

    ZFS pool size full with incorrect actual usage.

    That's normal. Without manually optimizing stuff like the volblocksize, a raidz pool won't give you more space than a striped mirror (raid10). On paper it will look like you got more space, but all the additional space is wasted because of padding overhead. Then everything will be...