Search results for query: raidz1 padding

  1. J

    [SOLVED] ZFS pool almost full despite reducing zvol size

    I checked out the OpenZFS documentation to reaffirm what you are saying and I think I understand it better now, and will likely use this chart over the PVE GUI in the future: https://openzfs.github.io/openzfs-docs/Basic%20Concepts/RAIDZ.html. I wrongly assumed that going from 80TB to 60TB...
  2. leesteken

    [SOLVED] ZFS pool almost full despite reducing zvol size

    A 4-drive RAIDz1 only has about 50% usable space because of the volblocksize mismatch. As discussed before: RAIDz1 is not like hardware RAID5. There is a lot of padding (and write amplification, which also makes it slow for running VMs) and ZFS does not show the expected usable size (when...
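    The ~50% figure for a 4-drive RAIDz1 falls out of how ZFS allocates zvol blocks: data sectors plus parity sectors, padded up to a multiple of (parity + 1). A rough sketch of that arithmetic, assuming 4K sectors (ashift=12); the function name and signature are ours, not a ZFS API:

    ```python
    import math

    def raidz_usable_fraction(ndisks, parity, volblocksize, ashift=12):
        """Approximate fraction of an allocation that is actual data for one
        zvol block on a RAIDZ vdev (sketch of the allocation rules: data
        sectors + parity per stripe, padded to a multiple of parity + 1)."""
        sector = 1 << ashift
        data = math.ceil(volblocksize / sector)        # data sectors needed
        stripes = math.ceil(data / (ndisks - parity))  # stripes the block spans
        total = data + parity * stripes                # add parity sectors
        total += -total % (parity + 1)                 # pad to multiple of p+1
        return data / total

    # 4-disk raidz1 with an 8K volblocksize: 2 data + 1 parity + 1 pad sector
    print(f"{raidz_usable_fraction(4, 1, 8 * 1024):.0%}")   # -> 50%
    # Doubling the volblocksize to 16K already recovers some space
    print(f"{raidz_usable_fraction(4, 1, 16 * 1024):.0%}")  # -> 67%
    ```

    With small volblocksizes the padding and parity dominate, which is why the pool fills far faster than the nominal "3 of 4 disks usable" suggests.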
  3. leesteken

    [PVE] High I/O delay when transferring data

    Those drives appear to be Red Plus, which at least do not use SMR. For best performance use (second-hand) enterprise SSDs with PLP. How did you use those old drives with ESXi? If you were happy with the performance, maybe do that again. A ZFS stripe of two mirrors (which is like RAID10) will...
  4. N

    [SOLVED] Installation raidz-1

    Which ZFS RAID would you recommend for VM storage? Thanks to all for the replies!!! All the best from Serbia
  5. leesteken

    [SOLVED] Installation raidz-1

    It definitely looks like a RAIDz1, which is not exactly the same as (hardware) RAID5. It probably shows all space (including the parity) now, but each file and virtual disk will take more space than you might expect (due to padding and parity). Please be aware that RAIDz1 is terrible for running...
  6. leesteken

    IO Delay

    Change the RAIDz1 to a stripe of two mirrors (as in RAID10, which I suggested in my first reply)? RAIDz1 has a lot of padding (as also discussed before on this forum), so you might not even lose much space. Or use your hardware RAID controller instead of ZFS.
  7. leesteken

    ZFS on ZFS

    You'll have write amplification on top of write amplification. And RAIDz1 has padding overhead with so few drives, and you'll probably have a mismatching volblocksize, losing a lot of space. Maybe run the VM on LVM instead? Or run the software in a container (if it is based on Linux)? Or simply...
  8. I

    Windows VM Trimming

    That's fine. The setup is an experiment. Once I'm happy with it, I plan on changing them out for Microns, which do have PLP and handle the load better. But thanks for the advice :)
  9. Dunuin

    Windows VM Trimming

    Padding overhead only affects zvols, as only zvols have a volblocksize. LXCs and PBS use datasets, which use the recordsize instead (and therefore have no padding overhead). One of the many reasons why you usually don't want to use a raidz1/2/3 for storing VMs but a striped mirror instead...
  10. I

    Windows VM Trimming

    Ok, that's interesting. What would you recommend I do to resolve the issue? I don't want a massive performance loss, but I assume I need to tune it to reduce the wasted space?
  11. Dunuin

    Windows VM Trimming

    Very hard to read those tables if you don't put them between CODE tags. Are you sure it's a 128K volblocksize and not just a 128K recordsize? You could check that with zfs get volblocksize,recordsize. Because 16K (previously 8K) volblocksize and 128K recordsize would be the defaults. With a 7-disk...
  12. I

    Windows VM Trimming

    I just noticed the block size on zfs is 128K, we use ReFS with 64K block size. Would that difference be what's causing a lot of wasted space?
  13. I

    Windows VM Trimming

    I have added the output below. Unsure what to look for in this regard @Dunadan? root@pm2:~# zpool list -v NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT rpool 135G 3.08G 132G -...
  14. Dunuin

    Windows VM Trimming

    Maybe you are using a raidz1/2/3 with default volblocksize and it is "padding overhead"? Output of zpool list -v and zfs list -o space would give some hints.
  15. leesteken

    PVE disk distribution (partitioning) and RAIDZ1 space confusion

    Just a few pointers (but I'm not an expert), since you did not get any answers yet: WD Red drives sometimes use SMR (unless they are WD Red Pro), which is not suitable for ZFS. RAIDz1 is only similar to (hardware) RAID5 in the sense that it can cope with losing one drive. ZFS is slower as it has...
  16. leesteken

    I/O and Memory issues, losing my mind...

    Before the edit, I warned about RAIDz1 and you showed a RAIDZ1, so I expected you would put it together. It's only really similar in the sense that RAID5 and RAIDz1 can lose one drive without data loss. Each write is striped in pieces but there is a lot of padding (due to small volblocksize)...
  17. I

    RAIDz1 block size to maximize usable space?

    I'm trying to store a 14TB vdisk (VMware conversion) on a 9 x 2.4TB RAIDz1 pool. However, anytime I try to import the disk, it gets to about 90% and errors out because it says I'm out of space. I've been reading that this is a ZFS padding issue. So my question is, how can I configure this pool...
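    For a wide RAIDz1 like this 9-disk pool, the usual lever is a larger volblocksize on the zvol. A back-of-the-envelope sketch of how much of each allocation is actual data (assuming raidz1 with 4K sectors, i.e. ashift=12; the helper is our illustration, not a ZFS API):

    ```python
    import math

    def raidz1_usable(ndisks, volblocksize, ashift=12):
        # Sketch of raidz1 allocation: data sectors, plus one parity sector
        # per stripe, padded so the allocation is a multiple of 2 (parity + 1).
        sector = 1 << ashift
        data = math.ceil(volblocksize / sector)
        total = data + math.ceil(data / (ndisks - 1))
        total += total % 2
        return data / total

    for vbs_k in (8, 16, 64, 128):
        frac = raidz1_usable(9, vbs_k * 1024)
        print(f"{vbs_k:>4}K volblocksize -> {frac:.1%} of allocation is data")
    ```

    On 9 disks, small volblocksizes waste roughly a third to half of the allocation on parity and padding, while 64K and up approach the ideal 8/9 (~88.9%), which is why a too-small volblocksize can make a 14TB disk image overflow a nominally much larger pool.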
  18. Dunuin

    Best disk setup for config

    Once you decided what storage to use you will have to learn how it works and how to administrate it and have a good backup strategy and disaster recovery plan. If not you will probably lose or at least risk your data sooner or later. There is for example no way to replace a failed ZFS disk via...
  19. Dunuin

    ZFS + Raid: Aufteilung NVMe SSD's

    Yes, that sounds like a workable compromise. You'll have to benchmark it. LZ4 usually helps more than it hurts, though with NVMe SSDs it can be a borderline case. There you often already forgo data caching in RAM, for example, because the NVMes are fast enough. With the 4-disk...
  20. Dunuin

    Recommendations - Proxmox Workstation

    Why separate them? More disks mean better performance for everything. Or are you passing the NVMe through to prevent ZFS/virtualization overhead? Raidz won't be great for IOPS performance in case your workload cares about such things, and comes with some additional limitations (padding overhead, you...