Search results for query: raidz1 padding

  1. K

    Slow 4k performance in a Windows 11 VM

    I would advise against switching to volblocksize=4k. With RAIDZ1 at ashift 12 (the default), you would only be able to use 50% of the capacity. RAIDZ is tricky and somewhat complicated to understand; this is where padding comes into play. With a 16K volblocksize you can use the full 66% of capacity...
  2. leesteken

    Best RAID for ZFS in Small Cluster?

    I thought the other thread explained that (stripes of) mirrors have better read performance and never worse redundancy. If you want more redundancy and more (read) performance, then a 4-way mirror has quadruple the read performance and triple the redundancy. RAIDz1 with four drives has a lot of...
  3. A

    Best RAID for ZFS in Small Cluster?

    I get that it's not great for VMs, but the IT department here used to have the servers on bare metal with no redundancy for the data. I went with Proxmox and RAIDZ1 because of redundancy. I can do hardware RAID, as all the hosts are able to, but went with software (Proxmox) RAID out of convenience of doing...
  4. leesteken

    Best RAID for ZFS in Small Cluster?

    RAID5 (with BBU) is almost, but not quite, entirely unlike RAIDz1 (with drives without PLP). RAIDz1/2/3 is not good for VMs but can give you more usable storage (but padding can be terrible for space): https://forum.proxmox.com/threads/fabu-can-i-use-zfs-raidz-for-my-vms.159923/#post-734639
  5. I

    Win11 disk: deleted areas are not being released

    If you restarted Proxmox or want to back up the VM in stop mode, 16TB also takes an enormously long time, even if nothing in the data has changed and you transfer nothing, simply because 16TB has to be read first. Much more important, IMHO, are the pool geometry and padding. Not...
  6. UdoB

    ZFS storage is very full, we would like to increase the space but...

    Yeah, that's no secret :-) Unfortunately this does not mean that I am a ZFS expert..., there are too many internal details I do not understand. Well, the most important point is that there is no automatic re-balancing. Already stored data will stay where it is. (For datasets there is "man...
  7. I

    ZFS storage is very full, we would like to increase the space but...

    That is correct, it is an oversimplification. But it does not get that much better if the data actually is compressible. What if a 16k block can be compressed to 4k? Then you have one parity & one data = 50% efficiency. Congrats, you are now down to mirror efficiency. What if it can be...
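The 50% figure in this snippet can be checked with a quick sketch (my own illustration, not from the thread; it assumes a 4-disk RAIDZ1 with ashift=12, i.e. 4 KiB sectors, and the standard RAIDZ1 allocation rule: parity = ceil(data sectors / (disks - 1)), with the total padded up to an even sector count):

```python
import math

def raidz1_alloc(data_sectors, ndisks=4):
    """Sectors a RAIDZ1 vdev allocates for one block: data plus
    parity, padded up to an even count so freed space can always
    be reused for a minimal (1 data + 1 parity) allocation."""
    parity = math.ceil(data_sectors / (ndisks - 1))
    total = data_sectors + parity
    return total if total % 2 == 0 else total + 1

# A 16k block (4 sectors) that compresses down to 4k (1 sector):
print(1 / raidz1_alloc(1))   # 1 data / (1 data + 1 parity) = 0.5
# ...versus the same block stored uncompressed:
print(4 / raidz1_alloc(4))   # 4 / (4 data + 2 parity) ≈ 0.667
```

So a compressible zvol on RAIDZ1 can indeed end up at mirror-level space efficiency, exactly as the post argues.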
  8. M

    ZFS storage is very full, we would like to increase the space but...

    Your clarifications are helpful, and I agree on several important points: We are talking about zvols, not datasets. Zvols have a static volblocksize, and unlike datasets they cannot coalesce multiple logical blocks into a single larger record. TXG batching does not change the on-disk RAIDZ...
  9. M

    ZFS storage is very full, we would like to increase the space but...

    Sorry, but IMHO the article you cite is mostly incorrect or at least misleading in practice: The article is irrelevant because its calculations are based on unrealistic assumptions, treating every write as a tiny 4 kB block and ignoring how ZFS actually combines multiple blocks into stripes...
  10. I

    ZFS storage is very full, we would like to increase the space but...

    Sorry, I misunderstood. In that case you can leave it. Sure. So Proxmox uses the good default of 16k volblocksize. That means all your VMs' ZFS raw disks are offered 16k blocks. Now let's look at how ZFS provides 16k blocks. You have RAIDZ1. The sector size of your SSDs is 4k. So each 16k...
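The geometry this post starts to describe can be tabulated with a short sketch (my own illustration, assuming ashift=12, i.e. 4 KiB sectors; the allocation rule - parity = ceil(data sectors / (disks - parity level)), total rounded up to a multiple of parity level + 1 - is how RAIDZ accounts for space):

```python
import math

def raidz_alloc(data_sectors, ndisks, nparity=1):
    """Total sectors RAIDZ allocates for one block
    (data + parity + padding)."""
    parity = math.ceil(data_sectors / (ndisks - nparity))
    mult = nparity + 1                  # RAIDZ1 pads to even counts
    return math.ceil((data_sectors + parity) / mult) * mult

# 4-disk RAIDZ1 with 4k sectors, for a range of volblocksizes:
for vbs_k in (4, 8, 16, 32, 64, 128):
    d = vbs_k * 1024 // 4096            # data sectors per block
    alloc = raidz_alloc(d, ndisks=4)
    print(f"volblocksize={vbs_k:>3}k: {alloc:>2} sectors "
          f"({d / alloc:.0%} efficient)")
```

Note that no volblocksize reaches the naive (n-1)/n = 75% of raw space on four disks; the 16k default lands at about 67%, which matches the numbers quoted elsewhere in these results.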
  11. C

    zfs storage woes

    was transferring files. No way to resume the frozen VMs? It's for cold storage. 50% overhead? Only from block size? Could you link the posts?
  12. leesteken

    zfs storage woes

    Restore from backup and re-run the action that failed. How can we know what the VM was doing at the time? It's probably padding (and maybe a little ZFS metadata overhead) due to the number of drives being a poor match for the block size. Assuming that you used RAIDz1 (or RAIDz2 or RAIDz3), which...
  13. J

    Duration of a zpool replace

    Hi! Sorry for the long wait. I never said that, only that restoring a defective drive was very slow this time. But how often does that happen? Of course it's hard for me to draw a comparison with how great a RAID10 would be. For me this is now just an academic...
  14. I

    Duration of a zpool replace

    I can't say. ZFS actually comes from the Oracle server world; presumably there was simply never a need. 'Replace' is the wrong word. But a special vdev makes an L2ARC superfluous in almost all use cases. Describe your use case. Only VMs? As I already said: although...
  15. I

    Duration of a zpool replace

    It depends. You can also put your VMs on an NVMe mirror and the data on a RAIDZ2 dataset. Then you have - lightning-fast VMs - lots of space for data - no problems with fragmentation - and therefore much faster resilvers - data benefits from better performance and...
  16. M

    ZFS RAIDZ Pool tied with VM disks acts strange

    So does this mean I should just use RAID10 for VMs?
  17. leesteken

    ZFS RAIDZ Pool tied with VM disks acts strange

    RAIDz probably does not have the space you think it has, and it tells you. Due to padding and metadata overhead, people are often disappointed (on this forum) by the usable space on a RAIDz1/2/3. This is a common ZFS thing. (d)RAIDz1/2/3 is also often disappointing for running VMs on, as people...
  18. leesteken

    Proxmox VE best practices

    It's your choice. Test it and in case it does not provide enough IOPS, you'll know what might be done about it. I just wanted you to be aware of that thread since a lot of people are disappointed by RaidZ1/2/3. You say you are going to run "a single PVE for multiple production application and...
  19. A

    Proxmox VE best practices

    Using a mirrored stripe setup for VMs will leave me half the original space (3.9TB), as compared to n-1 capacity (6.7TB), or less because of block-size padding, though not to the extent of the mirrored drives. The stripe won't give much of an advantage as SSDs are fast enough, I think.
  20. UdoB

    [TUTORIAL] FabU: Can I use ZFS RaidZ for my VMs?

    Assumption: you use at least four identical devices for that. Mirrors, RaidZ, RaidZ2 are possible - theoretically. Technically correct answer: yes, it works. But the right answer is: no, do not do that! The recommendation is very clear: use "striped mirrors". This results in something similar...