Search results for query: raidz padding

  1. LnxBil

    ZFS/ProxMox Disk Size Weirdness

Yes, but it's the only way to get real numbers. RAIDz is complicated, and the usable space changes constantly with the data you store. If you store small volblocksizes, for example, you get a lot of waste from padding overhead at ashift=12, and if you use large recordsizes, you can store a lot without...
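The padding overhead mentioned in that snippet can be sketched with a little arithmetic. The helper below is an illustrative approximation (names and structure are my own, not OpenZFS code), assuming ashift=12 (4 KiB sectors) and the known rule that ZFS rounds each RAIDZ allocation up to a multiple of parity+1 sectors:

```python
import math

def raidz_sectors_allocated(volblocksize, ndisks, parity=1, ashift=12):
    """Rough sectors allocated for one volblock on RAIDZ (sketch, not OpenZFS source)."""
    sector = 1 << ashift                         # 4 KiB sectors at ashift=12
    data = math.ceil(volblocksize / sector)      # data sectors for one block
    rows = math.ceil(data / (ndisks - parity))   # stripe rows, each needing parity
    alloc = data + rows * parity
    # ZFS rounds RAIDZ allocations up to a multiple of (parity + 1) sectors
    # so freed space never leaves unusable gaps -- this is the "padding"
    return math.ceil(alloc / (parity + 1)) * (parity + 1)

# 8 KiB volblocksize on a 6-disk RAIDZ1: 2 data + 1 parity sectors,
# padded up to 4 -- only half the allocated space holds data.
print(raidz_sectors_allocated(8 * 1024, ndisks=6))    # -> 4
```

This is where the "50% overhead" complaints about small volblocksizes on RAIDz come from: the per-block parity and padding cost dwarfs the data at small block sizes.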
  2. C

    zfs storage woes

was transferring files. No way to resume the frozen VMs? It's for cold storage. 50% overhead? Only from block size? Could you link the posts?
  3. leesteken

    zfs storage woes

Restore from backup and re-run the action that failed. How can we know what the VM was doing at the time? It's probably padding (and maybe a little ZFS metadata overhead) due to the number of drives being a poor match for the block size. Assuming that you used RAIDz1 (or RAIDz2 or RAIDz3), which...
  4. J

Duration of zpool replace

Hi! Sorry for the long wait. I never said that, only that rebuilding a failed drive was very slow this time. But how often does that actually happen? Of course it's hard for me to draw a comparison with how great a RAID10 would be. For me this is now just an academic...
  5. I

Duration of zpool replace

I can't say. ZFS actually comes from the Oracle server world; presumably there was simply never a need. "Replace" is the wrong word, but a special vdev makes an L2ARC superfluous in almost all use cases. Describe your use case. Only VMs? As I already said: although...
  6. S

    [TUTORIAL] Understanding Proxmox ZFS HD and Disk Usage Display

Hi, I tried to answer a question with a link explaining the Proxmox disk-usage displays in the web GUI, but I did not find a post explaining it (possibly because I searched wrongly), so I decided to write a brief overview. I try to explain it simply, even if it is sometimes not 100% accurate...
  7. I

    [SOLVED] Performance comparison between ZFS and LVM

Here is my best guess. The Kingston SSD is a pretty mediocre, run-of-the-mill SSD. It uses an off-the-shelf Phison controller, so I am pretty sure that it is 4k. Even if it is 512b or 16k internally, the Phison controller was tuned for the default 4k and you won't see much of a difference. In my...
  8. I

ZFS and confusing size figures

RAIDZ is not suitable for ZVOLs (i.e. VM disks). You get the same capacity as two mirrors, but with all the drawbacks of RAIDZ. What kind of "cache"? You don't have deduplication enabled, do you?! It's a bit mean of ZFS, but that expectation is wrong! One thing is the size of the VM disk...
  9. M

    ZFS RAIDZ Pool tied with VM disks acts strange

So does this mean I should just use RAID10 for VMs?
  10. leesteken

    ZFS RAIDZ Pool tied with VM disks acts strange

RAIDz probably does not have the space you think it has, and it tells you so. Due to padding and metadata overhead, people (on this forum) are often disappointed by the usable space of a RAIDz1/2/3. This is a common ZFS thing. (d)RAIDz1/2/3 is also often disappointing for running VMs on, as people...
  11. A

    Proxmox VE best practices

Using a mirrored stripe setup for VMs will leave me half the original space (3.9TB), compared to n-1 capacity (6.7TB) or less because of block-size padding, but not to the extent of the mirrored drives. The stripe won't give much advantage as SSDs are fast enough, I think.
  12. UdoB

    [TUTORIAL] FabU: Can I use ZFS RaidZ for my VMs?

Assumption: you use at least four identical devices for that. Mirrors, RaidZ and RaidZ2 are possible - theoretically. The technically correct answer: yes, it works. But the right answer is: no, do not do that! The recommendation is very clear: use “striped mirrors”. This results in something similar...
  13. VictorSTS

    [SOLVED] ZFS Eating Storage like nothing

ZFS RAIDz's padding in action. There are dozens of forum posts about this [1]. You will have to find the right balance between volblocksize and write amplification (which depends on your applications). For me RAIDz is a no-go for VM storage, unless the performance requirement is low. AFAIK...
  14. D

    ZFS reporting drastically different numbers

My initial research points at padding overhead (and it is possible thin provisioning didn't get enabled at creation), but this also seems wildly off compared to other examples. The drive called "zfs" is made up of six 1.96TB SSDs in a RAIDZ. When I go to the summary for the zfs...
  15. I

    Why Does ZFS Hate my Server

If you don't care or don't want to bother with volblocksize, recordsize, fragmentation, padding or read/write amplification, that is totally fine. Just be warned that it CAN lead to poor performance and storage efficiency. You can avoid said problems by sticking to two simple rules. And that is why...
  16. J

    [SOLVED] ZFS pool almost full despite reducing zvol size

I checked out the OpenZFS documentation to confirm what you are saying, and I think I understand it better now; I will likely use this chart instead of the PVE GUI in the future: https://openzfs.github.io/openzfs-docs/Basic%20Concepts/RAIDZ.html. I wrongly assumed that going from 80TB to 60TB...
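The shape of the capacity chart linked in that snippet can be reproduced with a short sketch. This is an approximation under my own assumptions (4 KiB sectors at ashift=12, allocations padded to a multiple of parity+1 sectors), not OpenZFS source code:

```python
import math

def raidz_efficiency(volblocksize, ndisks, parity, ashift=12):
    """Approximate fraction of allocated RAIDZ space that holds data (sketch)."""
    sector = 1 << ashift
    data = math.ceil(volblocksize / sector)
    rows = math.ceil(data / (ndisks - parity))       # one parity set per stripe row
    alloc = data + rows * parity
    alloc = math.ceil(alloc / (parity + 1)) * (parity + 1)  # padding to parity+1
    return data / alloc

# 6-wide RAIDZ1: small volblocksizes fall well short of the ideal 5/6 ~ 83%
for kib in (8, 16, 32, 64, 128):
    eff = raidz_efficiency(kib * 1024, ndisks=6, parity=1)
    print(f"{kib:>4} KiB volblocksize: {eff:.0%} usable")
```

Running this shows efficiency climbing from 50% at 8 KiB toward a plateau around 80% at larger volblocksizes, which is why the chart (rather than raw n-1 arithmetic) is the right tool for sizing a RAIDz pool.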
  17. I

Which RAID controller is compatible with Proxmox

Another myth. Anyone who skips RAIDZ and uses mirrors runs into no padding problems and doesn't have to worry about details like volblocksize. The wear is almost identical to ext4; only sync writes hit a ZFS without a SLOG twice as hard. With a decent TBW rating that's also no...
  18. I

    Windows VM Trimming

I am playing with it now, thank you. Side question though: does this not also happen with Proxmox Backup Server? At that scale, with 3 stripes of 12 drives on RAIDz it would be hard for me to spot by myself. How can one check whether padding is an issue in that array too?
  19. leesteken

    I/O and Memory issues, losing my mind...

Before the edit, I warned about RAIDz1 and you showed a RAIDZ1, so I expected you would put the two together. It's only really similar in the sense that RAID5 and RAIDz1 can lose one drive without data loss. Each write is striped in pieces, but there is a lot of padding (due to small volblocksize)...
  20. I

    RAIDz1 block size to maximize usable space?

It's not an option, unfortunately. I would have done that in a heartbeat, but I'm forced to swap hosts as well, and the newer hosts are coming from being VMware vSAN nodes that didn't come with or need hardware RAID controllers. So... I'm here now. I don't have any preference as to what FS I use...