Search results for query: raidz1 padding

  1. LnxBil

    ZFS reported space vs used space

    In RAID5 that's true, but there is no RAID5 in ZFS. RAIDz1 is not RAID5, and that's the wrong assumption. Yes, it comes down to the padding overhead and the best volblocksize for a given set of disks. Just search again with those terms; it has also been discussed plenty of times
  2. Dunuin

    RaidZ1 performance ZFS on host vs VM

    And as already said... there is padding overhead. PVE 7 uses an 8K volblocksize by default. Combined with a 4-disk raidz1 and ashift=12, this means you lose 50% and not 25% of the raw capacity, as everything written to a zvol will be 150% in size. So on top of your data blocks there will be...
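The 50% figure from this post can be reproduced with a small allocation model. The sketch below is a simplified approximation of RAIDZ's allocation rules (sector-sized parity per stripe, allocations padded up to a multiple of parity + 1); the disk count, ashift and volblocksize come from the post, while the function name and model are my own simplification:

```python
import math

def raidz_alloc_sectors(block_bytes, ndisks, parity, ashift=12):
    """Approximate raw sectors a RAIDZ vdev allocates for one block.

    Simplified model: one parity sector per stripe of up to
    (ndisks - parity) data sectors, with the total allocation
    rounded up to a multiple of (parity + 1) sectors (RAIDZ padding).
    """
    sector = 1 << ashift                    # 4K sectors with ashift=12
    data = math.ceil(block_bytes / sector)  # data sectors needed
    stripes = math.ceil(data / (ndisks - parity))
    total = data + stripes * parity         # data + parity sectors
    pad_unit = parity + 1
    return math.ceil(total / pad_unit) * pad_unit

# 4-disk raidz1, ashift=12, PVE 7's default 8K volblocksize:
alloc = raidz_alloc_sectors(8192, ndisks=4, parity=1) * 4096
print(alloc)  # 16384 bytes of raw space for 8K of data -> only 50% usable
```

With an 8K block that is 2 data sectors + 1 parity sector, padded to 4 sectors, which is where the "lose 50%, not 25%" figure comes from.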
  3. P

    RaidZ1 performance ZFS on host vs VM

    The speed should still be about 3x the slowest SSD, shouldn't it? According to this formula: streaming write speed = (N - p) * streaming write speed of a single drive
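The quoted rule of thumb is trivial arithmetic; the sketch below just plugs in illustrative numbers (4 disks, one parity disk, 500 MB/s per SSD are assumptions, not from the thread):

```python
def raidz_streaming_write(ndisks, parity, drive_speed_mb_s):
    """Rough streaming-write estimate: parity drives add no bandwidth."""
    return (ndisks - parity) * drive_speed_mb_s

# 4-disk raidz1 of hypothetical 500 MB/s SSDs:
print(raidz_streaming_write(4, 1, 500))  # 1500 -> about 3x a single SSD
```

Note this is a best-case streaming estimate; as the later posts point out, padding and write amplification can pull real-world numbers well below it.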
  4. Dunuin

    Doubt of space usage zfs pool

    So 32GB is used by snapshots. Padding overhead isn't a problem, as it is a mirror and not raidz1/2/3. But there are 454GB used by refreservation, which means you either: A.) forgot to check the "thin" checkbox when creating that ZFS pool storage, so it's thick-provisioned and the virtual disks always...
  5. leesteken

    RaidZ1 performance ZFS on host vs VM

    Raidz1(/2/3) has to wait for the slowest drive (because it has to wait for all drives, and since they are different makes and models, the "slowest one" might change during the workload) and has additional write amplification due to padding.
  6. Dunuin

    question about ZFS UI vs. cmdline

    My guess would be that you are using raidz1/2 without increasing the volblocksize. Then it wouldn't be uncommon for storing something like 300GB on a VM's virtual disk to consume something like 562GB of actual space on the pool. Search this forum for "padding overhead".
  7. W

    Virtual Disk with Real disks in RAIDZ1 for Truenas

    Thanks for understanding my case. Currently I don't need extra space, and I can go for higher-capacity NVMes in the future. So the best way to run Nextcloud is to: 1. Use the primary disk for the VM install (Debian with CasaOS) 2. Use a 3-way mirrored zpool from 3 NVMes as data storage and pass it to...
  8. leesteken

    Virtual Disk with Real disks in RAIDZ1 for Truenas

    For reliability use a three-way mirror, so that your data is still redundant while you replace a broken drive. I like ZFS because of the self-healing and bitrot detection, but raidz1 is not the same as RAID5, and ashift, volblocksize and padding need to be balanced to get the most space (and...
  9. W

    Virtual Disk with Real disks in RAIDZ1 for Truenas

    So is it better to go with an LVM pool using both NVMe SSDs with CasaOS, or something lightweight for network sharing, and instead of going for redundancy go with daily backups? I want my Nextcloud storage to be as reliable as I can :(
  10. leesteken

    Virtual Disk with Real disks in RAIDZ1 for Truenas

    66% usable space is more difficult than it looks: https://forum.proxmox.com/search/6587155/?q=raidz1+padding&o=date
  11. Dunuin

    ZFS vs Single disk configuration recomendation

    Recordsize is an "up to" value. Even with the default 128K recordsize, ZFS can write small files as, for example, a 4K, 8K, 16K, ... sized record. So no, IO amplification of small files shouldn't be that bad. It's more about IO amplification of big files, like when writing an 8MB file and using an 8K...
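The "up to" behavior can be sketched in a few lines: a file smaller than the recordsize occupies a single record of roughly its own size, while a large file is split into full records. This is a simplification (it ignores compression and metadata) with made-up helper names, using the sizes mentioned in the post:

```python
import math

def records_for_file(file_bytes, recordsize):
    """(count, record size) ZFS uses for a file -- simplified model."""
    if file_bytes <= recordsize:
        return 1, file_bytes          # one record, sized to the file
    n = math.ceil(file_bytes / recordsize)
    return n, recordsize              # many full-size records

print(records_for_file(4096, 128 * 1024))  # (1, 4096): small file, small record
print(records_for_file(8 * 2**20, 8192))   # (1024, 8192): 8MB file at 8K recordsize
```

So a 4K file at the default 128K recordsize costs one small record, but an 8MB file at an 8K recordsize becomes 1024 separate records, which is where big-file IO amplification comes from.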
  12. leesteken

    (beginner) NVMe drive setup: will I be stupid?

    raidz1 will not give you anything near 4TB when using three drives of 2TB. Please search the forum for ZFS raidz padding; you might be better off with a 2x2TB mirror. Also, if you decide to buy cheap consumer QLC SSDs, you'll waste your money. Please search the forum about QLC and learn about...
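The "nothing near 4TB" claim can be sanity-checked with a simplified RAIDZ allocation model (one parity sector per stripe, allocation padded to a multiple of parity + 1). The three 2TB drives are from the post; the 8K volblocksize default and the helper are my assumptions:

```python
import math

def usable_fraction(volblock, ndisks, parity, ashift=12):
    """Fraction of raw capacity left as usable data (simplified model)."""
    sector = 1 << ashift
    data = math.ceil(volblock / sector)
    stripes = math.ceil(data / (ndisks - parity))
    total = data + stripes * parity
    pad = parity + 1
    return data / (math.ceil(total / pad) * pad)

raw_tb = 3 * 2                             # three 2TB drives, 6TB raw
frac = usable_fraction(8192, ndisks=3, parity=1)
print(frac, raw_tb * frac)                 # 0.5 3.0 -> ~3TB usable, not ~4TB
```

At an 8K volblocksize, every block is 2 data sectors + 1 parity, padded to 4 sectors, so only half the raw 6TB is usable -- exactly what a 2x2TB mirror would give, with less performance.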
  13. R

    [SOLVED] Frage zu den Snapshots und dem Backup-Server

    Oops, sorry ;) So currently it's six 3TB disks. These will of course be swapped for larger ones later, but there will never be more than six, as the system can't take more. As I said, it's mainly office documents, PDFs and images lying around, basically all the clutter that goes under one's own...
  14. Dunuin

    [SOLVED] Frage zu den Snapshots und dem Backup-Server

    Exactly (well, apart from TB instead of GB ;) ). Whether raidz1/2 is an option for you depends on several things: - Do you want to be able to simply add disks? Difficult with raidz. - Do you want to be able to remove disks? Not possible with raidz. - Do you want to store lots of small files? With raidz...
  15. E

    [SOLVED] Windows VM I/O problems only with ZFS

    Even when using LZ4 compression? I've read many posts that argue that with compression on ZFS, everything changes on this subject. (Even the commonly suggested volblocksize tuning to the ashift and the number of disks in the pool minus parity seems to lose its meaning with compression on.) On these...
  16. VictorSTS

    [SOLVED] Windows VM I/O problems only with ZFS

    Agree, but I'm curious too, as I don't know if that would change anything in the original problem. Theory says you are right; I just would like to test it somehow. RAIDz works, but it has terrible write amplification and padding overhead, plus low performance. There are tons of threads regarding...
  17. Dunuin

    CLUSTER 2 NIDE

    Search this forum for "padding overhead". Every zvol gets created with a volblocksize that can't be changed later. Which volblocksize will be used for creating new zvols is defined in PVE by the "Block size" field of your ZFS storage. It defaults to 8K, and that is always bad when running any...
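Since the volblocksize is fixed at zvol creation, it pays to compute a sensible value before filling in the "Block size" field. A hedged sketch using the same simplified RAIDZ allocation model discussed in these threads: it scans power-of-two volblocksizes for the first one whose efficiency gets close to the parity-only ideal of (N - p)/N. The 95% threshold, the search cap and the function names are my own choices, not from the forum:

```python
import math

def efficiency(volblock, ndisks, parity, ashift=12):
    """Usable fraction of raw space for one block (simplified RAIDZ model)."""
    sector = 1 << ashift
    data = math.ceil(volblock / sector)
    stripes = math.ceil(data / (ndisks - parity))
    total = data + stripes * parity
    pad = parity + 1
    return data / (math.ceil(total / pad) * pad)

def suggest_volblocksize(ndisks, parity, ashift=12):
    """Smallest power-of-two volblocksize near the ideal (N-p)/N ratio."""
    ideal = (ndisks - parity) / ndisks
    vb = 1 << ashift
    while vb <= 1 << 20:                      # arbitrary 1M search cap
        if efficiency(vb, ndisks, parity, ashift) >= 0.95 * ideal:
            return vb
        vb *= 2
    return vb

# 4-disk raidz1 at ashift=12:
print(suggest_volblocksize(4, 1))  # 65536: ~73% usable vs 50% at the 8K default
```

Larger volblocksizes trade padding overhead for more read-modify-write on small IO, so this only optimizes for space, not for every workload.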
  18. Dunuin

    Where's my storage gone

    You can't do that using the web UI. It will only allow you to use an empty whole disk and format it with ext4 to create a new directory storage. What you could do is do the work manually: first create a new dataset (zfs create YourPool/ISOs), then add the mountpoint of that dataset as a...
  19. leesteken

    Big difference in speed tests on the proxmox host and on the virtual machine

    There is overhead in virtual disks. Ext4 uses 4K blocks, while QEMU exposes 512-byte sectors, but ZFS uses some volblocksize (check with zfs get volblocksize), and that causes amplification. Then there is also your 3-drive raidz1, which adds padding and more amplification. All the extra bytes the...
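The layering described here (512-byte virtual sectors, 4K ext4 blocks, a larger volblocksize) causes read-modify-write: any guest write smaller than the volblocksize forces ZFS to rewrite a whole block. A minimal illustration; the 16K volblocksize and the helper name are hypothetical, not taken from the thread:

```python
import math

def zvol_bytes_written(guest_write_bytes, volblocksize):
    """Bytes ZFS must write for a guest write (whole-block granularity)."""
    blocks = math.ceil(guest_write_bytes / volblocksize)
    return blocks * volblocksize

# A 4K ext4 block write against a hypothetical 16K volblocksize:
w = zvol_bytes_written(4096, 16384)
print(w, w / 4096)  # 16384 4.0 -> the 4K guest write becomes a 16K block write
```

And that 16K block then still picks up raidz1 parity and padding on top, which is the "all the extra bytes" the post is referring to.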
  20. leesteken

    Big difference in speed tests on the proxmox host and on the virtual machine

    raidz1 has a lot of padding, and especially when the volblocksize of your virtual disk is not optimal, this amplifies reads and writes, which lowers performance. I don't know enough of the details to explain it better. There are quite a lot of threads about ZFS raidz1 on this forum, but mostly...