Search results for query: padding overhead

  1. I

    Windows VM Trimming

    Ok, that's interesting. What would you recommend I do to resolve the issue? I don't want a massive performance loss, but I assume I need to tweak it to reduce the loss?
  2. Dunuin

    Windows VM Trimming

    ...512K/1M volblocksize. This lost raw capacity is indirect. ZFS will tell you that you got 6 of 7 disks usable for data, but because of padding overhead every zvol will be bigger. So 1TB of data on a zvol might consume, for example, 1.66TB of space. Also keep in mind that the volblocksize can only...
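The inflation described in this snippet can be sketched numerically. The function names below are mine, and the rule is a simplification of the raidz allocator (data sectors, plus one parity sector per stripe of `ndisks - parity` data sectors, padded to a multiple of `parity + 1`); the real allocator has more corner cases. For a hypothetical 7-disk raidz1 with 4K sectors and an 8K volblocksize it yields roughly 1.7x, in the same ballpark as the 1.66x figure in the quote (the exact figure depends on the pool's layout):

```python
import math

def raidz_alloc_sectors(volblocksize, ashift, ndisks, parity):
    """Sectors allocated on a raidz vdev for one zvol block
    (simplified rule: data + parity per stripe, padded to a
    multiple of parity + 1)."""
    sector = 2 ** ashift
    data = math.ceil(volblocksize / sector)
    par = math.ceil(data / (ndisks - parity)) * parity
    pad_unit = parity + 1
    return math.ceil((data + par) / pad_unit) * pad_unit

def apparent_used(volblocksize, ashift, ndisks, parity):
    """Approximate USED bytes ZFS reports per volblocksize bytes
    written: allocated space scaled by the nominal data fraction
    (ndisks - parity) / ndisks, so padding inflates USED."""
    alloc = raidz_alloc_sectors(volblocksize, ashift, ndisks, parity)
    return alloc * 2 ** ashift * (ndisks - parity) / ndisks

# Hypothetical 7-disk raidz1, ashift=12, 8K volblocksize:
print(apparent_used(8 * 1024, 12, 7, 1) / (8 * 1024))
```
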
  3. I

    Windows VM Trimming

    I just noticed the block size on zfs is 128K, we use ReFS with 64K block size. Would that difference be what's causing a lot of wasted space?
  4. I

    Windows VM Trimming

    I have added the output below, unsure what to look for in this regard @Dunadan? root@pm2:~# zpool list -v NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT rpool 135G 3.08G 132G -...
  5. Dunuin

    Windows VM Trimming

    Maybe you are using a raidz1/2/3 with default volblocksize and it is "padding overhead"? Output of zpool list -v and zfs list -o space would give some hints.
  6. Z

    How to configure ZFS, 3 drives total - 2 + 1 hot spare

    Do I need to do a VM backup in this situation? Isn't it enough to migrate the VM to another resource and then migrate it back?
  7. leesteken

    PVE disk distribution (partitioning) and RAIDZ1 space confusion

    ...it can cope with losing one drive. ZFS is slower as it has more features and therefore more overhead. RAIDz1(/2/3) can have a lot of padding overhead (especially with so few drives), which reduces the available space (and you want to keep it below 80% full) and the (duplicated) ZFS metadata...
  8. leesteken

    I/O and Memory issues, losing my mind...

    Before the edit, I warned about RAIDz1 and you showed a RAIDZ1, so I expected you would put it together. It's only really similar in the sense that RAID5 and RAIDz1 can lose one drive without data loss. Each write is striped in pieces but there is a lot of padding (due to small volblocksize)...
  9. C

    [SOLVED] ZFS Level und Upgrade-Fragen

    I actually think it's more because of IOPS... although that would of course also be a decisive point :D - it's simply been too long :D The discussion about what to use keeps coming up anyway... The annoying part is when two of the MAIN disks in a RAID 10 fail, then I have no idea what to do...
  10. A

    [SOLVED] ZFS Level und Upgrade-Fragen

    Maybe because of the padding overhead of z1/z2? This wiki page has everything important in it (in English): https://pve.proxmox.com/wiki/ZFS_on_Linux
  11. S

    Best disk setup for config

    I will learn enough to manage the storage. But I just don't have the time to become an expert in the various configurations and benefits or drawbacks to each. Writing scripts for monitoring won't be a problem for me, I've worked in software engineering for over 20 years so when it comes to any...
  12. Dunuin

    Best disk setup for config

    ...that you shouldn't fill your pool too much. The usual recommendation is to only fill it to 80%. You should also search this forum for "padding overhead", as most people don't understand that any raidz might waste tons of space if the blocksize was chosen too small, and then you might lose those...
  13. Dunuin

    [SOLVED] Intel SSDPF2KX076TZ 7,68TB

    No, that only works from 3 nodes on, and preferably with many more. It can be done. But you won't get the full 75% because of padding overhead, and IOPS performance is halved.
  14. Dunuin

    ZFS + Raid: Aufteilung NVMe SSD's

    ...then often also rely on data caching in RAM, for example, because the NVMes are fast enough. With a 4-disk raidz1, don't forget that the block size of the ZFS storage should be set to a minimum of 16K. Possibly even 64K, if you don't want to lose another 6% of raw capacity to padding overhead.
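The roughly 6% gap mentioned here can be reproduced with a quick sweep. This is a sketch using the simplified raidz allocation rule (one parity sector per stripe of `ndisks - parity` data sectors, padded to a multiple of `parity + 1`); the function name and defaults (4-disk raidz1, ashift=12) are mine, taken from the post's scenario:

```python
import math

def raw_efficiency(volblocksize, ashift=12, ndisks=4, parity=1):
    """Fraction of the allocated raw sectors that hold actual data
    for one zvol block on raidz (simplified allocation rule)."""
    sector = 2 ** ashift
    data = math.ceil(volblocksize / sector)
    par = math.ceil(data / (ndisks - parity)) * parity
    pad_unit = parity + 1
    alloc = math.ceil((data + par) / pad_unit) * pad_unit
    return data / alloc

for kib in (8, 16, 32, 64):
    eff = raw_efficiency(kib * 1024)
    print(f"{kib:>3}K volblocksize -> {eff:.1%} of raw capacity usable")
```

On these assumptions, 8K comes out at 50% usable, 16K and 32K at about 66.7%, and 64K at about 72.7% — i.e. 64K recovers roughly 6 percentage points of raw capacity over 16K, consistent with the post.
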
  15. N

    RAIDZ mit TrueNAS und NextCloud

    I created the ZFS pool following this tutorial: https://youtu.be/oSD-VoloQag?t=554. Since I'm not using it yet, I can just remove the storage and re-create it as ZFS. Should I do that for the backups and images as well? I think ZFSpool means ZFS. Yes, that's why I use...
  16. Dunuin

    RAIDZ mit TrueNAS und NextCloud

    ...but also create the storage with a "Block Size" of 32K instead of the default 8K/16K, because otherwise you will waste a lot of space due to padding overhead. The idea with a hypervisor is to run everything separately, for easier manageability, less...
  17. Dunuin

    Recommendations - Proxmox Workstation

    ...Raidz won't be great for IOPS performance in case your workload care about such things and comes with some additional limitations (padding overhead, you can't remove vdevs, ...). Your CoreNVME needs a volblocksize of 16K or even 64k to not waste capacity. Your CoreSpin needs a volblocksize of...
  18. Dunuin

    Disk Konfiguration fürs HomeLab

    ...all only ever get smaller, never larger, on the way from the guest to the disk. Choose a suitable ashift and volblocksize to match the physical sectors and avoid padding overhead. No encryption. Ideally only large sequential async writes and no small random sync writes. Make sure the data is well...
  19. LnxBil

    ZFS reported space vs used space

    For RAID5 it's true, but there is no RAID5 in ZFS. RAIDz1 is not RAID5, and that's the wrong assumption. Yes — the padding overhead and the best volblocksize for a given set of disks. Just search again with those terms; it has been discussed plenty of times.
  20. Dunuin

    RaidZ1 performance ZFS on host vs VM

    And as already said...there is padding overhead. PVE7 uses an 8K volblocksize by default. Combined with a 4-disk raidz1 and ashift=12, this means you lose 50% and not 25% of the raw capacity, as everything written to a zvol will be 150% in size. So on top of your data blocks there will be...
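The arithmetic behind the quoted 50% and 150% figures can be checked directly. This is a sketch of the simplified raidz allocation rule (the real allocator has more corner cases); all parameters are taken from the post itself:

```python
import math

# Worked example for the quoted numbers: 4-disk raidz1, ashift=12
# (4K sectors), PVE 7's default 8K volblocksize.
ndisks, parity, ashift = 4, 1, 12
sector = 2 ** ashift
volblocksize = 8 * 1024

data = math.ceil(volblocksize / sector)              # 2 data sectors
par = math.ceil(data / (ndisks - parity)) * parity   # 1 parity sector
alloc = data + par                                   # 3 sectors...
alloc = math.ceil(alloc / (parity + 1)) * (parity + 1)  # ...padded to 4

print(f"allocated: {alloc} sectors for {data} data sectors "
      f"-> {data / alloc:.0%} of raw capacity usable")
# -> allocated: 4 sectors for 2 data sectors -> 50% of raw capacity usable

# USED as reported by ZFS is scaled by the nominal data fraction
# (ndisks - parity) / ndisks, so each 8K block shows up as:
used = alloc * sector * (ndisks - parity) / ndisks
print(f"reported USED per 8K written: {used / 1024:.0f}K "
      f"({used / volblocksize:.0%})")
# -> reported USED per 8K written: 12K (150%)
```

So only half the allocated raw sectors carry data (50% lost instead of the nominal 25% parity), and each 8K written is accounted as 12K — the 150% inflation the post describes.
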