Search results for query: padding overhead

  1. Dunuin

    ZFS + PVE Disk usage makes no sense

    Please search the forum for "padding overhead". With the default ashift=12 + volblocksize=8K and 3x 12TB disks in raidz1 you only get 14.4TB of usable storage for VM disks: 3x 12TB = 36TB raw storage, minus 12TB parity data (-33%) = 24TB usable storage. Everything written to a zvol will be 133% in...
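    The arithmetic behind figures like these can be sketched with a small calculator based on the raidz allocation rules described in the Delphix blog post linked elsewhere in these results (the function name and structure are my own sketch, not from the thread):

    ```python
    import math

    def raidz_alloc_sectors(volblocksize, ashift, ndisks, parity):
        """Raw sectors a raidz vdev allocates for one zvol block.

        Sketch of the allocation rules: data sectors, plus one parity
        sector per stripe of (ndisks - parity) data sectors, rounded
        up to a multiple of (parity + 1) -- the extra is the padding.
        """
        sector = 1 << ashift                        # ashift=12 -> 4096-byte sectors
        data = volblocksize // sector               # data sectors per block
        stripes = math.ceil(data / (ndisks - parity))
        total = data + stripes * parity             # data + parity sectors
        return math.ceil(total / (parity + 1)) * (parity + 1)

    # 3x 12TB raidz1, ashift=12, default volblocksize=8K:
    alloc = raidz_alloc_sectors(8192, 12, 3, 1)     # 4 sectors = 16K raw per 8K block
    raw_efficiency = 2 / alloc                      # 2 data sectors out of 4 -> 50%
    usable_tb = 3 * 12 * raw_efficiency             # 18TB of data fits into 36TB raw
    print(alloc, raw_efficiency, usable_tb)         # 4 0.5 18.0
    ```

    With the "keep 20% free" rule applied, 18TB x 0.8 gives the 14.4TB quoted above; the 133% figure follows because the 16K of raw allocation is accounted as 16K x 2/3 ≈ 10.7K of "usable" space for every 8K written.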
  2. T

    Where did my disk space go?

    ...Using raidz1/2/3 with the default 8K volblocksize will always waste a lot of space when using zvols (i.e. when using VMs) because of padding overhead. Use the forum's search function for more information; I've explained that dozens of times." I just created a 1TB VM disk on a completely new RAIDZ-1...
  3. Dunuin

    Windows VM : poor disk latency

    You are probably still wasting 17% of your raw capacity due to padding overhead when not using a volblocksize of at least 16K. Compare your used space on the ZFS pool with the sum of all guest filesystems. Everything should consume 33% more space, so storing 1TB of data would result in 1.33TB...
  4. Dunuin

    Windows VM : poor disk latency

    Raidz1 isn't great for latency/IOPS. Using raidz1 with 3 disks also means wasting a lot of capacity due to padding overhead if you didn't increase the volblocksize from the default 8K to 16K. And keep in mind that a ZFS pool should always have at least 20% of its capacity free...
  5. E

    Disk (SSD )Performance Question

    Sounds promising. It was proposed that I use a 4-way mirror with NVMe drives, using a separate vdev for the databases! Does a separate SLOG make sense in combination with a 4-way mirror?
  6. Dunuin

    Disk (SSD )Performance Question

    ...a single drive, and you would need to increase the volblocksize way too high for DBs in order not to lose too much capacity due to padding overhead. 2, 4, or 8 enterprise/datacenter-grade NVMes for mixed or write-intense workloads in a striped mirror would give plenty of performance...even...
  7. Dunuin

    Choosing ZFS volblocksize for a container's storage: Same logic as for VMs?

    ...I can recommend this blog post: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz It explains in detail how things work at the lowest level and which volblocksize to use to avoid wasting space because of padding overhead.
  8. Dunuin

    ARC size suggestions

    It's a striped mirror, so padding overhead isn't the problem. And no snapshots are used, so that isn't a problem either. But there is a lot of refreservation, so your discard/TRIM isn't working. So ZFS won't free up space when your guest OS deletes or overwrites something. After choosing a...
  9. Dunuin

    ARC size suggestions

    ...So you might want the volblocksize as small as possible. But the smaller you choose it, the more space you will waste because of padding overhead when choosing raidz1/2/3 (not a problem with striped mirrors). So if, for example, you have a lot of MySQL in your workload with its 16K...
  10. Dunuin

    ARC size suggestions

    ...up 4.) if you use raidz1/2/3 and didn't change the default volblocksize you are probably wasting a lot of capacity due to padding overhead. This will result in everything written to a zvol being way bigger than needed. For points 2 and 3 you can run zfs list -o space -r YourPoolname. If...
  11. Dunuin

    Proxmox Server aufrüsten/umrüsten

    ...you would have to run the pool with a minimum block size (volblocksize) of 32K, otherwise you waste too much capacity due to padding overhead. But that then causes considerable SSD wear and poor performance as soon as you want to write anything smaller than 32K. MySQL...
  12. Dunuin

    Ceph vs ZFS - Which is "best"?

    ...increase the volblocksize to at least 16K (at least as long as you are using ashift=12) if you don't want to waste a lot of capacity because of padding overhead. So a 3-disk raidz1 should be terrible for postgres with its 8K writes, because the volblocksize has to be at least 16K. And MySQL with its 16K...
  13. Dunuin

    2tb of vms 4.4tb of storage used.

    So snapshots and discard aren't the problem; padding overhead is. With your ashift of 12 and volblocksize of 16K on a 6-disk raidz3, only 33% of the raw storage is usable (and of that, 20% should be kept free, so actually only about 26% of the raw storage is usable for virtual disks). So it's...
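    The 33% figure for this setup can be checked with a few lines of arithmetic following the raidz allocation rules (data + parity sectors, padded up to a multiple of parity + 1); this is my own sketch of that calculation, not from the thread:

    ```python
    import math

    # 6-disk raidz3, ashift=12 (4K sectors), volblocksize=16K -- the setup above.
    ndisks, parity = 6, 3
    data = 16384 // 4096                                   # 4 data sectors per block
    par = math.ceil(data / (ndisks - parity)) * parity     # 6 parity sectors
    total = data + par                                     # 10 sectors
    alloc = math.ceil(total / (parity + 1)) * (parity + 1) # padded up to 12 sectors
    print(data / alloc)                                    # ~0.333 -> 33% of raw holds data
    ```

    And 33% x 0.8 ≈ 26% once 20% of the pool is kept free, matching the numbers in the post.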
  14. LnxBil

    2tb of vms 4.4tb of storage used.

    Just to be precise: it's not an exclusive either/or. It could also be all three of them, and that is my guess too.
  15. Dunuin

    2tb of vms 4.4tb of storage used.

    It's usually either snapshots, padding overhead from using raidz1/2/3 with too low a volblocksize, or discard/TRIM not being set up correctly. The output of zfs list -o space would also be useful to see whether snapshots or missing discard is the problem, as well as zpool get ashift and zfs get...
  16. Dunuin

    ZFS for VMs - where did my hdd space go?

    Like apoc already said, it's padding overhead. With ashift=12, the default volblocksize of 8K, and a four-disk raidz1 you only get 40% of the raw capacity as usable storage for VM disks. So with 16TB of raw storage, only 6.4TB or 5.82 TiB. Raw storage is 16TB. You lose 25% for parity, so ZFS will...
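    The 40% figure can be reproduced with the same padding arithmetic as above, worked through for this four-disk case (a sketch of the calculation, not from the thread):

    ```python
    import math

    # 4-disk raidz1, ashift=12, default volblocksize=8K -- the case above.
    data = 8192 // 4096                           # 2 data sectors per 8K block
    par = math.ceil(data / (4 - 1)) * 1           # 1 parity sector
    alloc = math.ceil((data + par) / 2) * 2       # padded up to 4 sectors = 16K raw
    raw_eff = data / alloc                        # only 50% of raw is actual data
    usable_tb = 16 * raw_eff * 0.8                # minus the ~20% to keep free
    print(usable_tb)                              # 6.4 -> the 6.4TB quoted above
    ```

    So half of every allocation is parity plus padding, and keeping 20% free brings the 8TB down to the quoted 6.4TB (40% of raw).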
  17. A

    ZFS for VMs - where did my hdd space go?

    I had a similar issue with my setup and padding overhead. Maybe this helps: https://forum.proxmox.com/threads/unexpected-pool-usage-of-zvol-created-on-raidz3-pool-vs-mirrored-pool.65018/post-293908
  18. Dunuin

    Welches NAS OS für Proxmox?

    I don't quite trust disk passthrough anymore after a few problems. Virtual disks are an option, but PVE's GUI only supports HW RAID or ZFS. With ZFS, don't forget that with raidz you also lose capacity to padding overhead if you don't raise the volblocksize. Then...
  19. Dunuin

    Need to recover a file system on a QEMU harddisk stored on a ZFS pool

    The problem is padding overhead when using zvols on raidz. See here to learn how to calculate the usable size and how to minimize the capacity loss: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz Basically, with an 8-disk raidz1 with...
  20. Dunuin

    Unterschiedliche Speicherkapazität-Anzeige

    It always depends on how you look at the capacity and which command you use. The "zpool" command, for example, always shows the raw capacity (including the capacity used for parity), so just under 6TB. The "zfs" command shows the "usable" capacity with the parity capacity subtracted, so...