Search results for query: padding overhead

  1. Dunuin

    [TUTORIAL] If you are new to PVE, read this first; it might assist you with choices as you start your journey

...least using modern 4K disks) to be increased when using any raidz1/2/3 if you don't want to waste a big portion of your space due to padding overhead. Every pool layout needs a different volblocksize, so you would need to create multiple big tables they could pick a volblocksize from if...
  2. Dunuin

    [SOLVED] Testing ZFS performance inside lxc container (mysql)

...PS: Would be great if PVE could do something similar when creating a new ZFSPool storage. Would reduce a lot of the support needed answering people's threads complaining about pools being smaller than expected, because they don't understand that padding overhead.
  3. Dunuin

    zfs thin provision space usage discrepancies

...like ZFS, so CoW on top of CoW) as well as the overhead of that additional filesystem from the ZFS dataset. To lower the padding overhead I would either: A.) increase the volblocksize in case you only have data that does big async sequential reads/writes. Would be really bad when using DBs...
  4. Dunuin

    zfs thin provision space usage discrepancies

Search this forum for "padding overhead". You can't use any raidz1/2/3 with the default 8K volblocksize or you will get massive padding overhead, causing everything written to a zvol to consume more space. The smaller your volblocksize or the more disks your raidz1/2/3 consists of, the more space...
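
    The rule of thumb in the snippet above (smaller volblocksize or more disks means more padding) can be sketched with a simplified allocation model for RAIDZ zvol blocks: data sectors, plus one parity sector per stripe row, padded up to a multiple of (parity + 1) sectors. The function name and the model are illustrative assumptions based on these posts, not the actual ZFS allocator:

    ```python
    import math

    def raidz_alloc_sectors(volblocksize: int, ndisks: int, parity: int, ashift: int = 12) -> int:
        """Sectors allocated on a raidz vdev for one zvol block (simplified model)."""
        sector = 1 << ashift                           # sector size, e.g. 4K for ashift=12
        data = math.ceil(volblocksize / sector)        # data sectors per block
        stripes = math.ceil(data / (ndisks - parity))  # stripe rows needed
        total = data + stripes * parity                # add one parity sector per row
        pad_unit = parity + 1                          # allocations are padded to this multiple
        return math.ceil(total / pad_unit) * pad_unit

    # 8K volblocksize on a 4-disk raidz1 with ashift=12:
    # 2 data sectors + 1 parity = 3, padded up to 4 sectors = 16K raw per 8K written.
    print(raidz_alloc_sectors(8 * 1024, ndisks=4, parity=1))   # 4
    print(raidz_alloc_sectors(64 * 1024, ndisks=4, parity=1))  # 22
    ```

    With 8K blocks, half of every allocation is parity and padding; with 64K blocks, 16 of 22 sectors are data, much closer to the ideal 3-of-4 for a 4-disk raidz1.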
  5. Dunuin

    RAIDZ1 shows wrong space?

    ...Those Directory Storages are probably using datasets and datasets use recordsize instead of volblocksize. And padding overhead only affects zvols, not datasets.
  6. Dunuin

    RAIDZ1 shows wrong space?

    ...parity) while the "zfs" command will show the size with parity already subtracted. You might also want to search this forum for "padding overhead" because with default values your 20TB pool will only allow you to store 10TB of VM virtual disks (or only 8-9TB in case you care about...
  7. Dunuin

    RAIDZ1 resizing

Search this forum for "padding overhead". When storing a zvol with too low a volblocksize on a raidz1/2/3 with too many disks, you will get padding overhead and everything will consume more space. And this volblocksize can only be set once, at creation of a zvol.
  8. Dunuin

    ubuntu 20 VM - SSD emulation is recognised as HDD

...here, as there is no "usedrefreserv" and no "usedsnap". But you are using a raidz2 and you probably didn't increase the volblocksize, so everything you write to a zvol will be way bigger because of padding overhead. That's why your 7TB used inside the VM will consume 13TB of the pool's capacity.
  9. D

    ubuntu 20 VM - SSD emulation is recognised as HDD

That would be bad for me, almost doubled storage consumption; I haven't seen anything like that with ZFS yet.
  10. Dunuin

    ubuntu 20 VM - SSD emulation is recognised as HDD

What do zpool list -v and zfs list -o space return? A bigger zvol size could also be because of padding overhead.
  11. Dunuin

    ZFS: Storage space for zvols almost doubles when transferred to raid-z3 pool

...You lose 75% of your raw capacity, so 75% of those 150TB. 25% of those 150TB is actually usable, 30% lost because of parity and 45% lost because of padding overhead. So in theory your zvols should be +180% in size. Yes. No, all raidz1/raidz2/raidz3 have this padding overhead. A raidz2 won't help.
  12. Dunuin

    ZFS: Storage space for zvols almost doubles when transferred to raid-z3 pool

...consists of, the bigger your volblocksize will have to be when using raidz1/2/3, otherwise everything will be bigger because of the padding overhead. How many disks does your raidz3 consist of? Example: 9 disk raidz3 with ashift=12 would mean you lose 75% of the raw capacity when using the...
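
    The 9-disk raidz3 example above can be checked with the same simplified allocation model (data sectors, one parity sector per stripe row, padded to a multiple of parity + 1); this is an illustrative sketch, not the real allocator:

    ```python
    import math

    # Worked example: 9-disk raidz3, ashift=12 (4K sectors), default volblocksize=8K.
    ashift, ndisks, parity = 12, 9, 3
    sector = 1 << ashift
    data = math.ceil(8 * 1024 / sector)            # 2 data sectors
    stripes = math.ceil(data / (ndisks - parity))  # 1 stripe row
    total = data + stripes * parity                # 2 data + 3 parity = 5 sectors
    alloc = math.ceil(total / (parity + 1)) * (parity + 1)  # padded up to 8 sectors
    print(alloc * sector // 1024, "KiB raw per 8 KiB block")  # 32 KiB
    # 8K of data occupies 32K raw, so only 25% of raw capacity is usable:
    # the 75% loss mentioned in the snippet above.
    ```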
  13. A

    Windows Server - Festplatte verkleinern fordert gesamten Host

Which ZFS option would you recommend for the existing disks?
  14. Dunuin

    Windows Server - Festplatte verkleinern fordert gesamten Host

...like only a single disk, since that scales only with the number of vdevs, not with the number of disks. On top of that, massive padding overhead (and thus less usable capacity than with a raid10), unless you manually increased the volblocksize to at least 16K. Consumer...
  15. Dunuin

    28GB Backup, Restore has Taken 4hr 15min

...be +100%. With a raidz1 you will still get some additional overhead because of the bigger volblocksize that is needed to minimize padding overhead, but this overhead is still less than writing every block of data twice to the disks. But if you care more about IOPS performance than capacity...
  16. Dunuin

    Where is my 1.2TB goes?

Yes, that's padding overhead. With 4 disks in a raidz1 using the default ashift=12 and default volblocksize=8K you will lose 50% of your raw capacity (or even 60% if you care about performance) when using VMs. To not lose that much space to padding overhead you would need to increase your...
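
    The 50%-loss figure for a 4-disk raidz1 at the default 8K volblocksize, and how it improves with bigger blocks, can be sketched with a simplified allocation model (an assumption drawn from these posts, not the actual ZFS code):

    ```python
    import math

    def usable_fraction(volblocksize: int, ndisks: int, parity: int, ashift: int = 12) -> float:
        """Fraction of raw capacity holding data for one zvol block (simplified raidz model)."""
        sector = 1 << ashift
        data = math.ceil(volblocksize / sector)
        stripes = math.ceil(data / (ndisks - parity))
        total = data + stripes * parity
        alloc = math.ceil(total / (parity + 1)) * (parity + 1)  # pad to multiple of parity+1
        return data / alloc

    # 4-disk raidz1, ashift=12: the default 8K volblocksize wastes half the pool.
    for vbs in (8, 16, 64, 256):
        print(f"{vbs:>3}K -> {usable_fraction(vbs * 1024, 4, 1):.0%} of raw capacity usable")
    # 8K -> 50%, 16K -> 67%, 64K -> 73%, 256K -> 74% (the parity-only ideal is 75%)
    ```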
  17. Dunuin

    Where is my zpool storage???

Because of padding overhead, everything written to a zvol will be 160% in size. So when writing 7.5TB of data/metadata to zvols there will also be an additional 4.5TB of padding blocks, resulting in 12TB of consumed space. So you only have 7.5TB of usable space. And then keep in mind that a pool...
  18. Neobin

    Where is my zpool storage???

    Search the forum for "padding overhead", especially from @Dunuin, e.g.: https://forum.proxmox.com/search/5731854/?q=padding+overhead&t=post&c[users]=Dunuin&o=date
  19. Dunuin

    Festplatten und andere Hardware Konfiguration

A bad idea for DBs like PostgreSQL. With a raidz1/2/3 you have to increase the blocksize, because otherwise you get padding overhead and the capacity loss would be no better than with a mirror. With ashift=12 and 3 disks in a raidz1, for example, the volblocksize would have to be at least 16K and...
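
    The 3-disk raidz1 claim above (at 8K the loss matches a mirror, at 16K it drops to parity-only overhead) can be checked with the same simplified allocation model; the helper is an illustrative assumption, not the real allocator:

    ```python
    import math

    def raw_per_logical(volblocksize: int, ndisks: int, parity: int, ashift: int = 12) -> float:
        """Raw sectors allocated per logical sector written (simplified raidz model)."""
        sector = 1 << ashift
        data = math.ceil(volblocksize / sector)
        stripes = math.ceil(data / (ndisks - parity))
        total = data + stripes * parity
        alloc = math.ceil(total / (parity + 1)) * (parity + 1)  # pad to multiple of parity+1
        return alloc / data

    # 3-disk raidz1, ashift=12:
    print(raw_per_logical(8 * 1024, 3, 1))   # 2.0 -> same capacity loss as a mirror
    print(raw_per_logical(16 * 1024, 3, 1))  # 1.5 -> parity overhead only, no padding
    ```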
  20. P

    Slow disk freezing the system randomly

    Thanks for your suggestions, for the hardware I cannot invest a lot, I don't have any critical data and all my work gets anyway backed up. Following your valuable suggestion I configured a simple raid1 with the two NVME drives and any hiccups seem to be gone. Now I would like to keep using the...