Search results for query: padding overhead

  1. VictorSTS

    [SOLVED] Windows VM I/O problems only with ZFS

    Not a real expert here, but AFAIK compression doesn't matter for padding, as padding is applied after the data is compressed. Using compression will potentially make the data to write smaller, so ZFS applies padding to the amount of data you actually write. Even if that write will need to use a few extra...
  2. E

    [SOLVED] Windows VM I/O problems only with ZFS

    Even when using LZ4 compression? I've read many posts arguing that with compression on ZFS, everything changes on this subject. (Even the commonly suggested tuning of volblocksize to the ashift and the number of disks in the pool minus parity seems to lose its meaning with compression on.) On these...
  3. VictorSTS

    [SOLVED] Windows VM I/O problems only with ZFS

    ...Theory says that you are right, I just would like to test it somehow. RAIDz works but has terrible write amplification and padding overhead + low performance. There are tons of threads regarding this, i.e...
  4. Dunuin

    CLUSTER 2 NIDE

    Search this forum for "padding overhead". Every zvol gets created with a volblocksize that can't be changed later. Which volblocksize is used for creating new zvols is defined in PVE by the "block size" field of your ZFS storage. It defaults to 8K, and that is always bad when running any...
  5. LnxBil

    [SOLVED] Allocating a virtual disk on a zpool

    There are no absolutes. It depends most importantly on the padding overhead, which does indeed correspond to the number of drives in a vdev and the volblocksize used.
  6. Dunuin

    Where's my storage gone

    ...and when not using at least 16K (or even 64K), every VM will be 150% in size with an ashift=12 4-disk raidz1. Search this forum for "padding overhead" for more details. So the solution would be to edit your ZFS storage and increase the "block size" to something like 64K and then destroy and...
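    The 150% figure in that result can be reproduced with the commonly cited raidz allocation rule (a sketch of the math from these threads, not authoritative ZFS source code): a block allocates its data sectors plus parity sectors for each stripe row, and the total is padded up to a multiple of parity + 1.

    ```python
    import math

    def raidz_allocated_sectors(data_sectors: int, disks: int, parity: int) -> int:
        """Sectors a raidz vdev allocates for one block: data sectors plus
        per-row parity, padded up to a multiple of (parity + 1)."""
        rows = math.ceil(data_sectors / (disks - parity))   # stripe rows needed
        total = data_sectors + rows * parity                # data + parity
        pad_unit = parity + 1                               # raidz padding quantum
        return math.ceil(total / pad_unit) * pad_unit

    # 4-disk raidz1, ashift=12 (4K sectors), 8K volblocksize -> 2 data sectors.
    used = raidz_allocated_sectors(2, disks=4, parity=1)    # 4 sectors (16K) for 8K of data
    ideal = 2 * 4 / (4 - 1)                                 # parity-only expectation: ~2.67
    print(used, used / ideal)                               # 4 sectors, 1.5x -> the "150%"

    # With a 64K volblocksize (16 data sectors) the padding shrinks to ~3%.
    used_64k = raidz_allocated_sectors(16, disks=4, parity=1)
    print(used_64k, used_64k / (16 * 4 / 3))                # 22 sectors, ~1.03x
    ```

    The same formula reproduces the other numbers quoted in these threads: on a 3-disk raidz1 an 8K zvol also allocates 4 sectors (50% of raw capacity lost instead of the expected 33%), and on a 5-disk raidz1 a 32K zvol allocates exactly the ideal 10 sectors.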
  7. leesteken

    Big difference in speed tests on the proxmox host and on the virtual machine

    There is overhead in virtual disks. Ext4 uses 4K blocks, while QEMU shows 512-byte sectors, but ZFS uses some volblocksize (check with zfs get volblocksize) and that causes amplification. Then there is also your 3-drive raidz1, which adds padding and more amplification. All the extra bytes the...
  8. Dunuin

    zfs raid-1 -> raidz-1: 50% more space usage

    Containers always use datasets, where the recordsize applies, and there you generally have no loss (i.e. a larger vDisk) from padding overhead. VMs always use zvols, where the volblocksize applies, and you get padding overhead if your volblocksize is chosen too small (and...
  9. Dunuin

    zfs raid-1 -> raidz-1: 50% more space usage

    ...5-disk raidz1 with ashift=12, you would have to run your ZFS storages with a block size of at least 32K so that you don't get all that padding overhead. And if you change the block size for the storage, that does not change the volblocksize of existing zvols, since that is only set at the time of...
  10. Neobin

    ZFS RaidZ1 Help

    Yes, the padding overhead: https://forum.proxmox.com/search/6247295/?q=padding+overhead&t=post&c[users]=Dunuin&o=date
  11. L

    zfs TB eater

    ok, so what do you recommend I do? (sorry, I'm not a ZFS expert as you can see :-( ) Redo ZFS on my 4 x 4TB disks (3.6TB) in RAIDZ1, but which option must I choose?
  12. Dunuin

    zfs TB eater

    Don't blame ZFS. You lose that additional 4 TB because you didn't set that pool up well. That's just a user error. Read about padding overhead and the volblocksize and you could use your 12TB. But I still wouldn't use the full 12TB, as ZFS always needs some free space for proper operation. I...
  13. SInisterPisces

    Choosing ZFS volblocksize for a container's storage: Same logic as for VMs?

    Hello again. I had not originally planned to do it this way, but I find myself bringing up a MariaDB instance in a container. I want to store the DB itself in an appropriate filesystem for best performance on what is already kind of a potato node. Based on our prior conversation, I think what...
  14. Dunuin

    ZFS Layout 8 Disks

    ...cold storage. Also don't forget that with raidz1/2/3 you have to raise the block size considerably so that you don't lose masses of capacity to padding overhead. Minimum 16K, and for more capacity perhaps even up to 32/64/128K. For VMs I would at least use an enterprise SSD...
  15. EllyMae

    [SOLVED] Allocating a virtual disk on a zpool

    I would think that the overhead and padding between the two almost identically configured zpools (except for the number of drives) would be the same. Therefore, the space used in the first zpool should be the same as the space used in the second zpool. But that's all in the past, albeit...
  16. leesteken

    [SOLVED] Allocating a virtual disk on a zpool

    You are probably using raidz1 (or 2 or 3) and this has a lot of overhead and padding with typical volblocksizes. Several threads about that on this forum but this is also a good overview...
  17. S

    Slow Dual ZFS Mirror Write Performance

    ...the default 8K to 16K, or otherwise you will lose 50% of your raw capacity and not just 33%, because you would get an additional 17% padding overhead. And volblocksize can't be changed after creation of your virtual disks, so you would need to destroy and recreate all virtual disks so the new...
  18. LnxBil

    [SOLVED] ZFS show different sizes!

    We still need a good go-to-guide for ZFS that we can reference here. ZFS is sadly just too complicated in the beginning :(
  19. Dunuin

    [SOLVED] ZFS show different sizes!

    The zpool command shows raw size incl. parity, which isn't usable. And don't forget to increase the storage's block size, or you will only be able to use something like 1.35 TB for VMs because of padding overhead and the space that should be kept free.
  20. Dunuin

    RPool don't display good size

    ...and recreate all those VMs (easiest would be to back up and restore them with the same VMID). Search this forum for "padding overhead" for more info. And VM 90806 got 48.5TB of refreservation. So you either didn't check the "thin" checkbox when creating the ZFS storage so...