Search results for query: padding overhead

  1. Dunuin

    ZFS Raid array eats alot of space

...-> Edit" and set the "Block size" to at least 16K for a 3-disk raidz1. Then backup and restore all your VMs, so new VMs get created overwriting the old ones, as the volblocksize can only be set at the creation of a zvol. For more, please search this forum for "padding overhead".
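The steps Dunuin describes (raise the "Block size", then recreate the zvols) can be sketched on the command line. This is an admin-command fragment, not runnable without a ZFS pool; the pool and zvol names below are placeholders, not taken from the thread:

```shell
# volblocksize is fixed at zvol creation time, so raising the storage's
# "Block size" only affects newly created disks; existing ones must be
# recreated (e.g. via backup + restore). Names below are hypothetical.

# Create a zvol by hand with a 16K volblocksize:
zfs create -V 32G -o volblocksize=16k rpool/data/vm-100-disk-0

# Confirm the property (it is read-only after creation):
zfs get volblocksize rpool/data/vm-100-disk-0
```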
  2. leesteken

    ZFS Raid array eats alot of space

    Indeed, ZFS raidZ1 has huge padding overhead, especially with a small number of drives and a small volblocksize. See Dunuin's excellent analysis and tests about this on this forum.
  3. M

    [SOLVED] Cannot store on ZFS RaidZ volume "out of space"

Thanks for the tip, looks like this is indeed my issue. Shouldn't I go for a dRAID instead? It looks to be more in line with my use case, as it seems a good fit for raids with a larger number of drives: https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_zfs_raid_considerations...
  4. Neobin

    [SOLVED] Cannot store on ZFS RaidZ volume "out of space"

    Have a search/read about "padding overhead" with raidZ, e.g.: https://forum.proxmox.com/search/5932181/?q=padding+overhead&t=post&c[users]=Dunuin&o=date
  5. LnxBil

    ZFS usage incorrect for a VM in RAIDZ1

We still need a page we can refer to ... instead of answering the same question over and over again.
  6. Dunuin

    ZFS usage incorrect for a VM in RAIDZ1

Also search this forum for "padding overhead". If you didn't manually change the blocksize of the raidz1 storage, every virtual disk will consume way more space.
  7. Dunuin

    Replikation auf 2. Node verdoppelt sich?!?

My crystal ball tells me you have a ZFS (striped) mirror on NodeA and a raidz1 or raidz2 on NodeB? In that case it would be padding overhead, and you would have to increase the volblocksize, then delete and recreate all virtual disks. If you don't have a raidz1/2 on NodeB, then you should use zfs...
  8. L

    [SOLVED] Can only use 7TB from newly created 12TB ZFS Pool?

Thank you Dunuin! This was really helpful. I've set the blocksize to 16k and was able to use ~9750 GB of my pool! Thank you for the references, I will refer to this guide first next time :) I was aware of TB vs TiB. It seems that because of the zfs pool blocksize, I am more restricted than...
  9. Dunuin

    [SOLVED] Can only use 7TB from newly created 12TB ZFS Pool?

    Search this forum for "padding overhead". In short: When using a 4 disk raidz1 with the default 8K volblocksize you will lose half of the raw capacity when using VM virtual disks (zvols)...
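The capacity loss described above can be reproduced with a bit of arithmetic. Below is a minimal sketch of a simplified raidz allocation rule, assuming ashift=12 (4K sectors): each block's data sectors get parity added per stripe row, and the allocation is padded up to a multiple of (parity + 1) sectors.

```shell
#!/bin/sh
# Simplified raidz allocation model; numbers match the quoted case:
# 4-disk raidz1, default 8K volblocksize, ashift=12 (4K sectors).
vbs=8192       # volblocksize in bytes
ndisks=4       # disks in the raidz1 vdev
nparity=1      # raidz1
sector=4096    # 2^ashift

data=$(( (vbs + sector - 1) / sector ))      # 8K / 4K = 2 data sectors
rows=$(( (data + ndisks - nparity - 1) / (ndisks - nparity) ))
parity=$(( rows * nparity ))                 # 1 parity sector
total=$(( data + parity ))                   # 3 sectors so far
pad=$(( (nparity + 1 - total % (nparity + 1)) % (nparity + 1) ))
alloc=$(( total + pad ))                     # padded to 4 sectors
echo "allocated $alloc sectors for $data data sectors"
```

Under this model, 8K of data occupies 4 sectors (16K) on disk, i.e. only half the raw capacity is usable, as the post says. Plugging in a 16K volblocksize gives 6 sectors for 4 data sectors: the normal raidz1 parity cost with no padding.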
  10. Neobin

    When i'm create a VM , if i m alocate 4 TB space , VM take 4 TB space from main node , how fixed this?

Check the sparse checkbox on the ZFS storage: [1]. Additional info: Search the forum for "padding overhead" with your raidZ, if you don't already know it, to prevent possible surprises in the future... [1] https://pve.proxmox.com/pve-docs/chapter-pvesm.html#storage_zfspool
  11. Dunuin

    Disk exceeds size defined

Check for common user errors: 1.) storing VM disks on a raidz1/2/3 ZFS pool and not increasing the volblocksize, resulting in massive padding overhead so that every zvol will consume way more space 2.) not checking the "discard" checkbox of the virtual disk, using a protocol like IDE that doesn't...
  12. Dunuin

    local storage used instead of local-zfs

Search this forum for "padding overhead". You will find dozens of posts of mine explaining it. In short: when storing VMs (or rather their zvols) on a raidz1/2/3 ZFS pool, everything will be way bigger because of padding overhead if your zvols were created with too low a volblocksize. Solution...
  13. Dunuin

    Out of space: really ?

...a 16K volblocksize, easier to add more storage when needed, better reliability, resilvering time would be way lower and there is no padding overhead with zvols, so the full 4TB are really usable. Padding overhead, by the way, also only affects zvols and not datasets, so LXCs could use the...
  14. Dunuin

    Out of space: really ?

Let me guess... Your storage is a raidz1/2/3 and you didn't increase the blocksize before creating your first zvols? If yes, then this is normal: because of padding overhead, every zvol will consume way more space.
  15. Dunuin

    Correction of ZFS write amplification

...writes so the SSDs can't optimize the writes for less wear - a raidz1/2/3 isn't great as a VM storage (less IOPS and problems with padding overhead) but total write amplification will be lower, as not everything will have to be written twice (a 5-disk raidz1 will only write those additional +25%...
  16. Dunuin

    Where is my zpool storage???

Padding overhead only affects zvols on raidz1/2/3 (maybe draid too, not sure about that). But a PBS usually uses datasets, so even with a raidz1/2/3 this wouldn't be a problem. But PBS needs IOPS performance and no matter how many disks your raidz1/2/3 would consist of, it will always be as slow...
  17. P

    Where is my zpool storage???

...of two vdevs (each consisting of two mirrored hdds) plus a special device of two mirrored ssds. But I am wondering whether the padding overhead is an issue independent of the pool composition, i.e. would it also happen with mirrored vdevs (you are saying above that it has to do with the metrics...
  18. Dunuin

    default block size 8k

Ok, so basically padding overhead, metadata overhead and compression factor, like I thought.
  19. habitats-tech

    [TUTORIAL] If you are new to PVE, read this first; it might assist you with choices as you start your journey

I will incorporate your suggestions into the forthcoming section on storage. I have another two docs in the works, for networking and clustering. All target small-scale setups.