Search results for query: raidz1 padding

  1. Dunuin

zfs raid-1 -> raidz-1: 50% more space usage

Containers always use datasets, where the recordsize applies, and you generally have no loss (i.e. larger vDisks) from padding overhead. VMs always use zvols, where the volblocksize applies, and you get padding overhead if your volblocksize is chosen too small (and...
  2. Dunuin

zfs raid-1 -> raidz-1: 50% more space usage

With a 5-disk raidz1 at ashift=12 you would need to run your ZFS storages with a block size of at least 32K to avoid all that padding overhead. And if you change the block size of the storage, that does not change the volblocksize of existing zvols, since it is only set at the time of...
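The 32K figure above can be reproduced with a simplified model of raidz allocation (data sectors plus parity sectors, with the total padded to a multiple of nparity + 1). A minimal sketch, assuming ashift=12 (4K sectors); the function name is mine, not from ZFS:

```python
import math

ASHIFT = 12                # 4K sectors
SECTOR = 1 << ASHIFT

def raidz_allocation(data_sectors, width, nparity=1):
    """Sectors a raidz vdev allocates for one block (simplified model)."""
    rows = math.ceil(data_sectors / (width - nparity))  # stripe rows needed
    parity = rows * nparity                             # parity sectors
    total = data_sectors + parity
    # raidz pads every allocation to a multiple of (nparity + 1) sectors
    pad_unit = nparity + 1
    return math.ceil(total / pad_unit) * pad_unit

# 5-disk raidz1: the ideal allocation (parity only, no padding) is 5/4 = 1.25x
for vbs_kib in (8, 16, 32):                             # volblocksize in KiB
    data = vbs_kib * 1024 // SECTOR
    alloc = raidz_allocation(data, width=5)
    print(f"{vbs_kib:>2}K volblocksize -> {alloc / data:.2f}x allocation")
```

With 8K each block pays 2.00x (half the allocation is parity plus padding), while 32K reaches the ideal 1.25x, which is why a 32K block size removes the padding overhead on a 5-disk raidz1.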
  3. S

    ZFS volblocksize per VM disk instead of pool

So this also means that if a VM uses different disks on different ZFS pools, I may also use different volblocksizes - am I right? E.g. Ubuntu: root partition with a Postgres DB = 8K volblocksize + 2nd partition used for SMB storage on a different PVE ZFS pool = 1M volblocksize
  4. leesteken

    ZFS volblocksize per VM disk instead of pool

Indeed, but you usually don't run ZFS on top of ZFS. I do think this point is valid, and you are smart to select a volblocksize that matches the workload inside the VM. But as people with raidz1/2/3 found out: it is also a trade-off with padding, wasted space, IOPS per drive, etc., which is...
  5. L

    zfs TB eater

OK, so what do you recommend I do? (Sorry, I'm not a ZFS expert, as you can see :-( ) Redo ZFS on my 4 x 4TB disks (3.6TB) in RAIDZ1, but which option must I choose?
  6. SInisterPisces

    Choosing ZFS volblocksize for a container's storage: Same logic as for VMs?

    Hello again. I had not originally planned to do it this way, but I find myself bringing up a MariaDB instance in a container. I want to store the DB itself in an appropriate filesystem for best performance on what is already kind of a potato node. Based on our prior conversation, I think what...
  7. Dunuin

    ZFS Layout 8 Disks

HDDs? You won't really have fun with those as VM/LXC storage. There you primarily need IOPS performance for all the OSs running in parallel. Even with 2x 4-disk raidz1 you only get about 200 IOPS. In my opinion, HDDs in raidz1/2 would at most be suitable for additional...
  8. leesteken

    [SOLVED] Allocating a virtual disk on a zpool

    You are probably using raidz1 (or 2 or 3) and this has a lot of overhead and padding with typical volblocksizes. Several threads about that on this forum but this is also a good overview...
  9. Dunuin

    RPool don't display good size

You got ashift=12 (so 4K "sectors") and the default volblocksize=8K with an 8-disk raidz1. That means you will indirectly lose 50% of the raw capacity: 12.5% raw capacity loss because of parity data, and every zvol (so every VM virtual disk) will be 175% in size, as for each 1 TB of data blocks there will...
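The 175% figure can be checked with a simplified model of raidz allocation (data sectors plus parity sectors, padded to a multiple of nparity + 1). A sketch under those assumptions; the names are mine, not from ZFS:

```python
import math

def raidz_allocation(data_sectors, width, nparity=1):
    """Sectors a raidz vdev allocates for one block (simplified model)."""
    rows = math.ceil(data_sectors / (width - nparity))
    total = data_sectors + rows * nparity
    return math.ceil(total / (nparity + 1)) * (nparity + 1)

# 8-disk raidz1, ashift=12, volblocksize=8K -> 2 data sectors per block
data = 2
alloc = raidz_allocation(data, width=8)   # 4 sectors actually allocated
ideal = data * 8 / 7                      # pool "expects" only 12.5% parity
print(f"apparent zvol size: {alloc / ideal:.0%}")
```

Each 8K block allocates 4 sectors (2 data + 1 parity + 1 padding) where the pool's free-space accounting only expects 2 × 8/7 ≈ 2.29, so zvols appear 175% of their nominal size.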
  10. Dunuin

    ZFS Raid array eats alot of space

Short: go to "Datacenter -> Storage -> YourStorageName -> Edit" and set the "Block size" to at least 16K for a 3-disk raidz1. Then back up and restore all your VMs, so new VMs get created replacing the old ones, as the volblocksize can only be set at the creation of a zvol. For more, please...
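The "at least 16K for a 3-disk raidz1" rule can be derived from the same simplified allocation model (data plus parity sectors, padded to a multiple of nparity + 1): pick the smallest power-of-two volblocksize whose allocation ratio matches the parity-only ideal. A sketch, assuming ashift=12; the helper names are mine:

```python
import math

def raidz_allocation(data_sectors, width, nparity=1):
    """Sectors a raidz vdev allocates for one block (simplified model)."""
    rows = math.ceil(data_sectors / (width - nparity))
    total = data_sectors + rows * nparity
    return math.ceil(total / (nparity + 1)) * (nparity + 1)

def min_padding_free_volblocksize(width, nparity=1, ashift=12):
    """Smallest power-of-two volblocksize with no padding loss (model only)."""
    sector = 1 << ashift
    ideal = width / (width - nparity)   # parity-only allocation ratio
    vbs = sector
    while True:
        data = vbs // sector
        if raidz_allocation(data, width, nparity) / data <= ideal + 1e-9:
            return vbs
        vbs *= 2

print(min_padding_free_volblocksize(3) // 1024, "K")  # 3-disk raidz1
print(min_padding_free_volblocksize(5) // 1024, "K")  # 5-disk raidz1
```

This yields 16K for a 3-disk raidz1 and 32K for a 5-disk one, matching the recommendations quoted in these threads. (Wider blocks also work; the trade-off is read/modify/write amplification for small I/O inside the VM.)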
  11. leesteken

    ZFS Raid array eats alot of space

    Indeed, ZFS raidZ1 has huge padding overhead, especially with a small number of drives and a small volblocksize. See Dunuin's excellent analysis and tests about this on this forum.
  12. LnxBil

    ZFS usage incorrect for a VM in RAIDZ1

We still need a page we can refer to ... instead of answering the same question over and over again.
  13. Dunuin

    ZFS usage incorrect for a VM in RAIDZ1

Also search this forum for "padding overhead". If you didn't manually change the block size of the raidz1 storage, every virtual disk will consume far more space.
  14. Dunuin

Replication to the 2nd node doubles?!?

My crystal ball tells me you have a ZFS (striped) mirror on NodeA and a raidz1 or raidz2 on NodeB? Then it would be padding overhead, and you would have to increase the volblocksize, delete all virtual disks and recreate them. If you don't have raidz1/2 on NodeB, then you should, with zfs...
  15. L

    [SOLVED] Can only use 7TB from newly created 12TB ZFS Pool?

Thank you Dunuin! This was really helpful. I've set the blocksize to 16K and was able to use ~9750 GB of my pool! Thank you for the references; I will refer to this guide first next time :) I was aware of TB vs TiB. It seems that because of the ZFS pool blocksize I am more restricted than...
  16. Dunuin

    [SOLVED] Can only use 7TB from newly created 12TB ZFS Pool?

    Search this forum for "padding overhead". In short: When using a 4 disk raidz1 with the default 8K volblocksize you will lose half of the raw capacity when using VM virtual disks (zvols)...
  17. Dunuin

    Disk exceeds size defined

Check for common user errors: 1.) storing VM disks on a raidz1/2/3 ZFS pool without increasing the volblocksize, resulting in massive padding overhead so that every zvol consumes way more space; 2.) not checking the "discard" checkbox of the virtual disk, or using a protocol like IDE that doesn't...
  18. Dunuin

    local storage used instead of local-zfs

Search this forum for "padding overhead". You will find dozens of posts where I explain it. In short: when storing VMs (or rather their zvols) on a raidz1/2/3 ZFS pool, everything will be way bigger because of padding overhead if your zvols were created with too small a volblocksize. Solution...
  19. Dunuin

    Out of space: really ?

Yes, so 163% size in practice and 171% in theory. In case you are running DBs or similar workloads that do a lot of small I/O, I would highly recommend creating a striped mirror (raid10). 8x 1TB disks in a striped mirror would give you 4 times the IOPS performance, and you could use a 16K...