Search results for query: raidz padding

  1. E

    [SOLVED] Windows VM I/O problems only with ZFS

Even if using LZ4 compression? I've read many posts arguing that with compression on ZFS, everything changes on this subject (even the commonly suggested tuning of volblocksize to the ashift and the number of disks in the pool minus parity seems to lose meaning with compression on). On these...
  2. VictorSTS

    [SOLVED] Windows VM I/O problems only with ZFS

Agree, but I'm curious too, as I don't know if that would change anything in the original problem. Theory says you are right; I just would like to test it somehow. RAIDz works, but has terrible write amplification and padding overhead, plus low performance. There are tons of threads regarding...
  3. Dunuin

    CLUSTER 2 NIDE

Search this forum for "padding overhead". Every zvol gets created with a volblocksize that can't be changed later. The volblocksize used for creating new zvols is defined in PVE by the "block size" field of your ZFS storage. It defaults to 8K, and that is always bad when running any...
  4. leesteken

    Big difference in speed tests on the proxmox host and on the virtual machine

There is overhead in virtual disks. Ext4 uses 4K blocks, while QEMU shows 512-byte sectors, but ZFS uses some volblocksize (check with zfs get volblocksize), and that causes amplification. Then there is also your 3-drive raidz1, which adds padding and more amplification. All the extra bytes the...
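The block-size mismatch described in this snippet can be sketched numerically. A minimal illustration, assuming the 8K PVE-default volblocksize mentioned elsewhere in these results (the function name and exact sizes are illustrative, not from any of the quoted posts):

```python
# Worst-case read-modify-write amplification when a guest filesystem
# writes a block smaller than the zvol's volblocksize: ZFS must rewrite
# the whole volblocksize-sized block for each small guest write.
def write_amplification(guest_block: int, volblocksize: int) -> float:
    """Factor by which a single small guest write is amplified."""
    if guest_block >= volblocksize:
        return 1.0
    return volblocksize / guest_block

# ext4's 4K blocks landing in a zvol with the PVE-default 8K volblocksize:
print(write_amplification(4096, 8192))  # -> 2.0
```

This ignores compression and write coalescing, both of which can reduce the effective amplification in practice, as the first result in this list points out.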
  5. S

    ZFS volblocksize per VM disk instead of pool

So this also means that if a VM uses different disks on different ZFS pools, I may also use different volblocksizes - am I right? E.g. Ubuntu: root partition with a Postgres DB = 8K volblocksize + a 2nd partition used for SMB storage on a different PVE ZFS pool = 1M volblocksize
  6. leesteken

    ZFS volblocksize per VM disk instead of pool

Indeed, but you usually don't run ZFS on top of ZFS. I do think this point is valid, and you are smart to select a volblocksize that matches the workload inside the VM. But as people with raidz1/2/3 found out: it is also a trade-off with padding, wasted space, IOPS per drive, etc., which is...
  7. Dunuin

    zfs TB eater

Don't blame ZFS. You lose that additional 4 TB because you didn't set that pool up well. That's just a user error. Read about padding overhead and the volblocksize and you could use your 12TB. But I still wouldn't use the full 12TB, as ZFS always needs some free space for proper operation. I...
  8. SInisterPisces

    Choosing ZFS volblocksize for a container's storage: Same logic as for VMs?

    Hello again. I had not originally planned to do it this way, but I find myself bringing up a MariaDB instance in a container. I want to store the DB itself in an appropriate filesystem for best performance on what is already kind of a potato node. Based on our prior conversation, I think what...
  9. leesteken

    [SOLVED] Allocating a virtual disk on a zpool

    You are probably using raidz1 (or 2 or 3) and this has a lot of overhead and padding with typical volblocksizes. Several threads about that on this forum but this is also a good overview...
  10. Neobin

    When i'm create a VM , if i m alocate 4 TB space , VM take 4 TB space from main node , how fixed this?

Check the sparse checkbox on the ZFS storage: [1]. Additional info: Search the forum for "padding overhead" with your raidZ, if you don't already know it, to prevent possible surprises in the future... [1] https://pve.proxmox.com/pve-docs/chapter-pvesm.html#storage_zfspool
  11. habitats-tech

    [TUTORIAL] If you are new to PVE, read this first; it might assist you with choices as you start your journey

I will incorporate your suggestions into the forthcoming subject of storage. I have another two docs in the works for networking and clustering. All target small-scale setups.
  12. Dunuin

    [TUTORIAL] If you are new to PVE, read this first; it might assist you with choices as you start your journey

At least here it is an issue. ZFS killed 4 consumer SSDs in my home servers over the last year. And two of those were indeed Crucial TLC SSDs, not a single year in use. And they were all just used as pure system/boot disks without any writes from guests. I don't really care about always replacing them...
  13. Dunuin

    [SOLVED] Testing ZFS performance inside lxc container (mysql)

And best not to fill up your pool too much. The more filled your pool is, the faster it will fragment. Are you sure you are not mixing this up with the 128K recordsize? As far as I know, TrueNAS will default to different volblocksizes depending on your pool layout...
  14. Dunuin

    zfs thin provision space usage discrepancies

Search this forum for "padding overhead". You can't use any raidz1/2/3 with the default 8K volblocksize, or you will get massive padding overhead causing everything written to a zvol to consume more space. The smaller your volblocksize, or the more disks your raidz1/2/3 consists of, the more space...
  15. Dunuin

    RAIDZ1 shows wrong space?

    See here: https://web.archive.org/web/20210312232106/https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz Those Directory Storages are probably using datasets and datasets use recordsize instead of volblocksize. And padding...
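The Delphix article linked in this snippet describes how raidz allocates space per block. A sketch of that allocation rule, as I understand it from the article (function name and defaults are my own, not from the quoted posts): data sectors plus parity sectors, rounded up to a multiple of (parity + 1) so leftover gaps remain allocatable — the round-up is the "padding":

```python
import math

def raidz_asize(data_bytes: int, ndisks: int, nparity: int, ashift: int = 12) -> int:
    """Sectors actually allocated for one block on a raidz vdev,
    following the allocation rule described in the Delphix article."""
    sector = 1 << ashift                              # 4K sectors at ashift=12
    d = math.ceil(data_bytes / sector)                # data sectors
    p = nparity * math.ceil(d / (ndisks - nparity))   # parity sectors
    total = d + p
    mult = nparity + 1
    return mult * math.ceil(total / mult)             # padding round-up

# An 8K block on a 6-disk raidz1 with ashift=12:
# 2 data sectors + 1 parity = 3, padded up to 4 -> only half the
# allocation is data, the same ratio as a mirror.
print(raidz_asize(8192, ndisks=6, nparity=1))  # -> 4
```

This is why several posters in these results recommend raising the volblocksize on raidz pools: more data sectors per block dilute the fixed parity-plus-padding cost.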
  16. Dunuin

    use old qcow2 drive for new vm

I would guess you used a raidz1/2/3, and then there is padding overhead when not increasing the volblocksize first, so you end up with something like 25% to 50% of the raw capacity as usable storage. With 5x 3.8TB in a raidz1 and an ashift of 12 you should get 9.5TB of usable capacity with the...
  17. P

    ZFS drive using 40% more available space than given

Thank you! I did not know about this padding overhead. My block size is currently set at 8KB, and I do not have compression enabled. You recommend deleting the zpool, recreating it with a 32KB block size, and enabling compression? All 5 HDDs are 8TB Seagate Ironwolf NAS drives and I intend on running...
  18. Dunuin

    ZFS drive using 40% more available space than given

It's padding overhead, which causes your zvols to consume more space because you use too low a volblocksize. I have explained that a dozen times; just search this forum for "padding overhead". Also a good read on that topic, to understand that padding overhead...
  19. Dunuin

    Out of space but VM storage doesn't add up?

You are probably using a raidz1/2/3 with too low a volblocksize. Then everything stored on a zvol will consume more space than it should because of padding overhead. See here...
  20. Dunuin

    ZFS newbie question

When working with zvols on a raidz1/2/3 pool you also have to take padding overhead into account. Without increasing the volblocksize you will lose the same 50% of raw capacity you would lose with a 6-disk raid10. With an ashift of 12 and 6 disks in raidz1, the volblocksize should be increased to...
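The trade-off this last snippet describes can be made concrete by sweeping the volblocksize for the same 6-disk raidz1 layout. A self-contained sketch using the allocation rule from the Delphix article linked in result 15 (names and the exact sweep values are mine, for illustration):

```python
import math

def raidz_efficiency(volblocksize: int, ndisks: int, nparity: int, ashift: int = 12) -> float:
    """Fraction of the allocated sectors that is actual data for one
    block on a raidz vdev (higher is better; parity + padding is the rest)."""
    sector = 1 << ashift
    d = math.ceil(volblocksize / sector)              # data sectors
    p = nparity * math.ceil(d / (ndisks - nparity))   # parity sectors
    mult = nparity + 1
    alloc = mult * math.ceil((d + p) / mult)          # with padding round-up
    return d / alloc

# 6-disk raidz1, ashift=12: 8K gives mirror-like 50%, larger
# volblocksizes approach the ideal 5/6 of raw capacity.
for vbs in (8192, 16384, 32768, 65536):
    print(vbs, round(raidz_efficiency(vbs, ndisks=6, nparity=1), 2))
# 8192  -> 0.5
# 16384 -> 0.67
# 32768 -> 0.8
# 65536 -> 0.8
```

Per this model, 32K would be the smallest volblocksize that reaches 80% usable on this layout, which matches the general forum advice above to raise the volblocksize on raidz; compression (discussed in result 1) changes the picture further.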