Search results for query: raidz1 padding

  1. Dunuin

    VMs disk growing beyond the allocated size

    Is that "pool01" ZFS pool a raidz1/2/3? Then it might be padding overhead if your volblocksize is too small. What's the output of zpool list pool01 and zfs get volblocksize pool01/vm-100-disk-1?
  2. Dunuin

    Storage

    In case your pool "ALMACENDATOS" was created with ashift=12 (you can check that with zfs get ashift ALMACENDATOS) and 4 disks in a raidz1, your volblocksize needs to be at least 16K. It basically looks like this: Parity+Padding loss: Usable raw capacity for zvols: Volblocksize: 4K/8K 50%...
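The parity-plus-padding table the snippet above cuts off can be approximated with a short sketch. This models the RAIDZ allocation rule described throughout this thread (parity sectors per stripe row, allocations rounded up to a multiple of parity+1 sectors); `raidz_alloc_sectors` is a hypothetical helper name, not a ZFS API:

```python
def raidz_alloc_sectors(data_sectors: int, ndisks: int, parity: int) -> int:
    """Approximate raw sectors consumed to store `data_sectors` on a RAIDZ vdev."""
    data_per_row = ndisks - parity           # data sectors per full stripe row
    rows = -(-data_sectors // data_per_row)  # ceil division: rows needed
    total = data_sectors + rows * parity     # data plus one parity sector per row
    pad_to = parity + 1                      # RAIDZ pads allocations to p+1 sectors
    return -(-total // pad_to) * pad_to

# 4-disk raidz1, ashift=12 (4K sectors), volblocksize=8K => 2 data sectors:
# 2 data + 1 parity = 3, padded up to 4 sectors, i.e. 16K raw per 8K of data (50%).
print(raidz_alloc_sectors(2, 4, 1))  # 4
# volblocksize=16K => 4 data sectors: 4 + 2 parity = 6 sectors (67% efficient).
print(raidz_alloc_sectors(4, 4, 1))  # 6
```

The same function reproduces the other numbers quoted in these results, e.g. a 6-disk raidz1 at 8K also allocates 4 sectors per block (50% efficient), which is why wider vdevs need a larger volblocksize.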
  3. T

    Storage

    Thank you very much for your help, I can't see what the correct values are... ********** root@ALMACENDATOS:~# zpool list NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT ALMACENDATOS 14.5T 13.2T 1.35T - 39% 90% 1.00x ONLINE - rpool...
  4. F

    Strong i/o delays on ZFS VMs

    Thank you for responding. This is what I know: zpool status pool: zshare state: ONLINE scan: scrub repaired 0B in 01:11:52 with 0 errors on Sun Feb 13 01:35:54 2022 config: NAME STATE READ WRITE CKSUM zshare...
  5. Dunuin

    Strong i/o delays on ZFS VMs

    ZFS has a lot of overhead, and the low IOPS performance of the HDDs can easily be the bottleneck causing very high IO delay, especially if you use a raidz1/2/3 instead of a striped mirror, since there the IOPS performance won't scale with the number of disks. What does your pool look like (zpool...
  6. Dunuin

    Storage

    I would guess you are using a raidz1/2/3 and didn't increase the blocksize of your pool (or in other words, you are using too low a volblocksize for your zvols). In that case you get a lot of padding overhead and everything will need way more storage. If that's the case, search the forum for...
  7. Dunuin

    High SSD wear after a few days

    It also highly depends on your workload. There are a lot of people here running their homeservers with consumer SSDs and ZFS for years without that much disk wear. What's really killing SSDs are small sync writes. If you only have async writes and guests that don't write much, you might be...
  8. Dunuin

    Where did my disk space go?

    You should read more about how ZFS works. VMs use zvols, which are block devices, so they are not part of any filesystem; you can't find them with ls, and df/du can't see them either. LXCs use datasets, which are filesystems, and you can see them because they are mounted at your root...
  9. Dunuin

    (7.1) ZFS Performance issue

    You want your pool's block size (or in other words, your zvols' volblocksize) way bigger than the sector size (or in other words, the ashift you have chosen) of your disks, or you will lose most of your capacity to padding overhead. You can't directly see the padding overhead because it's...
  10. Dunuin

    local-zfs volume is full and I don't understand why

    I guess you are using a raidz1/2/3? There it is normal for the virtual disk (zvol) to need a multiple (like 133%/150%/200%) of the space used by the VM if your volblocksize is too low, because of padding overhead.
  11. Dunuin

    Create a 100% sized disk for a VM

    With a raidz you always get padding overhead when using too small a volblocksize. Using the default 8K volblocksize with ashift=12 and 3x 3TB disks in a raidz1, you basically lose 33% of your raw storage to parity and an additional 17% of your raw capacity to padding overhead. You can't directly see...
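The 33% parity / 17% padding split quoted above can be checked with back-of-the-envelope arithmetic (a sketch, assuming the 4K sectors implied by ashift=12):

```python
# 3x 3TB raidz1, ashift=12 (4K sectors), volblocksize=8K (2 data sectors).
# Each 8K block is stored as 2 data + 1 parity = 3 sectors, padded up to a
# multiple of parity+1 = 2 sectors, so 4 sectors (16K) of raw space per 8K of data.
efficiency = (2 * 4096) / (4 * 4096)           # 0.5: only half the raw pool is usable
parity_loss = 1 / 3                            # one of three disks is parity
padding_loss = (1 - parity_loss) - efficiency  # what padding costs on top of parity
print(f"parity {parity_loss:.0%}, padding {padding_loss:.0%}")  # parity 33%, padding 17%
```

Together that leaves the 50% usable raw capacity figure mentioned elsewhere in these results.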
  12. Dunuin

    3x6to in RAIDZ1 and proxmox 100% full with a 4.7To copy file

    Google for 'volblocksize' and padding overhead. If you got 3x 6TB as raidz1 with volblocksize=8K and ashift=12, you only get 9TB usable capacity for zvols, of which only 7.2TB should be used, because 10-20% of a ZFS pool should always be kept free or ZFS will get slow and finally stop...
  13. Dunuin

    optimal use of 6 x 2TB nvme

    Postgres writes with an 8K blocksize. With 6 disks in raidz1 you would need to increase the volblocksize from 8K to 32K, and with raidz2 to 16K (both in case of ashift=12), because otherwise you would lose 50% of your total storage for raidz1 or 67% for raidz2 because of padding overhead...
  14. V

    Question regarding LVM/ZFS

    This is the first time I've played with software raid, so this is all new to me. Thanks for the explanation. I'm going to have to wrap my mind around it. That being said, at this moment, do I need to fix anything? Did I over provision storage? And if I overfill it, will it stop me from causing...
  15. Dunuin

    Question regarding LVM/ZFS

    Basically, with 4x 2TB disks in a raidz1 and an ashift of 12 you get this: 8TB raw storage, where you will lose 2TB to parity, so you only get 6TB. "zpool" always shows the raw storage (so the full 8TB, even if 8TB aren't usable). The "zfs" command will always show the raw storage minus parity, so...
  16. Dunuin

    ZFS Raid 10 mirror and stripe or the opposite

    Yes, I still need to summarize that in a less confusing way in a blog post. Yep. But as you said, it's not that important if you just run the system on those SSDs. Then the SSDs are always idling anyway. The correct benchmark would be to use the same fio test with just "--bs=16K" and run that on 4...
  17. I

    ZFS Raid 10 mirror and stripe or the opposite

    Thank you for your quick response and great insight about my considerations. I'll add to what you have answered, beginning from the bottom to the top, since this way I'll end up with the main topic of this post, RAID. So you agree with me that in my use case scenario ashift=9 is the best option. I...
  18. Dunuin

    ZFS Raid 10 mirror and stripe or the opposite

    A striped mirror (the ZFS term for RAID10) is multiple mirrors (they don't need to be 2 disks; you can also have 3 or 4 disks in a mirror so that 2 or 3 disks of the mirror might fail without losing data) striped together. But striping works differently compared with traditional RAID. I believe it is...
  19. Dunuin

    ZFS pool layout and limiting zfs cache size

    Sorry, I did that in my head, so there is no spreadsheet I could share. For datasets the usable capacity is: (TotalNumberOfDrives - ParityDrives) * DiskSize * 0.8. The "* 0.8" is because 20 percent of the pool should be kept free. So an 8x 1.8TB disk raidz1 would be: (8 - 1) * 1.8TB * 0.8 = 10.08 TB. And an 8x...
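The rule of thumb above fits in a one-liner. This is a sketch of that formula only (`usable_dataset_capacity` is a hypothetical name); it applies to datasets, since zvols additionally suffer the padding overhead discussed in the other results:

```python
def usable_dataset_capacity(ndisks: int, parity_disks: int, disk_tb: float,
                            keep_free: float = 0.2) -> float:
    """(TotalNumberOfDrives - ParityDrives) * DiskSize * 0.8, per the post above."""
    return (ndisks - parity_disks) * disk_tb * (1 - keep_free)

# 8x 1.8TB disks in raidz1: (8 - 1) * 1.8 * 0.8
print(f"{usable_dataset_capacity(8, 1, 1.8):.2f} TB")  # 10.08 TB
```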
  20. F

    ZFS pool layout and limiting zfs cache size

    Yes, but for people like me who are new to ZFS, it will give them a good starting point? I mean, something is better than absolutely nothing? We can incorporate the questions you raised as well as mine and give them some recommendations to look at? It's like a decision-tree type Q&A and then boom, here...