Search results for query: raidz1 padding

  1. Dunuin

    Out of space: really ?

Let me guess... Your storage is a raidz1/2/3 and you didn't increase the blocksize before creating your first zvols? If so, this is normal: because of padding overhead, every zvol will consume way more space.
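The padding overhead discussed throughout these threads can be estimated from the commonly described raidz allocation rules. The sketch below is an approximation, not OpenZFS source code: it assumes parity sectors are added per stripe of (ndisks - parity) data sectors and that every allocation is rounded up to a multiple of (parity + 1) sectors.

```python
import math

def raidz_alloc_sectors(volblocksize: int, ndisks: int, parity: int, ashift: int = 12) -> int:
    """Approximate sectors allocated on a raidz vdev for one zvol block.

    Rules assumed here: one parity sector per stripe of (ndisks - parity)
    data sectors, and the total allocation rounded up to a multiple of
    (parity + 1) sectors (the padding).
    """
    sector = 1 << ashift                         # 4096 bytes for ashift=12
    data = math.ceil(volblocksize / sector)      # data sectors per block
    par = math.ceil(data / (ndisks - parity)) * parity
    total = data + par
    remainder = total % (parity + 1)
    if remainder:                                # pad to a multiple of parity+1
        total += (parity + 1) - remainder
    return total

# Default 8K volblocksize on a 4-disk raidz1 with ashift=12:
# 2 data sectors end up occupying 4 allocated sectors, i.e. only 50%
# of raw capacity holds data -- matching the numbers in these posts.
print(raidz_alloc_sectors(8192, 4, 1))  # 4
```

Increasing the volblocksize raises the data-to-allocation ratio, which is why every answer above recommends setting it before creating zvols.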
  2. Dunuin

    Correction of ZFS write amplification

    I did a lot of testing over the years and never got the ZFS write amplification significantly down. - try to avoid encryption if possible (doubles write amplification for whatever reason) - try to avoid CoW on top of CoW - try to avoid nested filesystems - don't use consumer SSDs without PLP as...
  3. Dunuin

    Where is my zpool storage???

Padding overhead only affects zvols on raidz1/2/3 (maybe draid too, not sure about that). But a PBS usually uses datasets, so even with a raidz1/2/3 this wouldn't be a problem. But PBS needs IOPS performance, and no matter how many disks your raidz1/2/3 consists of, it will always be as slow...
  4. Dunuin

    [SOLVED] Testing ZFS performance inside lxc container (mysql)

8K is the ZFS default. And ZFS even warns you if you try to create a 4K volblocksize zvol and recommends using at least 8K. Yeah, raidz1/2/3 is primarily useful as cold storage using datasets. When using zvols you usually also want the IOPS performance and low volblocksize you only get with...
  5. habitats-tech

    [TUTORIAL] If you are new to PVE, read this first; it might assist you with choices as you start your journey

    I will incorporate your suggestions onto the forthcoming subject of storage. I have another two docs in the works for networking and clustering. All targeting small scale setups.
  6. Dunuin

    [TUTORIAL] If you are new to PVE, read this first; it might assist you with choices as you start your journey

At least here it is an issue. ZFS killed 4 consumer SSDs in my homeservers in the last year. And two of those were indeed Crucial TLC SSDs, not a single year in use. And they all were just used as pure system/boot disks without any writes from guests. I don't really care about always replacing them...
  7. Dunuin

    zfs thin provision space usage discrepancies

Search this forum for "padding overhead". You can't use any raidz1/2/3 with the default 8K volblocksize or you will get massive padding overhead, causing everything written to a zvol to consume more space. The smaller your volblocksize or the more disks your raidz1/2/3 consists of, the more space...
  8. Dunuin

    RAIDZ1 shows wrong space?

    The "zpool" command will always show raw capacity (incl. parity) while the "zfs" command will show the size with parity already subtracted. You might also want to search this forum for "padding overhead" because with default values your 20TB pool will only allow you to store 10TB of VM virtual...
  9. Dunuin

    RAIDZ1 resizing

Search this forum for "padding overhead". When storing a zvol with a too low volblocksize on a raidz1/2/3 with too many disks, you will get padding overhead and everything will consume more space. And this volblocksize can only be set once, at creation of a zvol.
  10. Dunuin

    ZFS: Storage space for zvols almost doubles when transferred to raid-z3 pool

Not just +75% on the size of your zvols. You lose 75% of your raw capacity, so 75% of those 150TB: only 25% of those 150TB is actually usable, with 30% lost to parity and 45% lost to padding overhead. So in theory your zvols should be +180% in size. Yes. No, all raidz1/raidz2/raidz3 got this...
  11. Dunuin

    ZFS: Storage space for zvols almost doubles when transferred to raid-z3 pool

The more disks your pool consists of, the bigger your volblocksize will have to be when using raidz1/2/3, otherwise everything will be bigger because of the padding overhead. How many disks does your raidz3 consist of? Example: a 9 disk raidz3 with ashift=12 would mean you lose 75% of the raw...
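The 75% loss quoted for a 9-disk raidz3 can be reproduced with a quick back-of-the-envelope check (a sketch, using the same assumed rule that each raidz allocation is padded up to a multiple of parity+1 sectors):

```python
# 9-disk raidz3, ashift=12 (4 KiB sectors), default volblocksize=8K:
data = 2                           # 8K block / 4K sectors = 2 data sectors
parity = -(-data // (9 - 3)) * 3   # ceil(2/6) stripes * 3 parity sectors = 3
total = data + parity              # 5 sectors before padding
padded = -(-total // 4) * 4        # round up to multiple of parity+1 = 4 -> 8
print(data / padded)               # 0.25: only 25% of raw capacity is data
```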
  12. Dunuin

    28GB Backup, Restore has Taken 4hr 15min

Depends. I benchmarked a lot of possible pool layouts (8/6/4 disk raid10, 3/5/7 disk raidz1, 4/6/8 disk raidz2, ...) for write amplification and finally chose a 5 disk raidz1 for important guests + single disk LVM-Thin for unimportant guests, as this caused less SSD wear than a 6 disk raid10...
  13. Dunuin

    Where is my 1.2TB goes?

Yes, that's padding overhead. With 4 disks in a raidz1 using the default ashift=12 and default volblocksize=8K, you will lose 50% of your raw capacity (or even 60% if you care about performance) when using VMs. To not lose that much space to padding overhead you would need to increase your...
  14. Dunuin

    Festplatten und andere Hardware Konfiguration

Bad idea for DBs like PostgreSQL. With a raidz1/2/3 you have to increase the blocksize, otherwise you get padding overhead and the capacity loss would be no better than with a mirror. With ashift=12 and 3 disks in a raidz1, the volblocksize would have to be at least 16K, for example, and...
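The 16K minimum mentioned above checks out against the padding rule (a sketch under the same assumptions as before: with 4 KiB sectors, a 16K block on a 3-disk raidz1 needs no extra padding):

```python
# 3-disk raidz1, ashift=12: a 16K volblocksize is 4 data sectors.
data = 4
parity = -(-data // 2) * 1     # ceil(4/2) stripes * 1 parity sector = 2
total = data + parity          # 6 sectors, already a multiple of parity+1 = 2
print(data / total)            # ~0.667: no worse than the 2/3 parity-only loss
```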
  15. Dunuin

    use old qcow2 drive for new vm

I would guess you used a raidz1/2/3, and then there is padding overhead when not increasing the volblocksize first, so you end up with something like 25% to 50% of the raw capacity as usable storage. With 5x 3.8TB in a raidz1 and an ashift of 12 you should get 9.5TB of usable capacity with the...
  16. Dunuin

    Out of space but VM storage doesn't add up?

    You are probably using a raidz1/2/3 with a too low volblocksize. Then everything stored on a zvol will consume more space than it should because of padding overhead. See here...
  17. Dunuin

    ZFS newbie question

When working with zvols on a raidz1/2/3 pool you also have to take padding overhead into account. When not increasing the volblocksize you will lose the same 60% of raw capacity you would lose with a 6 disk raid10. With an ashift of 12 and 6 disks in raidz1 the volblocksize should be increased to...
  18. Dunuin

    ZFS newbie question

    VMs don't use filesystems. VMs use zvols = block devices. LXCs use datasets = filesystems. With zvols on raidz1/2/3 you get padding overhead when the volblocksize is too low. Datasets are using a dynamic recordsize. So it matters what you store on it. There is an explanation of the padding...
  19. Dunuin

    ZFS newbie question

    6x 1TB raidz1 with default ashift=12 and 8K volblocksize would result in a usable capacity of 2.4TB for VMs or 4TB for LXCs. Don't forget the padding overhead when using raidz1/2/3 with zvols and the 20% of capacity that should be kept free. To get the padding loss down you would need to...
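The 2.4TB / 4TB figures above follow from simple arithmetic (a sketch; the 50% padding+parity factor for 8K volblocksize on raidz1 and the 20% keep-free rule are taken from the posts themselves):

```python
raw_tb = 6 * 1.0                 # 6x 1TB disks, raw capacity in TB
vm = raw_tb * 0.5 * 0.8          # zvols: 50% left after padding+parity, keep 20% free
lxc = raw_tb * (5 / 6) * 0.8     # datasets: only parity (1 of 6 disks) lost, keep 20% free
print(f"{vm:.1f} TB for VMs, {lxc:.1f} TB for LXCs")
```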
  20. Dunuin

    ZFS with three drives

    Run zpool status to see how the pool is organized. If you see a "raidz-0" or "raidz1-0" then it is a raidz1. It would be totally normal for zpool list to show 5.45T for a 3x 2TB disk raidz1. The zpool command shows the raw capacity (so data+parity). The zfs command shows the capacity usable for...
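The 5.45T figure is just the raw capacity expressed in binary units (a sketch of the unit conversion; drives are sold in decimal terabytes while zpool list reports binary units):

```python
raw_bytes = 3 * 2 * 10**12     # three 2 TB drives, decimal terabytes
tib = raw_bytes / 2**40        # zpool list reports binary (TiB) units
print(f"{tib:.3f}")            # ~5.457, displayed by zpool as 5.45T
```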