Search results for query: raidz1 padding

  1. Dunuin

    ZFS Pool space utilization

    Search the forum for "padding overhead". When using 5 disks in raidz1 with the default ashift of 12 and default volblocksize of 8K you will lose 60% of the total capacity when using virtual disks for VMs. 5x 1TB disks = 5TB raw capacity (this is what zpool list will show you as capacity) -20%...
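Figures like this follow from how raidz allocates zvol blocks: each block gets one parity sector per stripe, and the whole allocation is rounded up to a multiple of parity+1 sectors. A rough Python model of that rule (a simplification for illustration, not the actual ZFS allocator):

```python
import math

def raidz_alloc_sectors(data_sectors, total_disks, parity):
    """Sectors allocated for one zvol block on a raidz vdev (simplified model):
    data + one parity sector per stripe, padded to a multiple of parity+1."""
    stripes = math.ceil(data_sectors / (total_disks - parity))
    mult = parity + 1
    return math.ceil((data_sectors + parity * stripes) / mult) * mult

# 5-disk raidz1, ashift=12 (4K sectors), 8K volblocksize -> 2 data sectors
alloc = raidz_alloc_sectors(2, 5, 1)
print(alloc)          # 4: 2 data + 1 parity + 1 padding sector
print(2 / alloc)      # 0.5 -> only half of each allocation holds data
```

So at the default 8K volblocksize only half of the raw capacity holds data; stacked with the usual advice to keep ~20% of a pool free, that is presumably how totals like the 60% above are reached.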
  2. Z

    Proxmox with HDD SAS disks

    Thank you for that detailed answer. Still struggling to understand as much as I can... As for the disk info, I've run an fdisk -l command and retrieved these results on the HP 146Gb and HGST 600gb disks: Disk /dev/sdd: 136.73 GiB, 146815737856 bytes, 286749488 sectors Disk model: EH0146FARWD...
  3. Dunuin

    Recommendations on Proxmox install, ZFS/mdadm/something else

    Jup, already optimized the DBs and ZFS as well as I can. Ideally I would for example lower the volblocksize to 16K to match the 16K writes of MySQL, but then my raidz1 would write more and I would lose a lot of capacity because of the increased padding overhead. To decrease the volblocksize without adding...
  4. Dunuin

    ZFS ashift and SAS 512e vs 4Kn

    Block level compression will be worse and VMs on raidz1/2/3 will waste more space because of padding overhead when using ashift=12 and an 8K volblocksize. For both you want the volblocksize to be a multiple of the sectorsize/ashift. So a smaller ashift would help there, as the volblocksize then...
  5. Dunuin

    Help please to sort out my storage

    What is zpool get ashift ZFS-Data and zfs get volblocksize reporting? I think I see two problems: 1.) you use a raidz1 but probably didn't increase the block size of your ZFS storage before creating your zvols. So you are probably wasting capacity, as every zvol will be bigger than needed...
  6. Dunuin

    Proxmox with HDD SAS disks

    Yes, but ashift=9 also has its benefits when all disks are really 512B native. You will have less write and read amplification, as you don't need to increase the volblocksize that much. Let's for example say you want to run a 5 disk raidz1. Here you would need at least a volblocksize that...
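The ashift=9 point can be sketched with a simplified model of raidz allocation (data plus one parity sector per stripe, padded to a multiple of parity+1; `raidz1_loss` is a made-up helper for illustration, not a ZFS tool):

```python
import math

def raidz1_loss(volblock, sector, disks, parity=1):
    """Fraction of allocated space lost to parity+padding (simplified model)."""
    n = volblock // sector                                  # data sectors per block
    stripes = math.ceil(n / (disks - parity))
    alloc = math.ceil((n + parity * stripes) / (parity + 1)) * (parity + 1)
    return 1 - n / alloc

# 5-disk raidz1, 8K volblocksize, comparing sector sizes
print(round(raidz1_loss(8192, 512, 5), 2))    # ashift=9:  0.2 -> just the parity
print(round(raidz1_loss(8192, 4096, 5), 2))   # ashift=12: 0.5 -> parity + padding
```

With 512B sectors an 8K block already spans 16 sectors, so the stripes line up and only the parity is lost; with 4K sectors the same block is only 2 sectors and padding doubles the overhead.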
  7. Dunuin

    Replacing disks with larger ones...

    Yes, but the disks that currently form the raid0 are then supposed to become part of the raidz1. His idea was to expand the 1TB + 1TB raid0 with two additional 2TB disks into a raidz1. As for using "zpool remove" on vdevs, where their data gets redistributed onto the other vdevs, I think that...
  8. Dunuin

    Replacing disks with larger ones...

    You can't turn a raid0 or a mirror into a raidz1. For that you would again have to destroy the pool first and rebuild it. What would probably be possible, if you want to keep a raid0 but make it bigger, would be turning the raid0 into a striped mirror (i.e. raid10). Although you would then need two...
  9. Dunuin

    High I/O delay

    As an addition to what rason wrote: Only throughput performance will scale with the number of disks, and there the PCIe bandwidth might become the bottleneck. No matter how many SSDs you've got, a raidz1/2/3 will only have the IOPS performance of a single disk. As IOPS performance won't scale with...
  10. Dunuin

    "Move Disk" ZVOL to other zpool, only allocated contents?

    Did you check the "Thin provisioning" checkbox of your storage that is pointing to your ssd_pool ZFS pool? Are any of the ZFS pools using a raidz1/2/3? Because then zvols might be bigger due to padding overhead when using a too small volblocksize. Easiest to move virtual disks between pools...
  11. Dunuin

    ZFS pool layout and limiting zfs cache size

    Datasets have no padding loss. Only zvols do. And where do you get the number of "total usable space @ 100% datasets" from? When you get the usable space using the zfs command, then that is not the raw capacity (which is just the sum of the capacities of all the disks forming the raidz1 vdev). The...
  12. Dunuin

    ZFS pool layout and limiting zfs cache size

    "5x 800 GB raidz1 @ 32K volblocksize" in the spreadsheet would be a raidz1 of 5 disks with 8 sectors (32K volblocksize / 4K sectors, because 2^12 bytes for an ashift of 12). Have a look at cell D11 of that table and you will see that it reports a parity+padding loss of 20%. That 20% parity+padding loss consists of...
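The 20% in that cell can be reproduced by hand using the simplified raidz1 allocation rule (data plus one parity sector per stripe, padded up to a multiple of 2 sectors):

```python
import math

# 5-disk raidz1, ashift=12: 32K volblocksize -> 32K / 4K = 8 data sectors
data, parity_disks, disks = 8, 1, 5
stripes = math.ceil(data / (disks - parity_disks))   # 2 stripes of 4 data sectors
parity = parity_disks * stripes                      # 2 parity sectors
alloc = math.ceil((data + parity) / 2) * 2           # pad to a multiple of 2 -> 10
loss = 1 - data / alloc
print(round(loss, 3))                                # 0.2 -> the 20% loss in D11
```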
  13. F

    ZFS pool layout and limiting zfs cache size

    This is awesome!! Thank you so much for explaining it. However, it's not tallying up with what we already discussed previously? You mentioned that there was no `padding loss` if you had `5x 800 GB raidz1 @ 32K volblocksize`. According to the spreadsheet, Column C (because the total number of disks...
  14. Dunuin

    ZFS pool layout and limiting zfs cache size

    I reverse engineered that formula once, but can't find the result. We can try that again: B4 Raidz1 formula: =((CEILING($A4+$A$3*FLOOR(($A4+B$3-$A$3-1)/(B$3-$A$3)),2))/$A4-1)/((CEILING($A4+$A$3*FLOOR(($A4+B$3-$A$3-1)/(B$3-$A$3)),2))/$A4) B4 Raidz2 formula...
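For readability, that cell formula translates roughly into Python as below ($A4 = data sectors per block, $A$3 = parity disks, B$3 = vdev width; this is my translation, so treat it as a sketch):

```python
import math

def b4_raidz1_loss(sectors, disks, parity=1):
    """Rough Python translation of the spreadsheet's B4 raidz1 formula.
    sectors = data sectors per block ($A4), disks = vdev width (B$3),
    parity = number of parity disks ($A$3)."""
    # FLOOR(($A4+B$3-$A$3-1)/(B$3-$A$3)) == ceil(sectors / data_disks)
    stripes = (sectors + disks - parity - 1) // (disks - parity)
    # CEILING(..., 2): allocations are padded to multiples of parity+1
    alloc = math.ceil((sectors + parity * stripes) / (parity + 1)) * (parity + 1)
    overhead = alloc / sectors
    return (overhead - 1) / overhead   # fraction lost to parity+padding

print(b4_raidz1_loss(8, 5))   # 0.2 -> 20% for 5 disks @ 8 sectors (32K / 4K)
```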
  15. F

    ZFS pool layout and limiting zfs cache size

    Sorry for being a pain, but how do you work out which part is parity and which part is padding? Here is the formula for example: =((CEILING($A4+$A$3*FLOOR(($A4+B$3-$A$3-1)/(B$3-$A$3)),2))/$A4-1)/((CEILING($A4+$A$3*FLOOR(($A4+B$3-$A$3-1)/(B$3-$A$3)),2))/$A4) What's baffling me is that the...
  16. Dunuin

    ZFS pool layout and limiting zfs cache size

    You can calculate that. The formulas for padding overhead and parity overhead are in the spreadsheet, as the spreadsheet will calculate those based on the number of data disks, parity disks and number of sectors. But if you want it easy...subtract the parity loss (which is easy to find out) from...
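One way to do that split, using a simplified model of raidz allocation: parity is one sector per stripe, and padding is whatever the round-up to a multiple of parity+1 adds on top (an illustration, not the exact ZFS code):

```python
import math

def raidz1_breakdown(data_sectors, disks, parity=1):
    """Split a raidz block's loss into (parity, padding) fractions (simplified)."""
    stripes = math.ceil(data_sectors / (disks - parity))
    parity_sectors = parity * stripes
    alloc = math.ceil((data_sectors + parity_sectors) / (parity + 1)) * (parity + 1)
    padding_sectors = alloc - data_sectors - parity_sectors
    return parity_sectors / alloc, padding_sectors / alloc

# 5-disk raidz1, ashift=12
print(raidz1_breakdown(8, 5))   # 32K volblocksize: (0.2, 0.0) -> all parity
print(raidz1_breakdown(2, 5))   # 8K volblocksize:  (0.25, 0.25) -> half and half
```

At 32K on 5 disks the whole 20% is parity, matching the earlier statement in this thread that this layout has no padding loss.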
  17. F

    ZFS pool layout and limiting zfs cache size

    Hi, after a whole year again :) I am not sure if you're still around helping people, but I am trying to recreate your numbers using a spreadsheet. I do not understand (now) how you were able to separate out parity from padding in the spreadsheet? Lastly, how do you come up with the 8K random write...
  18. Dunuin

    Running a video management server as a VM?

    There is no raw file. When using ZFS without qcow2 it will use zvols, which are block devices using the raw format, that you can format with NTFS or whatever you want. But keep in mind to increase your volblocksize or you will waste a lot of capacity due to padding overhead when using a raidz1/2/3...
  19. Dunuin

    can not add hard disk: out of space(500)

    There is padding overhead. Of your 32TB of raw capacity you lose 25% because of parity data. Of the remaining 24TB you lose 33% because of padding overhead (when using 4 disk raidz1 with ashift=12 and default 8K volblocksize) so only 16TB left. And a ZFS pool should always have 20% of free...
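The arithmetic in this snippet, written out (assuming 4x 8TB disks and the simplified rule that each 8K block on a 4-disk raidz1 with ashift=12 allocates 4 sectors: 2 data + 1 parity + 1 padding):

```python
# 32 TB raw on a 4-disk raidz1, ashift=12, default 8K volblocksize
raw_tb = 32
after_parity = raw_tb * 3 // 4     # lose 25% to parity -> 24 TB
usable = after_parity * 2 // 3     # lose 33% of the rest to padding -> 16 TB
print(usable)                      # 16 (TB), before reserving ~20% free space
```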
  20. Dunuin

    HowTo: Proxmox VE 7 With Software RAID-1

    You need to create different datasets and add them as different ZFS storages. One storage for each different volblocksize you want to use. Otherwise PVE will use the wrong volblocksize when doing a backup restore or migration between nodes. For my 5 disk raidz1 with an ashift of 12 it's 32K...