Search results for query: padding overhead

  1. M

    ZFS share

Thank you for sharing this, Neobin. Although it was still not enough information, that answer suggests it's something called "padding overhead", with no confirmation and no digging. It seems that with ZFS in RaidZ you will lose space not only to the data parity but also to storage overhead (another 20%...
  2. Dunuin

    ZFS ashift and SAS 512e vs 4Kn

Block level compression will be worse and VMs on raidz1/2/3 will waste more space because of padding overhead when using ashift=12 and 8K volblocksize. For both you want the volblocksize to be a multiple of the sectorsize/ashift. So there a smaller ashift would help, as the volblocksize then...
  3. Dunuin

    Help please to sort out my storage

What is zpool get ashift ZFS-Data and zfs get volblocksize reporting? I think I see two problems: 1.) you use a raidz1 but probably didn't increase the block size of your ZFS storage before creating your zvols. So you are probably wasting capacity, as every zvol will be bigger than needed...
  4. Dunuin

VM slow - although resources are free

Performance-wise that isn't exactly great either. Don't forget that a hypervisor needs solid IOPS performance and HDDs are terrible in that regard. And IOPS performance scales only with the number of vdevs, not with the number of disks. Despite 4 HDDs you...
  5. Dunuin

    Proxmox with HDD SAS disks

...you would need to at least have a volblocksize that is 8 times the sector size implied by ashift or otherwise you will lose too much capacity because of padding overhead. Ashift=9 would allow you to use 8x 512B, so a 4K volblocksize. With ashift=12 it would be 8x 4K, so a 32K volblocksize. Now let's say you do a...
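The rule of thumb in this result (volblocksize at least 8x the sector size implied by ashift) can be sketched in a few lines; the helper name is mine, not from the post:

```python
# Minimal sketch of the rule of thumb above: on raidz1/2/3, keep the
# volblocksize at least 8x the sector size implied by ashift, or padding
# overhead eats too much capacity. Assumed helper, for illustration only.

def min_volblocksize(ashift: int, factor: int = 8) -> int:
    """Smallest recommended volblocksize in bytes for a given ashift."""
    sector_size = 2 ** ashift        # ashift=9 -> 512B, ashift=12 -> 4K
    return factor * sector_size

print(min_volblocksize(9))   # 4096  -> 4K volblocksize for 512B sectors
print(min_volblocksize(12))  # 32768 -> 32K volblocksize for 4K sectors
```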
  6. Dunuin

Replacing disks with larger ones...

...disk5 ...should work, provided disks 3 to 5 are not smaller than disks 1 and 2 and you choose a volblocksize that is high enough, because otherwise, due to padding overhead, the raidz1 would have less usable capacity than the raid0 made of disks 1 and 2, even with one more disk and the same disk size.
  7. Dunuin

Replacing disks with larger ones...

...So in total only 2TB would be usable, and you would have to raise the block size to at least 16K, because otherwise, due to padding overhead, there would be even less space than you already have now. If you want 4TB with bit rot protection, you would need either: 1.) 4x 2TB...
  8. Dunuin

    High I/O delay

...crippling the performance and causing massive SSD wear. And yes, I know. You can't go much lower with the volblocksize or padding overhead will increase even more, because your raidz3 is too big. Are all 3 nodes running the same workload? Maybe that slow one is just doing more random sync...
  9. Dunuin

    "Move Disk" ZVOL to other zpool, only allocated contents?

...that is pointing to your ssd_pool ZFS pool? Is any of the ZFS pools using a raidz1/2/3? Because then zvols might be bigger because of padding overhead when using too small a volblocksize. The easiest way to move virtual disks between pools would be the "Move Disk" button of the webUI. If you want to...
  10. Dunuin

    ZFS pool layout and limiting zfs cache size

...FLOOR( ( Sectors + DataDisks - 1) / DataDisks ) , CeilFactor) ) / Sectors ) With that you can calculate the parity+padding overhead of any number of disks and any amount of sectors (in other words any volblocksize) of a raidz1: Let's for example use Sectors = 4, DataDisks = 8...
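The truncated spreadsheet formula in this result follows the raidz allocation rule described in the Delphix article linked in result 20; a sketch of that rule, with my own function and variable names, might look like this:

```python
from math import ceil

# Sketch of the raidz allocation rule the spreadsheet formula encodes
# (per the Delphix raidz stripe-width article); names are assumptions.
def raidz_allocated_sectors(sectors: int, data_disks: int, parity: int) -> int:
    """Sectors actually allocated on a raidz for a block of `sectors` sectors.

    Parity sectors are added per stripe, then the total is padded up to a
    multiple of (parity + 1) so that freed space stays allocatable.
    """
    stripes = ceil(sectors / data_disks)            # FLOOR((S + D - 1) / D)
    total = sectors + parity * stripes              # data + parity sectors
    ceil_factor = parity + 1                        # CeilFactor in the formula
    return ceil_factor * ceil(total / ceil_factor)  # pad up to a multiple

# 4-disk raidz1 (3 data disks), ashift=12, 8K volblocksize -> 2 sectors:
alloc = raidz_allocated_sectors(sectors=2, data_disks=3, parity=1)
print(alloc)          # 4 sectors allocated for 2 sectors of data
print(1 - 2 / alloc)  # 0.5 -> 50% combined parity+padding overhead
```

With the snippet's own example of Sectors = 4 and DataDisks = 8 on a raidz1, this gives 6 allocated sectors for 4 data sectors.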
  11. F

    ZFS pool layout and limiting zfs cache size

Sorry for being a pain, but how do you work out which part is parity and which part is padding? Here is the formula for example: =((CEILING($A4+$A$3*FLOOR(($A4+B$3-$A$3-1)/(B$3-$A$3)),2))/$A4-1)/((CEILING($A4+$A$3*FLOOR(($A4+B$3-$A$3-1)/(B$3-$A$3)),2))/$A4) What's baffling me is that the...
  12. Dunuin

    ZFS pool layout and limiting zfs cache size

    You can calculate that. The formulas for padding overhead and parity overhead are in the spreadsheet, as the spreadsheet will calculate those based on the number of data disks, parity disks and number of sectors. But if you want it easy...subtract the parity loss (which is easy to find out) from...
  13. Dunuin

    Running a video management server as a VM?

...devices using raw format, that you could format with NTFS or whatever you want. But keep in mind to increase your volblocksize or you will waste a lot of capacity due to padding overhead when using a raidz1/2/3. And a ZFS pool should always have 20% of free space, so another 20% of capacity lost.
  14. Dunuin

    can not add hard disk: out of space(500)

    There is padding overhead. Of your 32TB of raw capacity you lose 25% because of parity data. Of the remaining 24TB you lose 33% because of padding overhead (when using 4 disk raidz1 with ashift=12 and default 8K volblocksize) so only 16TB left. And a ZFS pool should always have 20% of free...
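The capacity arithmetic in this result can be checked in a few lines, using the numbers from the post (the 20% free-space rule, truncated above, is applied as the last step):

```python
# Checking the arithmetic from the post: 4-disk raidz1, ashift=12,
# default 8K volblocksize, 32TB of raw capacity.
raw = 32.0                            # TB raw capacity
after_parity = raw * 0.75             # 25% lost to parity data -> 24 TB
after_padding = after_parity * 2 / 3  # 33% lost to padding     -> 16 TB
usable = after_padding * 0.8          # keep 20% free           -> 12.8 TB
print(after_parity, after_padding, usable)
```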
  15. Dunuin

    Raidz1 but all space available?

...virtual disks. With default values those 6TB will only result in 2.4TB of real usable storage for VMs. Because 33% of the raw capacity is lost to parity data, another 17% to padding overhead, and because a ZFS pool should always have 20% of free space you lose another 20% of that 50%.
  16. Dunuin

    ZFS Disk config help - New to proxmox from vmware

...slow. And that raidz needs a volblocksize of 16K or higher. Otherwise you will lose 50% of your raw capacity (-33% parity and -17% padding overhead). And of that 50% you again lose 20% because 20% should always be kept free. So only 40% of the raw capacity is actually usable. For a mirror or...
  17. Dunuin

    HowTo: Proxmox VE 7 With Software RAID-1

    ...But read amplification will at least not wear the SSDs ;) Smaller than 32K and you will lose a lot of capacity because of padding overhead. Bigger and the write amplification + read amplification will be even worse when running any workloads like DBs that do writes smaller than your...
  18. Dunuin

    Questions about ZFS/Ceph...How should I move forward?

Just don't use any raidz1/2/3 with too low a volblocksize and this shouldn't be a problem. If you can't increase the volblocksize because of your workload, don't use raidz1/2/3 at all and use a striped mirror instead. ZFS is local storage that is synced via replication every minute or so. Ceph is a...
  19. J

    Questions about ZFS/Ceph...How should I move forward?

...wanted to create a highly available system for my VMs. I don't know much about ZFS or Ceph. I've used ZFS once and had an issue with padding overhead, so my 2TB of data was taking 4.4TB of space. I'm used to hardware raid; this is my first time getting into software raid. I'm also not always...
  20. Dunuin

    ZFS + PVE Disk usage makes no sense

    I can recommend this article on why there is padding overhead and how to calculate the optimal volblocksize: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz