Search results for query: padding overhead

  1. Neobin

    Slow disk freezing the system randomly

    As a side note: the CT1000P1SSD8 as well as the CT1000BX500SSD1 are both hot garbage for things like ZFS. Not only are they cheap consumer SSDs without PLP, they also use QLC NAND. Also keep the cons of a raidz in mind, e.g. padding overhead and the IOPS of only a single drive...
  2. Dunuin

    use old qcow2 drive for new vm

    I would guess you used a raidz1/2/3, and then there is padding overhead when not increasing the volblocksize first, so you end up with something like 25% to 50% of the raw capacity as usable storage. With 5x 3.8TB in a raidz1 and an ashift of 12 you should get 9.5TB of usable capacity with the...
  3. Dunuin

    Zfs Speicherplatz?

    ...all your zvols take up a massive amount of space, since every zvol consumes twice as much space as it actually needs. You should search the forum for "padding overhead". Because with a block size of 8K, fewer zvols would fit on the pool than if you had simply created a much faster raid10...
  4. P

    ZFS drive using 40% more available space than given

    I really appreciate the help. Thank you! That makes everything clear to me. Have a great day.
  5. Dunuin

    ZFS drive using 40% more available space than given

    With a 5-disk raidz2 I would at least set the volblocksize to 32K. But even with that you are still losing some capacity to padding overhead. To really get rid of that padding overhead the volblocksize would have to be way higher, like 128K, but that would also be a bad idea because everything...
  6. P

    ZFS drive using 40% more available space than given

    Thank you! I did not know about this padding overhead. My block size is currently set at 8KB, and I do not have compression enabled. Do you recommend deleting the zpool, recreating it with a 32KB block size, and enabling compression? All 5 HDDs are 8TB Seagate IronWolf NAS drives and I intend on running...
  7. Dunuin

    ZFS drive using 40% more available space than given

    It's padding overhead, which causes your zvols to consume more space because you are using too low a volblocksize. I have explained that a dozen times. Just search this forum for "padding overhead". There is also a good read on that topic to understand that padding overhead...
  8. Dunuin

    Out of space but VM storage doesn't add up?

    ...using a raidz1/2/3 with too low a volblocksize. Then everything stored on a zvol will consume more space than it should because of padding overhead. See here...
  9. Dunuin

    ZFS newbie question

    When working with zvols on a raidz1/2/3 pool you also have to take padding overhead into account. If you don't increase the volblocksize, you will lose the same 60% of raw capacity you would lose with a 6-disk raid10. With an ashift of 12 and 6 disks in raidz1 the volblocksize should be increased to...
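Several of these results quote the same rule of thumb without showing the arithmetic. As a rough sketch (the function names are my own, and it assumes the commonly described raidz allocation rule: one set of parity sectors per data stripe, with the total allocation padded up to a multiple of parity + 1), the space multiplier for a given volblocksize can be estimated like this:

```python
import math

def raidz_alloc_sectors(data_sectors: int, disks: int, parity: int) -> int:
    """Sectors a raidz vdev allocates for one block of `data_sectors` sectors.

    Each stripe of up to (disks - parity) data sectors gets `parity` parity
    sectors, and the total is rounded up to a multiple of (parity + 1).
    """
    stripes = math.ceil(data_sectors / (disks - parity))
    total = data_sectors + stripes * parity
    return math.ceil(total / (parity + 1)) * (parity + 1)

def space_multiplier(volblocksize: int, ashift: int, disks: int, parity: int) -> float:
    sector = 1 << ashift                      # e.g. ashift=12 -> 4K sectors
    data = volblocksize // sector             # data sectors per zvol block
    return raidz_alloc_sectors(data, disks, parity) / data

# 6-disk raidz1 with ashift=12: the default 8K volblocksize allocates
# 2x the logical size, while 32K gets close to the ideal 6/5 parity cost.
print(space_multiplier(8 * 1024, 12, 6, 1))   # 2.0
print(space_multiplier(32 * 1024, 12, 6, 1))  # 1.25
```

A multiplier of 2.0 means a zvol consumes half the pool's usable space again in parity and padding, which is where the repeated "same loss as a raid10" comparison in these threads comes from.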
  10. Dunuin

    ZFS newbie question

    VMs don't use filesystems; VMs use zvols = block devices. LXCs use datasets = filesystems. With zvols on raidz1/2/3 you get padding overhead when the volblocksize is too low. Datasets use a dynamic recordsize, so it matters what you store on them. There is an explanation of the padding...
  11. Dunuin

    ZFS newbie question

    ...with default ashift=12 and 8K volblocksize would result in a usable capacity of 2.4TB for VMs or 4TB for LXCs. Don't forget the padding overhead when using raidz1/2/3 with zvols and the 20% of capacity that should be kept free. To get the padding loss down you would need to increase the...
  12. Dunuin

    ZFS with three drives

    ...a zpool list should show 6TB and a zfs list should show 4TB. Also keep in mind that you will probably lose another 1TB to padding overhead when using zvols and not increasing your volblocksize to at least 16K. So only 3TB usable for zvols (but 4TB for datasets). And a ZFS pool...
  13. V

    ZFS Pool space utilization

    I did not know about padding overhead. ZFS is still relatively new to me. I'll look into that. Thank you Dunuin.
  14. Dunuin

    ZFS Pool space utilization

    Search the forum for "padding overhead". When using 5 disks in raidz1 with the default ashift of 12 and default volblocksize of 8K you will lose 60% of the total capacity when using virtual disks for VMs. 5x 1TB disks = 5TB raw capacity (this is what zpool list will show you as capacity) -20%...
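The 60% figure in this result can be reproduced with simple arithmetic. A minimal sketch, under the snippet's own assumptions (5x 1TB raidz1, ashift=12, 8K volblocksize, 20% of the pool kept free):

```python
# 5-disk raidz1, ashift=12 (4K sectors), volblocksize=8K:
# each 8K block is 2 data sectors + 1 parity sector, padded up to 4 sectors,
# so every zvol consumes twice its logical size.
raw_tb = 5 * 1.0                      # 5x 1TB disks = 5TB raw capacity
overhead = 4 / 2                      # allocated sectors / data sectors
usable = raw_tb / overhead            # 2.5 TB addressable by zvols
after_free = usable * (1 - 0.20)      # keep 20% free -> 2.0 TB
lost = 1 - after_free / raw_tb        # fraction of raw capacity lost
print(lost)                           # 0.6 -> the quoted 60% loss
```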
  15. Z

    Proxmox with HDD SAS disks

    Thank you for that detailed answer. Still struggling to understand as much as I can... As for the disk info, I've run fdisk -l and got these results on the HP 146GB and HGST 600GB disks: Disk /dev/sdd: 136.73 GiB, 146815737856 bytes, 286749488 sectors Disk model: EH0146FARWD...
  16. R

    Proxmox VE PreInstall Sanity Check

    mr44er posted a Wayback Machine link. I clearly need to do some more research on the best combination of settings here. The SSD I'm using has a 4096-byte block size and, according to what I've seen so far, ashift 12 is correct for it. I found one article saying that matching your block size to the...
  17. Dunuin

    Proxmox VE PreInstall Sanity Check

    That's sad. That was an article by the ZFS head developer explaining padding overhead at the block level :( That depends on the sector size of the disks, the ashift you choose, the number of disks per vdev, the number of striped vdevs, whether you care more about performance or capacity, ... The...
  18. R

    Proxmox VE PreInstall Sanity Check

    That link seems to be broken, but I did some research into the topic. What would be a good volblocksize for our application? Is there any downside to going larger and larger? The 8TB I listed above was the total raw capacity, so there are only eight 1TB drives in that server. Same with the others. Only...
  19. Dunuin

    Proxmox VE PreInstall Sanity Check

    Did you read about padding overhead when using any raidz? You will most likely need to increase the volblocksize to minimize the padding overhead stealing capacity, and this will make small random reads and all small writes very slow. So not great for running stuff like PostgreSQL or MySQL DBs...
  20. Dunuin

    Recommendations on Promox install, ZFS/mdadm/somehting else

    ...to 16K to match the 16K writes of MySQL, but then my raidz1 would write more and I would lose a lot of capacity because of the increased padding overhead. To decrease the volblocksize without adding more padding overhead I would need to switch to a striped mirror. But then I get 50% instead of only...
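The raidz-vs-mirror tradeoff in this last result can be illustrated numerically. A hedged sketch (the 3-disk raidz1 is my assumption, since the thread snippet does not state the vdev width; the raidz rule used is the commonly described one: parity per stripe plus padding to a multiple of parity + 1):

```python
import math

# Allocated sectors per block on a 3-disk raidz1 with ashift=12 (4K sectors),
# compared against a 2-way mirror, for a few volblocksize values.
def raidz1_alloc(data_sectors: int, disks: int = 3) -> int:
    stripes = math.ceil(data_sectors / (disks - 1))
    total = data_sectors + stripes       # one parity sector per stripe
    return math.ceil(total / 2) * 2      # pad to a multiple of parity+1 = 2

for vbs_kib in (8, 16, 32):
    data = vbs_kib * 1024 // 4096        # data sectors per zvol block
    raidz = raidz1_alloc(data) / data    # space multiplier on raidz1
    mirror = 2.0                         # a mirror always writes two copies
    print(f"{vbs_kib}K: raidz1 x{raidz:.2f}, mirror x{mirror:.2f}")
```

At 8K both layouts cost 2x; from 16K upward the raidz1 drops toward its parity floor, which is why lowering the volblocksize to match small database writes pushes you back toward mirror-level capacity loss.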