Search results for query: padding overhead

  1. Dunuin

    Windows VM Trimming

Padding overhead only affects zvols, as only zvols have a volblocksize. LXCs and PBS use datasets, which use the recordsize instead (and therefore have no padding overhead). One of the many reasons why you usually don't want to use a raidz1/2/3 for storing VMs but a striped mirror instead...
  2. Dunuin

    Windows VM Trimming

    So with an 8K volblocksize and ashift=12 you would lose 50% of the raw capacity (14% because of parity, 36% because of padding overhead). Everything on those virtual disks would consume about 71% more space. To fix that you would need to destroy and recreate those virtual disks. Easiest would be to change...
  3. Dunuin

    Windows VM Trimming

    ...512K/1M volblocksize. This lost raw capacity is indirect. ZFS will tell you that you got 6 of 7 disks usable for data, but because of padding overhead every zvol will be bigger. So 1TB of data on a zvol might consume, for example, 1.66TB of space. Also keep in mind that the volblocksize can only...
  4. Dunuin

    Windows VM Trimming

    Maybe you are using a raidz1/2/3 with the default volblocksize and it is "padding overhead"? Output of zpool list -v and zfs list -o space would give some hints.
  5. Dunuin

    Best disk setup for config

    ...that you shouldn't fill your pool too much. The usual recommendation is to only fill it to 80%. You should also search this forum for "padding overhead", as most people don't understand that any raidz might waste tons of space if the blocksize was chosen too small, and then you might lose those...
  6. Dunuin

    [SOLVED] Intel SSDPF2KX076TZ 7,68TB

    No, that only works from 3 nodes upwards, and preferably many more. It can be done, but you won't get the full 75% because of padding overhead, and the IOPS performance is halved.
  7. Dunuin

    ZFS + Raid: Aufteilung NVMe SSD's

    ...then often, e.g., even data caching in RAM, because the NVMes are fast enough. With a 4-disk raidz1, don't forget to set the block size of the ZFS storage to at least 16K. Possibly even 64K, if you don't want to lose a further 6% of raw capacity to padding overhead.
  8. Dunuin

    RAIDZ mit TrueNAS und NextCloud

    ...but also create the storage with a "Block Size" of 32K instead of the default 8K/16K, because otherwise you will waste a lot of space due to padding overhead. The idea with a hypervisor is to run everything separately for easier manageability, less...
  9. Dunuin

    Recommendations - Proxmox Workstation

    ...Raidz won't be great for IOPS performance, in case your workload cares about such things, and comes with some additional limitations (padding overhead, you can't remove vdevs, ...). Your CoreNVME needs a volblocksize of 16K or even 64K to not waste capacity. Your CoreSpin needs a volblocksize of...
  10. Dunuin

    Disk Konfiguration fürs HomeLab

    ...all of them, from guest down to disk, should only ever get smaller, never larger. Choose a suitable ashift and volblocksize to match the physical sectors and avoid padding overhead. No encryption. Ideally only large sequential async writes and no small random sync writes. Make sure the data aligns well with the...
  11. Dunuin

    RaidZ1 performance ZFS on host vs VM

    And as already said...there is padding overhead. PVE7 uses an 8K volblocksize by default. Combined with a 4-disk raidz1 and ashift=12, this means you lose 50% and not 25% of the raw capacity, as everything written to a zvol will take up 150% of its size. So on top of your data blocks there will be...
  12. Dunuin

    Doubt of space usage zfs pool

    So 32GB used by snapshots. Padding overhead isn't a problem, as it is a mirror and not a raidz1/2/3. But there are 454GB used by refreservation, which means you either: A.) forgot to check the "thin" checkbox when creating that ZFSpool storage, so it's thick provisioned and the virtual disks always...
  13. Dunuin

    Proxmox VE 8.1 released!

    Yes. (Striped) mirrors don't have padding overhead.
  14. Dunuin

    question about ZFS UI vs. cmdline

    My guess would be that you are using a raidz1/2 without increasing the volblocksize. Then it wouldn't be uncommon that storing something like 300GB on a VM's virtual disk would consume something like 562GB of actual space on the pool. Search this forum for "padding overhead".
  15. Dunuin

    Recommendations on the best storage configuration

    When doing that, keep in mind that any raidz requires you to increase the blocksize if you don't want to lose too much capacity due to padding overhead. In this case 16K or even 64K, so running things like DBs that do small IO won't be great (8K = 50% capacity loss; 16K = 33% loss; 64K = 27%...
  16. Dunuin

    [SOLVED] ZFS Size Difference

    LXCs use datasets (so filesystems without block devices underneath them) and padding overhead only affects block devices (zvols). So LXCs won't be affected and you can keep them.
  17. Dunuin

    [SOLVED] ZFS Size Difference

    Please also search this forum for "padding overhead". When using the defaults and not increasing the volblocksize before creating your first VM, you will waste tons of capacity (= only 20% of those 7TB would actually be usable for VM disks; 75% loss because of padding and parity. And of the...
  18. Dunuin

    Switching from HW RAID TO SW RAID

    ...from 8K to something like 256K in case you don't want to lose tons of capacity (only 38% of raw capacity lost instead of 75%) due to padding overhead when running an 8-disk raidz3 with ashift=12. If you don't care that much about performance and more about data integrity, yes...
  19. Dunuin

    How to configure ZFS, 3 drives total - 2 + 1 hot spare

    ...first VM make sure to increase the "Block size" of the ZFS storage from 8K to 16K. Otherwise you will lose an additional TB due to padding overhead even if you can't see this directly. If you already created that VM you would need to do a backup+restore after increasing the block size so the...
  20. Dunuin

    ZFS vs Single disk configuration recomendation

    ...has to write 1000x 8K records (+ 2000x metadata) instead of a single big 8MB record (+ 2x metadata). And datasets are not affected by padding overhead. That's only a zvol thing when used in combination with raidz1/2/3. PVE is just not optimizing anything and using the ZFS defaults everywhere...
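
The percentages quoted across these posts (50%/33%/27% loss for a 4-disk raidz1, 75%/38% for an 8-disk raidz3, writes showing up at 150% of their size) all follow from the same raidz allocation rule: each zvol block is split into sectors of size 2^ashift, one parity sector is added per stripe row, and the total is padded up to a multiple of parity+1. A minimal Python sketch of that rule (an illustration of the commonly described geometry, not ZFS source code; function names are my own):

```python
import math

def raidz_alloc_sectors(volblocksize: int, ashift: int,
                        ndisks: int, parity: int) -> int:
    """Raw sectors a raidz vdev allocates for one zvol block:
    data sectors + parity per stripe row, padded to a multiple
    of (parity + 1)."""
    sector = 1 << ashift
    data = math.ceil(volblocksize / sector)           # data sectors needed
    rows = math.ceil(data / (ndisks - parity))        # stripe rows used
    total = data + rows * parity                      # add parity sectors
    # padding: round up so freed space stays allocatable
    return math.ceil(total / (parity + 1)) * (parity + 1)

def capacity_loss(volblocksize: int, ashift: int,
                  ndisks: int, parity: int) -> float:
    """Fraction of raw capacity lost to parity + padding."""
    raw = raidz_alloc_sectors(volblocksize, ashift, ndisks, parity) << ashift
    return 1 - volblocksize / raw

def apparent_usage(volblocksize: int, ashift: int,
                   ndisks: int, parity: int) -> float:
    """Space ZFS reports as used, since it assumes the ideal
    (ndisks - parity) / ndisks data-to-raw ratio."""
    raw = raidz_alloc_sectors(volblocksize, ashift, ndisks, parity) << ashift
    return raw * (ndisks - parity) / ndisks

# 4-disk raidz1, ashift=12: the old 8K default loses half the raw capacity
for vbs_kib in (8, 16, 64):
    loss = capacity_loss(vbs_kib * 1024, ashift=12, ndisks=4, parity=1)
    print(f"{vbs_kib:>3}K volblocksize -> {loss:.0%} raw capacity lost")
# prints 50%, 33%, 27% for 8K, 16K, 64K
```

With the same geometry, an 8K block on that 4-disk raidz1 allocates 4 sectors (2 data + 1 parity + 1 padding) = 16K raw, which ZFS reports as 12K used, i.e. the 150% apparent usage mentioned above; and for an 8-disk raidz3 the sketch reproduces the 75% loss at 8K versus 38% at 256K.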