Search results for query: padding overhead

  1. Dunuin

    Doubt of space usage zfs pool

So 32GB is used by snapshots. Padding overhead isn't a problem, as it is a mirror and not raidz1/2/3. But there are 454GB used by refreservation, which means you either: A.) forgot to check the "thin" checkbox when creating that ZFS pool storage, so it's thick-provisioned and the virtual disks always...
  2. SInisterPisces

    Proxmox VE 8.1 released!

    Excellent! Y'know, I'm just about to turn 40; this is one of the few areas in my life where I'm not having to be increasingly concerned with "padding overhead." My doctor would like me to lose about ten pounds of it.
  3. Dunuin

    Proxmox VE 8.1 released!

Yes. (Striped) mirrors don't have padding overhead.
  4. Dunuin

    question about ZFS UI vs. cmdline

My guess would be that you are using raidz1/2 without increasing the volblocksize. Then it wouldn't be uncommon for storing something like 300GB on a VM's virtual disk to consume something like 562GB of actual space on the pool. Search this forum for "padding overhead".
  5. S

    Recommendations on the best storage configuration

I didn't understand what you want to tell me. Is raidz 10 different from raid 10? What I want to do is a striped mirror.
  6. Dunuin

    Recommendations on the best storage configuration

When doing that, keep in mind that any raidz requires you to increase the block size if you don't want to lose too much capacity to padding overhead. In this case 16K or even 64K, so running stuff like DBs that do small IO won't be great (8K = 50% capacity loss; 16K = 33% loss; 64K = 27%...
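The percentages quoted in these posts can be reproduced with the raidz allocation rule from OpenZFS's `vdev_raidz_asize()`: data sectors, plus one parity sector per parity level per stripe row, with the total padded up to a multiple of (parity + 1) sectors. A minimal sketch (the disk counts and ashift=12 below are assumptions chosen to match the figures quoted in this thread, not something stated by the posters):

```python
import math

def raidz_asize(psize, ndisks, nparity, ashift=12):
    """Raw bytes a raidz vdev allocates for one block of `psize` logical
    bytes, modeled on OpenZFS vdev_raidz_asize()."""
    sector = 1 << ashift
    data = math.ceil(psize / sector)             # data sectors
    rows = math.ceil(data / (ndisks - nparity))  # stripe rows
    total = data + nparity * rows                # add parity sectors
    # pad the allocation up to a multiple of (nparity + 1) sectors
    total = math.ceil(total / (nparity + 1)) * (nparity + 1)
    return total * sector

def capacity_loss(volblocksize, ndisks, nparity, ashift=12):
    """Fraction of raw capacity lost to parity + padding for zvols
    written with the given volblocksize."""
    return 1 - volblocksize / raidz_asize(volblocksize, ndisks, nparity, ashift)

# e.g. a 4-disk raidz1 at ashift=12 gives ~50%/33%/27% loss for
# volblocksize 8K/16K/64K, and an 8-disk raidz3 gives ~75%/43%/38%
# for 8K/64K/256K -- matching the figures quoted in these posts.
for vbs in (8, 16, 64):
    print(f"{vbs}K raidz1/4: {capacity_loss(vbs * 1024, 4, 1):.0%} lost")
```

Note that loss drops as the volblocksize grows because the fixed padding and per-row parity are amortized over more data sectors, which is exactly the trade-off against small-IO workloads like databases described above.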
  7. Dunuin

    [SOLVED] ZFS Size Difference

LXCs use datasets (so filesystems, without block devices underneath them), and padding overhead only affects block devices (zvols). So LXCs won't be affected and you can keep them.
  8. M

    [SOLVED] ZFS Size Difference

Thanks, saw this post by you: https://forum.proxmox.com/threads/how-to-configure-zfs-3-drives-total-2-1-hot-spare.138369/post-617345 Now if I already have VMs, I assume I need to back up and remove all the VMs and restore them after I change the blocksize, right?
  9. Dunuin

    [SOLVED] ZFS Size Difference

Please also search this forum for "padding overhead". When using the defaults and not increasing the volblocksize before creating your first VM, you will waste tons of capacity (= only 20% of those 7TB would be actually usable for VM disks; 75% loss because of padding and parity). And of the...
  10. Dunuin

    Switching from HW RAID TO SW RAID

...from 8K to something like 256K in case you don't want to lose tons of capacity (only 38% raw capacity lost instead of 75%) due to padding overhead when running an 8-disk raidz3 with ashift=12. If you don't care that much about performance and more about data integrity, yes...
  11. J

    How to configure ZFS, 3 drives total - 2 + 1 hot spare

Great, thanks! I tried re-creating the raid with ashift=16; however, the total storage was only 2.90TB when trying to add an HDD to the VM, while with ashift=12 the total storage was 3.87TB. I'm only going to be using the ZFS storage for a data disk on the VM, as I already have the VM...
  12. Dunuin

    How to configure ZFS, 3 drives total - 2 + 1 hot spare

...first VM, make sure to increase the "Block size" of the ZFS storage from 8K to 16K. Otherwise you will lose an additional TB due to padding overhead, even if you can't see this directly. If you already created that VM, you would need to do a backup+restore after increasing the block size so the...
  13. Dunuin

    ZFS vs Single disk configuration recomendation

...has to write 1000x 8K records (+ 2000x metadata) instead of a single big 8MB record (+ 2x metadata). And datasets are not affected by padding overhead. That's only a zvol thing, when used in combination with raidz1/2/3. PVE is just not optimizing anything and uses the ZFS defaults everywhere...
  14. Z

    ZFS vs Single disk configuration recomendation

    ...you can decrease the "recordsize" all the way to about 8kB. Any less than that doesn't really make sense. And at 8kB you have huge padding overhead. Things get more complicated when you use your ZFS drives not to store files, but to carve them up for use by virtual disk devices. This is what...
  15. H

    ZFS vs Single disk configuration recomendation

I'm quite a newbie with this kind of solution; I've always used single disks. Is the best option to run ZFS with a mirror setup for the 8 NVMEs?
  16. Dunuin

    ZFS vs Single disk configuration recomendation

...require that you increase the block size from 8K (75% capacity loss) to 64K (43% capacity loss) or even 256K (38% capacity loss), or padding overhead will be big. And IOPS performance only scales with the number of vdevs, not the number of disks. So only IOPS performance like a single disk, and...
  17. Dunuin

    Proxmox 8.1 - Weird restore actions

Really bad idea. Raidz is pretty terrible for storing VMs. You either have to increase the volblocksize, so running DBs will suck, or padding overhead will waste tons of space. It also won't help with IOPS performance, as this scales with the number of vdevs and not the number of disks. And consumer SSDs...
  18. R

    [SOLVED] Frage zu den Snapshots und dem Backup-Server

Oops. Sorry ;) So currently there are six 3TB disks. These will of course be swapped for larger ones later. But there will never be more than six, as the system can't take more. As I said, it's mostly Office documents, PDFs and images sitting there. Basically all the clutter that, under one's own...
  19. Dunuin

    [SOLVED] Frage zu den Snapshots und dem Backup-Server

...you would have to raise your block size to at least 32K, since otherwise you wouldn't have any more space than with a raid10 (keyword: padding overhead). So raidz1/2 is certainly doable (I run it here myself because the budget didn't allow for raid10), but it's not ideal either...
  20. LnxBil

    Proxmox 8.0 Tuning and best practices for production/colocated servers.

...(when it knows the actual recordsize, up to 128k), let alone when it does not know what is going on with volblocksize 8 with respect to the padding overhead with raidzX. In a performance setting, I'd always and exclusively go with RAID10 (or striped mirrors in the ZFS sense). Also a single ZFS...