Search results for query: padding overhead

  1. leesteken

    Best RAID for ZFS in Small Cluster?

    ...performance then a 4-way mirror has quadruple the read performance and triple the redundancy. RAIDz1 with four drives has a lot of padding overhead and will probably only give you two (instead of three) drives of usable space. A stripe of two mirrors (also four drives) gives twice the...
  2. M

    ZFS storage is very full, we would like to increase the space but...

    So I conducted my experiment: My raidz2 zpool is 8x18 TByte at a volblocksize of 128k. I added a virtio 1GByte raw disk to one of my VMs. From inside the VM, I tried these commands (/dev/urandom is being used to ensure none of the data is compressible and the oflag=direct,sync ensures that the...
  3. J

    Watchdog reboots on Proxmox cluster due to Ceph/Corosync MTU weirdness (drops to 8885)

    Thank you for your reply. After further debugging, I discovered that an OpenStack node had taken the same IP address as a Proxmox node. This caused the other two servers to compete for master status. Additionally, Corosync traffic was running over the same link as the storage network, which we...
  4. fba

    Watchdog reboots on Proxmox cluster due to Ceph/Corosync MTU weirdness (drops to 8885)

Hey, some guessing: KNET reports the data MTU, meaning after deduction of all required headers and padding. When I look at my test cluster, KNET reports 1397 as the data MTU. This makes an overhead of 103 bytes. Maybe KNET aligns the data at a 64-byte boundary; that's why the data MTU in your example...
  5. LnxBil

    [SOLVED] Confused about discrepancy in reported disk usage for same drive

...thin-provisioning, compression and deduplication features. Usage or free space cannot be computed so easily, especially the padding overhead. This always leads to confusion. There are two commands that operate at different levels: zpool and zfs, so the difference in the UI comes down to the...
  6. P

    ZFS/ProxMox Disk Size Weirdness

    I think I see what you're saying. I'm new to ZFS and trying to learn about the things you are mentioning.
  7. LnxBil

    ZFS/ProxMox Disk Size Weirdness

...the usage space changes constantly with the data you store. If you e.g. store small volblock sizes, you get a lot of waste due to padding overhead on ashift=12, and if you use large recordsizes, you can store a lot without padding overhead. This cannot be known beforehand, therefore they don't...
  8. C

    zfs storage woes

Was transferring files. No way to resume the frozen VMs? It's for cold storage. 50% overhead? Only from block size? Could you link the posts?
  9. leesteken

    zfs storage woes

Restore from backup and re-run the action that failed. How can we know what the VM was doing at the time? It's probably padding (and maybe a little ZFS metadata overhead) due to the number of drives being a poor match for the block size. Assuming that you used RAIDz1 (or RAIDz2 or RAIDz3), which...
  10. leesteken

    full virtual (pbs)disk.

    The disk was probably thin provisioned and the VM wanted to use space that wasn't actually available on the storage, which causes I/O errors. This happens to VMs of people every so often on this forum. Depending on the storage type, the available space might be less than expected because of...
  11. S

    [TUTORIAL] Understanding Proxmox ZFS HD and Disk Usage Display

...1 TB you can still only use 1 TB, but on 13 x 1 TB, you use 10 TB (this is physical disk space, not what actually will be usable). "Padding overhead" can arise (I liked this writeup); also, for example, RAIDZ3 is recommended to be used with 11 disks or more for less (in the details it is way...
  12. M

    ZFS RAIDZ Pool tied with VM disks acts strange

So does this mean I should just use RAID10 for VMs?
  13. leesteken

    ZFS RAIDZ Pool tied with VM disks acts strange

RAIDz probably does not have the space you think it has, and it tells you. Due to padding and metadata overhead, people are often disappointed (on this forum) by the usable space on a RAIDz1/2/3. This is a common ZFS thing. (d)RAIDz1/2/3 is also often disappointing for running VMs on, as people...
  14. I

    Blocksize / Recordsize / Thin provision options

Trying to turn all parameter values that can have an effect on the zvol into an example, I came up with the following: even though compression is enabled, I won't include it in the calculation, even though I should (I don't know how, though). Also, the drives are SSDs, so we just simulate...
  15. D

    ZFS reporting drastically different numbers

My initial research is pointing at padding overhead (and it is possible thin provisioning didn't get enabled at creation), but this also seems wildly off compared to other examples. The drive called "zfs" is made up of six 1.96TB SSD drives in a RAIDZ. When I go to the summary for the zfs...
  16. leesteken

    ZFS on ZFS

You'll have write amplification on top of write amplification. And RAIDz1 has padding overhead with so few drives, and you'll probably have a mismatching volblocksize, losing a lot of space. Maybe run the VM on LVM instead? Or run the software in a container (if it is based on Linux)? Or simply...
  17. I

    Windows VM Trimming

That's fine. The setup is an experiment. Once I'm happy with it, I plan on swapping them out for Microns, which do have PLP and handle the load better. But thanks for the advice :)
  18. Dunuin

    Windows VM Trimming

Padding overhead only affects zvols, as only zvols have a volblocksize. LXCs and PBS use datasets, which use the recordsize instead (and therefore have no padding overhead). One of the many reasons why you usually don't want to use a raidz1/2/3 for storing VMs, but a striped mirror instead...
  19. I

    Windows VM Trimming

I am playing with it now, thank you. Side question, though: does this also happen with Proxmox Backup servers? On that scale, with 3 stripes of 12 drives on raidz, it would be hard for me to pick up by myself. How can one check if padding is an issue in that array too?
  20. Dunuin

    Windows VM Trimming

So with an 8K volblocksize and ashift=12 you would lose 50% of raw capacity (14% because of parity, 36% because of padding overhead). Everything on those virtual disks would consume 71% more space. To fix that you would need to destroy and recreate those virtual disks. Easiest would be to change...
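The 50% figure in that last result can be reproduced with a short back-of-the-envelope calculation. The sketch below assumes a 7-disk RAIDZ1 (an assumption, chosen because it matches the quoted 14% parity share) and uses the commonly cited simplified RAIDZ allocation model: one parity set per stripe of data sectors, with each allocation padded to a multiple of parity+1 sectors.

```python
import math

def raidz_alloc(volblocksize, ashift, width, parity):
    """Raw sectors allocated for one zvol block on RAIDZ (simplified model)."""
    sector = 1 << ashift                      # ashift=12 -> 4K sectors
    data = math.ceil(volblocksize / sector)   # data sectors per block
    # one parity set per stripe of (width - parity) data sectors
    par = math.ceil(data / (width - parity)) * parity
    # RAIDZ pads each allocation to a multiple of (parity + 1) sectors
    pad_to = parity + 1
    total = math.ceil((data + par) / pad_to) * pad_to
    return data, total

# 8K volblocksize, ashift=12, assumed 7-disk RAIDZ1
data, total = raidz_alloc(8 * 1024, 12, width=7, parity=1)
print(f"8K blocks:   {data} of {total} raw sectors usable ({data/total:.0%})")

# a larger volblocksize on the same pool wastes far less
data, total = raidz_alloc(128 * 1024, 12, width=7, parity=1)
print(f"128K blocks: {data} of {total} raw sectors usable ({data/total:.0%})")
```

With 8K blocks, only 2 of every 4 raw sectors hold data (50%), versus the 6/7 ≈ 86% you would naively expect from a 7-disk RAIDZ1; the missing 36 points are the padding overhead, and 0.86/0.50 ≈ 1.71 is where the "71% more space" comes from.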