Search results for query: raidz padding

  1. H

    zvol using much more space than is attributable to parity + padding

    Hello everybody, I have a zvol with a volsize of 3.91T. This zvol is attached as a data disk to a VM that works as a file server (Samba/NFS). Here are the details of df -hT from inside the VM: 2.4T used out of 3.9T. The zvol has volblocksize = 8K, and is on a pool with only one RAIDZ-1 of four...
  2. Dunuin

    [SOLVED] Storage usage in the hypervisor larger than in the VM (ZFS RAID-Z2, no SSDs), trim problem (discard option cannot be activated on a cryptLUKS disk)

    Parity is the extra data you want so that nothing is lost when an HDD fails. Padding is the overhead you get because the data from the VM has to be distributed across 4 HDDs somehow, and because the block sizes between the HDD's LBA, the volblocksize, and the ashift of...
  3. A

    ZFS wrong space available

    Don't know, but to me it sounds more like the padding issue on RAIDZ, which I have experienced as well... Need to wait until we get some clarification ;)
  4. Stoiko Ivanov

    Bei einer VM ein ZFS RaidZ hinzufügen

    Is that the maximum capacity? - does it perhaps also work with 7 T? With RAIDZ, the blocksize of the zvol also comes into play, which can lead to more padding (and thus less available space); see, among others, the following article...
  5. D

    [SOLVED] Unexpected pool usage of ZVOL created on RAIDZ3 Pool vs. Mirrored Pool

    This is due to padding when using a small volblocksize with raidz. See https://www.reddit.com/r/zfs/comments/b6dm4y/raidz2_used_size_double_logical_size_in_proxmox_53/?utm_source=amp&utm_medium=&utm_content=post_body for example. Try using a 16k volblocksize (or whatever the best value for your raidz...
  6. J

    zfs/zvol recordsize vs zvolblocksize

    4k NTFS zvol on RAID10 2x2 disks (4k): which volblocksize option is better, 8k or 16k?
  7. S

    VM ZFS dataset consumes all available space

    Thank you! Well, that's the case, I guess.

    root@ftp:~# zfs list -o name,used,lused,refer,ratio
    NAME              USED   LUSED  REFER  RATIO
    rpool             7.04T  5.30T  139K   1.00x
    rpool/ROOT        1.60G  2.59G  128K   1.87x
    rpool/ROOT/pve-1  1.60G...
  8. D

    VM ZFS dataset consumes all available space

    On RAIDZ, depending on the volblocksize, you can have more used disk space than logical space, due to padding. See https://www.mail-archive.com/freebsd-virtualization@freebsd.org/msg05685.html for example. Check with zfs list -o name,used,lused,refer,ratio to have a better picture of the space...
  9. guletz

    zfs/zvol recordsize vs zvolblocksize

    A {4k} block will need to be split across the data disks in your pool. If you have, say, a 3x HDD raidz, then 4k will be split into 2 stripes (2k and 2k), and each will land on one of 2 HDDs (2k/HDD). But your disks use 4k sectors, so each will write 4k instead of 2k (2k of data, and the remaining 2k as padding). So...
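    The split-and-pad arithmetic described in this thread can be sketched numerically. The following is a simplified model of RAIDZ allocation, not the actual ZFS code; the function and parameter names are my own. Data sectors get one parity sector per stripe row, and the total is rounded up to a multiple of parity + 1 so that freed space remains allocatable; those round-up sectors are the padding.

    ```python
    # Simplified sketch of RAIDZ space allocation (hypothetical helper,
    # not the real ZFS implementation). A volblocksize-sized block is cut
    # into 2**ashift sectors, parity sectors are added for every stripe
    # row, and the total is rounded up to a multiple of (parity + 1);
    # the round-up sectors are the "padding" discussed in these threads.

    def raidz_allocated(volblocksize, ashift=12, ndisks=3, parity=1):
        sector = 1 << ashift                  # e.g. ashift=12 -> 4K sectors
        data = -(-volblocksize // sector)     # ceil: data sectors needed
        width = ndisks - parity               # data sectors per stripe row
        rows = -(-data // width)              # stripe rows needed
        total = data + rows * parity          # add parity sectors per row
        mult = parity + 1                     # round up so freed chunks
        total = -(-total // mult) * mult      # stay allocatable -> padding
        return total * sector                 # bytes actually allocated


    # An 8K block on a 3-disk RAIDZ1 with 4K sectors allocates 16K:
    # 2 data + 1 parity = 3 sectors, padded up to 4.
    print(raidz_allocated(8 * 1024, ashift=12, ndisks=3, parity=1))
    ```

    Under these assumed numbers, an 8K volblocksize on a 3-disk RAIDZ1 allocates twice its logical size, which matches the roughly 2x usage reported in several of the threads above.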
  10. guletz

    Understanding disk usage of a VM with ZFS

    You must know that ZFS is a CoW (copy-on-write) system. For this reason, when you try to modify a file, there is no modification of any previous data that is already on the pool. Instead, you get new data blocks containing only the new data. This is very useful because when you make a new ZFS snapshot...
  11. G

    [SOLVED] ZFS replica ~2x larger than original

    That's very interesting... but does the problem only affect RAIDZ2? So if I use a 4-disk RAIDZ, is 8k fine? Or should I set the volblocksize to 16k as well?
  12. guletz

    [SOLVED] ZFS eating more poolspace than allocated

    The problem facing you is that you run a CoW filesystem (btrfs) on top of another CoW filesystem (ZFS). There is also the 8k zvol sector size. This can be amplified if you have many small files that are modified very often. And maybe the partitions inside this zvol are misaligned?! By the way, when you create a zvol...