Search results for query: raidz padding

  1. G

    ZFS-2 Showing total usable storage at half of what it should be

    Thank you, I reworked my system for a raidz and not a raidz2 ZFS pool. Two final questions: when giving a virtual machine a drive of, let's say, 750GB, the summary page says I have a total of 1.2 TB used even though the only storage being used is that 750GB drive. Last question: can I resize the...
  2. Dunuin

    Windows VM shows MUCH less space used than ZFS pool its on

    No, you read the table wrong. Rows are volblocksize in sectors. If you have a disk with a 512B LBA and have chosen ashift=9 for the pool, the 8K volblocksize is the row with "16" (8K volblocksize / 512B sector size = 16 sectors). But your pool was probably created with an ashift=12, so your sectors are...
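The sector arithmetic in this answer is easy to check. The sketch below is my own illustration (the helper name is not from the post); it converts a volblocksize into the number of logical sectors it spans, which is the row index in the table the post refers to:

```python
# Convert a volblocksize to the number of logical sectors it spans.
# (Illustrative helper, not from the post.)
def volblock_sectors(volblocksize: int, ashift: int) -> int:
    sector_size = 2 ** ashift  # ashift=9 -> 512B sectors, ashift=12 -> 4K
    return volblocksize // sector_size

# 8K volblocksize on a 512B-sector pool (ashift=9): the row with "16"
print(volblock_sectors(8 * 1024, 9))   # 16
# The same 8K volblocksize with ashift=12 spans only 2 sectors
print(volblock_sectors(8 * 1024, 12))  # 2
```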
  3. Dunuin

    Mounting Large ZFS disk but can't use it 100%

    You need to increase the volblocksize if using raidz or you will waste a lot of space due to bad padding. An 8x disk raidz2 with the default volblocksize of 8K would lose 2/3 of the raw capacity to parity+padding. If you increase the volblocksize to 16K you should only lose 1/3 of the raw...
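The 2/3 and 1/3 figures can be reproduced with a simplified model of raidz allocation: each block's data sectors need parity sectors per stripe, and the total is then padded up to a multiple of (parity + 1) sectors. This is my own sketch, assuming ashift=12 (4K sectors) and ignoring compression:

```python
import math

# Simplified raidz space model (my sketch; assumes ashift=12, no compression):
# - data sectors   D = volblocksize / sector_size
# - parity sectors = ceil(D / (ndisks - nparity)) * nparity
# - the sum is padded up to a multiple of (nparity + 1) sectors
def raidz_overhead(ndisks: int, nparity: int, volblocksize: int, ashift: int = 12) -> float:
    sector = 2 ** ashift
    data = math.ceil(volblocksize / sector)
    parity = math.ceil(data / (ndisks - nparity)) * nparity
    total = data + parity
    padded = math.ceil(total / (nparity + 1)) * (nparity + 1)
    return 1 - data / padded  # fraction of raw space lost to parity+padding

# 8-disk raidz2, default 8K volblocksize: 2/3 of raw capacity lost
print(raidz_overhead(8, 2, 8 * 1024))   # 0.666...
# Raising the volblocksize to 16K drops the loss to 1/3
print(raidz_overhead(8, 2, 16 * 1024))  # 0.333...
```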
  4. B

    Advice for raid ZFS data

    Several things here. #1: raidz-1 isn't optimal at all. It's like a RAID5, and with such big disks it is not as fault tolerant as you might think; there's a chance of losing a chunk. Also, 4 disks are suboptimal either way. The problem is not total capacity but high capacity, over 1.5TB a drive. In any case you...
  5. Dunuin

    [SOLVED] Proxmox ZFS storage available not matching

    Ok, but same problem. No raidz is great as VM storage. And with 6 drives and raidz1 you are still losing 3 and not 1 drive in capacity because of bad padding if you use an 8K volblocksize, because everything stored will be 66% bigger. Look here. So for example 32K would be a better...
  6. Dunuin

    Diagnose ZFS Slow Random 4k Read/Write Speed, nvme SSD, RAID-Z2

    First, you are using consumer SSDs. To quote the staff's ZFS NVMe benchmark paper: consumer SSDs are mostly really crappy at sync or continuous random 4K writes. The second point is that you use raidz2. Raidz has a lot of overhead because of all the additional parity calculations, and write speed...
  7. I

    zfs/zvol recordsize vs zvolblocksize

    Since you seem to have a much better understanding of that subject, and I'm getting lost trying to figure it out by myself reading random posts here and there, which in most cases makes things worse, here is my case, in case you could help: the server will have 4 (6Gbps SAS 10K rpm) 1.2TB disks in zfs...
  8. Dunuin

    Proxmox writing my SSDs with about 40GB/h for no reason

    It is normal that ZFS writes a lot to SSDs. That's why you should always buy enterprise-grade SSDs that can handle some petabytes of writes. Some things that will cause write amplification: - sync writes (especially if your SSDs don't have power-loss protection and therefore can't use the...
  9. Dunuin

    Is the hardware sufficient for the project and is the approach correct?

    You can happily overprovision CPU cores. So if you only have 2 physical cores, you can still give 4 VMs 1 core each. That is relatively unproblematic as long as the VMs/LXCs only rarely max out the cores. Actually, though, you should always leave one core free for Proxmox itself, since...
  10. Y

    [SOLVED] ZFS replica ~2x larger than original

    @guletz Can you tell why you think the blocksize is divided by the number of data disks? Because data does not have to be spread across every disk in the vdev? The only requirement is that the write size should be a multiple of (number of parity + 1)...
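The rule quoted here, that each raidz allocation is rounded up to a multiple of (number of parity + 1) sectors, is where the padding sectors come from. A minimal sketch of just that rounding step (helper name is my own):

```python
import math

# Each raidz allocation (data + parity sectors) is rounded up to a
# multiple of (nparity + 1) sectors; the difference is padding.
# (Illustrative helper, not from the post.)
def allocated_sectors(data_plus_parity: int, nparity: int) -> int:
    step = nparity + 1
    return math.ceil(data_plus_parity / step) * step

# raidz1 (nparity=1): allocations are padded to even sector counts,
# so 3 sectors of data+parity occupy 4 on disk
print(allocated_sectors(3, 1))  # 4
# raidz2 (nparity=2): 2 data + 2 parity sectors become 6 on disk
print(allocated_sectors(4, 2))  # 6
```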
  11. Dunuin

    ZFS better with PVE or a FreeNAS VM?

    Raidz1 is really not ideal if you want to run VMs on it. Especially if you use HDDs and not SSDs, where the IOPS quickly hit their limits. Raidz1 does not give you particularly increased write performance or IOPS. With a striped mirror (à la RAID10) you would have the...
  12. Dunuin

    Proxmox VE 6.3 and ZFS - VMs keep dying

    If you use raidz1, in the ideal case only 25% of the capacity goes to parity. If you have 4x 2.73TiB, theoretically 8.19TB of 10.92TB would be usable. But if your HDDs use a 4K logical block size (pool created with ashift=12) and you have not raised the volblocksize yourself...
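The ideal-case numbers in this answer follow directly from the disk count: with 4 disks and raidz1, one disk's worth of capacity goes to parity. A quick check of the quoted figures:

```python
# Ideal-case raidz1 capacity from the quoted numbers: 4 disks, 1 parity,
# so 25% of raw capacity goes to parity.
disks, parity = 4, 1
per_disk_tib = 2.73
raw = disks * per_disk_tib               # 10.92 raw
usable = raw * (disks - parity) / disks  # 8.19 usable in the ideal case
print(round(raw, 2), round(usable, 2))   # 10.92 8.19
```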
  13. Dunuin

    Proxmox VE 6.3 and ZFS - VMs keep dying

    And then it is best to limit the ARC. As a rule of thumb, the ARC needs 4GB + 1GB of RAM per 1TB of raw drive capacity if you do not want to use deduplication. Accordingly, you would actually need 16GB of RAM for the ARC alone, but I would guess that 8GB would suffice as well. You would have to...
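The rule of thumb in this answer is simple to express. The sketch below is my own (the helper name is not from the post); under this formula, the quoted 16GB figure corresponds to 12TB of raw capacity:

```python
# Rule of thumb from the post: the ARC needs 4 GB plus 1 GB of RAM per
# 1 TB of raw drive capacity, without deduplication.
# (Illustrative helper, not from the post.)
def arc_ram_gb(raw_capacity_tb: float) -> float:
    return 4 + 1 * raw_capacity_tb

# 12 TB of raw capacity -> 16 GB of RAM for the ARC alone
print(arc_ram_gb(12))  # 16
```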
  14. A

    ZFS USED twice as LUSED

    I am using my magic crystal ball and can see you have a raidz-based ZFS pool. I can also see in there that the default ZVOL blocksize of 8K is used. If that's wrong, my crystal ball needs re-calibration. If it is true, you need to create the zvol with a larger block size (64K, 128K) to avoid...
  15. Dunuin

    ZFS zvol on HDD locks up VM

    Most of the time it's the opposite: people lose capacity if they use raidz and don't increase the volblocksize first. Sure, if you don't need a higher volblocksize because of the padding overhead a raidz would produce, it is better to keep the volblocksize as small as possible. Otherwise...
  16. Z

    Looking for setting a new PVE instance correctly

    Wow, very detailed info here. It took some time to digest it and search for more info around it to try to understand it as best I can. If in the future I move on to 4K-LBA HDDs, maybe I'll choose to create a new zpool with proper ashift settings and move the data over there. But for now I think I'm...
  17. Dunuin

    Looking for setting a new PVE instance correctly

    It will be much higher internally. To write something, SSDs need to erase a complete row of cells and write it again to store a minimal change. So SSDs can read single cells but only modify big rows of cells. So it could be that a row of some hundreds of KB or even some MB needs to be...
  18. Dunuin

    ZFS Storage Information

    No, volblocksize only applies to zvols, i.e. the virtual disks of the VMs. LXCs use datasets, which don't have a volblocksize but a recordsize. That seems to default to 128K, though, which is high enough that a raidz doesn't waste any space through padding.
  19. D

    ZFS Storage Information

    Thanks for the extremely detailed answer. In principle the system is meant to host servers for private purposes (home automation, Nextcloud, DHCP, DNS, etc.). So, with the exception of Nextcloud, not for large amounts of data. I currently assume that every 2-3 years I...
  20. E

    Questions regarding blocksize and write amplification

    Hello, I have been running two different setups of ZFS pools with Proxmox and VMs on them: Raid-z2 (6 disks, ashift=12, volblocksize=64k) <-- virtual disk (reports blocksize as 512 in VM) <-- NTFS/Ext4 (blocksize 4k); Mirror (6 disks, ashift=12, volblocksize=8k (default)) <--...