Search results for query: raidz1 padding

  1. Dunuin

    Questions about ZFS/Ceph...How should I move forward?

    Just don't use any raidz1/2/3 with too low a volblocksize and this shouldn't be a problem. If you can't increase the volblocksize because of your workload, don't use raidz1/2/3 at all and use a striped mirror instead. ZFS is local storage that is synced via replication every minute or so. Ceph is a...
  2. Dunuin

    ZFS + PVE Disk usage makes no sense

    Please search the forum for "padding overhead". With the default ashift=12 + volblocksize=8K and 3x 12TB disks in raidz1 you only get 14.4TB of usable storage for VM disks: 3x 12TB = 36TB raw storage, minus 12TB parity data (-33%) = 24TB usable storage. Everything written to a zvol will be 133% in...
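
The arithmetic in that snippet can be reproduced with the raidz allocation rule from the Delphix stripe-width article linked in these threads: each block allocates its data sectors plus one parity sector per stripe, and the total is padded up to a multiple of (parity + 1). A minimal Python sketch (the function name is illustrative, not a ZFS API):

```python
import math

def raidz_alloc_sectors(data_sectors: int, ndisks: int, parity: int) -> int:
    """Sectors a raidz vdev allocates for one block.

    One parity sector is added per stripe of up to (ndisks - parity)
    data sectors, then the total is padded up to a multiple of
    (parity + 1) so freed holes can always be reused.
    """
    stripes = math.ceil(data_sectors / (ndisks - parity))
    total = data_sectors + stripes * parity
    unit = parity + 1
    return math.ceil(total / unit) * unit

# 3-disk raidz1, ashift=12 (4K sectors), volblocksize=8K -> 2 data sectors:
alloc = raidz_alloc_sectors(2, ndisks=3, parity=1)  # 4 sectors = 16K raw per 8K block
efficiency = 2 / alloc                              # 0.5 instead of the ideal 2/3
usable = 3 * 12e12 * efficiency * 0.8               # with 20% kept free: 14.4 TB
```

Storing 8K of data in 16K of raw space is 133% of the nominal 12K (data + parity), which is where the snippet's "133%" figure comes from.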
  3. T

    Where did my disk space go?

    "1.) check that you don't use raidz1, raidz2 or raidz3 with too low a volblocksize. Using raidz1/2/3 with the default 8k volblocksize will always waste a lot of space when using zvols (or using VMs) because of padding overhead. Use the forum's search function for more information, I explained that...
  4. Dunuin

    Windows VM : poor disk latency

    Raidz1 isn't great for latency/IOPS. Using raidz1 with 3 disks also means that you are wasting a lot of capacity due to padding overhead in case you didn't increase the volblocksize from the default 8K to 16K. And keep in mind that a ZFS pool should always have at least 20% of its capacity free...
  5. E

    Disk (SSD )Performance Question

    Sounds promising. It was proposed that I use a 4-way mirror with NVMe drives, using a separate vdev for the databases! Does a separate SLOG make sense in connection with a 4-way mirror?
  6. Dunuin

    Disk (SSD )Performance Question

    ZFS should be fine as long as you get proper hardware and don't screw up the storage setup. For example, don't think of using a raidz1/2/3 when running DBs as the primary workload because this would limit IOPS performance to the performance of a single drive and you would need to increase the...
  7. Dunuin

    Choosing ZFS volblocksize for a container's storage: Same logic as for VMs?

    You can change the recordsize to optimize it for a specific workload. For example a 16k recordsize should be great for a dataset storing a mysql DB that only writes 16k blocks. Especially when using deduplication, where deduplication with 16k records should be more efficient than way bigger...
  8. Dunuin

    ARC size suggestions

    Manually. If you want automated snapshots + automated pruning (so you can't forget to delete them after a few days) have a look at this script: https://github.com/Corsinvest/cv4pve-autosnap For long-term backups you should use a PBS instead of snapshots...
  9. Dunuin

    ARC size suggestions

    Yes. With ZFS you have to keep some things in mind: 1.) you should always keep 20% of space free 2.) you shouldn't keep snapshots for too long, as these will grow over time and prevent ZFS from freeing up deleted/edited data 3.) you need a complete TRIM/discard chain from the guest OS, over the virtual...
  10. Dunuin

    Upgrading/converting a Proxmox server

    With 4 SSDs with 4K sectors in a raidz1 you would have to run the pool with a minimum block size (volblocksize) of 32K, otherwise you waste too much capacity due to padding overhead. But that in turn causes serious SSD wear and poor performance as soon as you somehow...
  11. Dunuin

    Ceph vs ZFS - Which is "best"?

    ZFS raidz1 won't be a good choice for running databases, as you need to increase the volblocksize to at least 16K (at least as long as you are using ashift=12) if you don't want to waste a lot of capacity because of padding overhead. So a 3-disk raidz1 should be terrible for postgres with its 8K writes...
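
The trade-off in that snippet can be made concrete with the same raidz allocation rule (a sketch, not ZFS code; names are illustrative): on a 3-disk raidz1 with ashift=12, only a volblocksize of 16K or more avoids padding, but then every 8K postgres write has to rewrite a larger block.

```python
import math

SECTOR = 4096  # ashift=12

def raidz_efficiency(volblocksize: int, ndisks: int, parity: int) -> float:
    """Fraction of raw space holding data, per the raidz allocation rule
    (data sectors + one parity sector per stripe, padded to a multiple
    of parity + 1)."""
    data = volblocksize // SECTOR
    stripes = math.ceil(data / (ndisks - parity))
    alloc = math.ceil((data + stripes * parity) / (parity + 1)) * (parity + 1)
    return data / alloc

for vbs in (8, 16, 32):
    print(f"{vbs}K on 3-disk raidz1: {raidz_efficiency(vbs * 1024, 3, 1):.0%}")
# 8K -> 50%, 16K and 32K -> 67% (the vdev's ideal 2/3)
```

So the padding disappears at 16K, which is exactly why an 8K-writing database forces the choice between wasted capacity and write amplification.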
  12. LnxBil

    2tb of vms 4.4tb of storage used.

    Just to make this precise: it's not an exclusive either/or. It could also be all three of them, and that is my guess too.
  13. Dunuin

    2tb of vms 4.4tb of storage used.

    It's usually either snapshots, padding overhead because of using raidz1/2/3 with too low a volblocksize, or you didn't correctly set up discard/TRIM. The output of zfs list -o space would also be useful to see if snapshots or missing discard is the problem. As well as zpool get ashift and zfs get...
  14. Dunuin

    ZFS for VMs - where did my hdd space go?

    Like apoc already said, it's padding overhead. With ashift=12, a default volblocksize of 8K and a four-disk raidz1 you only get 40% of the raw capacity as usable storage for VM disks. So with 16TB of raw storage only 6.4 TB or 5.82 TiB. Raw storage is 16TB. You lose 25% for parity so ZFS will...
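
The 40% figure can be reproduced with the raidz allocation rule (an illustrative sketch; 4x 4TB disks are assumed, consistent with the snippet's "16TB of raw storage"):

```python
import math

# 4-disk raidz1, ashift=12: an 8K zvol block is 2 data sectors + 1 parity
# sector, padded up to a multiple of (parity + 1) = 2 -> 4 sectors = 16K raw.
data, ndisks, parity = 2, 4, 1
stripes = math.ceil(data / (ndisks - parity))
alloc = math.ceil((data + stripes * parity) / (parity + 1)) * (parity + 1)
efficiency = data / alloc          # 0.5 of raw, versus the ideal 0.75

raw = 16e12                        # 4x 4TB disks = 16TB raw
usable = raw * efficiency * 0.8    # keep 20% free -> 6.4 TB, i.e. 40% of raw
print(usable, usable / 2**40)      # 6.4e12 bytes is about 5.82 TiB
```

The TB/TiB pair in the snippet matches: 6.4e12 bytes divided by 2^40 is roughly 5.82 TiB.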
  15. Dunuin

    Need to recover a file system on a QEMU harddisk stored on a ZFS pool

    The problem is padding overhead when using zvols on raidz. See here to learn how to calculate the usable size and how to minimize capacity loss: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz Basically with an 8 disk raidz1 with...
  16. Dunuin

    [ZFS] Pool not showing in PVE

    With a raidz1 your IOPS performance will only be as fast as the single slowest disk. Your A400s are horribly slow as soon as the cache gets full. So your whole pool of 3 disks wouldn't be faster than a single A400 when it comes to IOPS performance. For throughput, the performance will...
  17. R

    [ZFS] Pool not showing in PVE

    Thank you for your answer. So, to clarify first: this is just a homelab, mainly to learn and have fun, no enterprise services at all. Yes, I am aware of it and will only store data that I can lose on these (and I have an old Syno NAS to back things up). That is why I have the main pool in...
  18. Dunuin

    [ZFS] Pool not showing in PVE

    I see a lot of problems in general here: 1.) Your pools "fast" and "slow" are single disks, so there is no bit rot protection. When doing scrubs ZFS can tell you if data got corrupted or not, but it can't do anything to repair it, as you don't have parity data. 2.) You mix SATA and NVMe in "rpool"...
  19. Dunuin

    SSD NVME Best Practices

    TrueNAS is also just using ZFS. It's the same as running ZFS directly on the PVE host. You can also use compression with ZFS on PVE, but that won't help much. You still have the problem of either padding overhead or very big blocksizes. Any kind of raidz1/2/3 or draid just isn't great if you want...
  20. G

    Turning on ZFS compression on pool

    Friend, you helped me a lot. I was seeing here that one pool only had 50% left, as you said, and I did not understand why. On the other I did a raidz0; even after I did the replication, this congests the network quite a bit. I appreciate the help and congratulations on the knowledge! I will try to apply what you...