Search results for query: ZFS fragment

  1. Dunuin

    Playing with ZFS

ZFS is a Copy-on-Write filesystem. It always needs some free space to operate optimally, similar to an SSD, which also gets slow when it fills up. At around 80% full it starts getting slower and fragments faster, which is bad, as a ZFS pool can't be defragmented. And in case you completely fill...
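The fill level and fragmentation the post refers to can be checked with `zpool list`; the pool name "tank" below is a placeholder:

```shell
# CAP is the percentage of raw space allocated; FRAG is the fragmentation
# of the remaining free space (not of the stored data), which is why it
# only ever goes away by freeing or rewriting data.
zpool list -o name,size,allocated,free,fragmentation,capacity tank
```

Keeping CAP below ~80% is the practical takeaway from the post above.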
  2. bbgeek17

    clones

Let's avoid coming up with new terminology. ZFS is a COW system with all the benefits that it provides. There is a copy involved by definition. In the scope of clone/snapshot, the reference page below may be more illustrative. As you read it, keep in mind the default recordsize is 128KB and its...
  3. G

    Turning on ZFS compression on pool

Friend, you helped me a lot. I was seeing here in one that only left 50%, as you said, and I did not understand it. On the other I did a raidz0, and even after I did the replication it congests the network badly. I appreciate the help and congratulations on your knowledge! I will try to apply what you...
  4. Dunuin

    Turning on ZFS compression on pool

Raidz1 is still not a great option, because either: 1.) you use the default 8K volblocksize, where you lose 50% of your raw capacity even if you don't see it. It will show you everywhere that you got 75% of your raw capacity as usable space, but that is wrong, as everything you write to a...
  5. Dunuin

    ZFS pool size full with incorrect actual usage.

Usually you can delete snapshots with zfs destroy YourPool/YourDataset/or/Zvol@Snapshotname, but in case your pool is completely full this might fail. Keep in mind that ZFS is a Copy-on-Write filesystem, so in order to make any change (for example, edit or delete anything) you need to write new data...
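The workflow described above looks roughly like this; pool and dataset names are placeholders:

```shell
# Find the snapshots that hold the most space, largest last:
zfs list -t snapshot -o name,used -s used

# Dry run first (-n) with verbose output (-v) to see what would go away:
zfs destroy -nv YourPool/YourDataset@Snapshotname

# Then destroy for real:
zfs destroy YourPool/YourDataset@Snapshotname
```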
  6. Dunuin

    Confused ...

When using raidz there is padding overhead if your zvols use too small a volblocksize. Using the default volblocksize of 8K you will lose 50% of your raw capacity, so only 4 of 8 drives' capacity will be usable. 12TB you will lose to parity, 36TB you will lose to padding overhead, and only...
  7. Dunuin

    [SOLVED] Garbage Collection taking weeks to complete

A ZFS pool uses Copy-on-Write and therefore should always have 20% free space, or otherwise it will become slow and fragment faster (and there is no way to defragment it except moving everything off that pool and copying it back again). Yep, that should work. Just make sure to disable the...
  8. Dunuin

    Newbie question on ZFS - using multiple devices as a single logical unit

The L2ARC is just a cache and can be lost without a problem. But ZFS knows more layers than shown in your pyramid. There are some vdevs like the "special" for storing metadata (and optionally small data blocks), the "dedup" for storing deduplication tables, "spare" for hot spares...and then...
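Adding a "special" vdev as described above could look like this; the pool name and device paths are placeholders. Unlike the L2ARC, a special vdev is *not* just a cache: losing it loses the pool, and it cannot be removed again from a raidz pool, so it should always be mirrored:

```shell
# Add a mirrored special vdev for metadata:
zpool add tank special mirror /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2

# Optionally also store small data blocks (here: up to 16K) on it:
zfs set special_small_blocks=16K tank
```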
  9. Dunuin

    [SOLVED] PBS disk full

Btw...once a ZFS pool is full, it will switch into read-only mode. I guess that's why you can't delete stuff. ZFS is a copy-on-write filesystem, which means you need to make sure a pool never gets to 100%. After 80% it will become slow and start to fragment faster, after...
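One common safeguard against the "full pool can't delete anything" trap (my addition, not from the post itself) is to park a reservation in an otherwise empty dataset; names and sizes are examples:

```shell
# Reserve 10G in a dataset nobody writes to, so the rest of the pool
# can never consume the last bit of space:
zfs create -o refreservation=10G -o canmount=off tank/reserved

# If the pool fills up anyway, releasing the reservation gives the
# Copy-on-Write machinery room to delete things again:
zfs set refreservation=none tank/reserved
```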
  10. Dunuin

    Reduce ram requirement/usage due to having ZFS

Also read this: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz I guess you didn't increase your volblocksize, so you are losing 50% of your raw capacity: 25% of the raw capacity lost to parity and 25% lost to padding...
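The 50% figure that keeps coming up in these posts can be reproduced with a small calculation along the lines of the linked Delphix article. This is a sketch of the allocation rule (ashift=12, i.e. 4K sectors, assumed), not authoritative ZFS code:

```python
import math

def raidz_alloc_sectors(block_bytes, sector_bytes, ndisks, parity):
    """Raw sectors consumed to store one block on a raidz vdev."""
    data = math.ceil(block_bytes / sector_bytes)
    # Each stripe row of up to (ndisks - parity) data sectors
    # carries `parity` parity sectors:
    rows = math.ceil(data / (ndisks - parity))
    total = data + rows * parity
    # ZFS pads every allocation to a multiple of (parity + 1) sectors:
    total += (-total) % (parity + 1)
    return total

# 8-wide raidz1, 4K sectors, default 8K volblocksize:
# 2 data sectors + 1 parity + 1 padding = 4 sectors,
# i.e. 16K of raw space per 8K of data -> only 50% usable.
print(raidz_alloc_sectors(8192, 4096, 8, 1))

# A larger 64K volblocksize shrinks the overhead considerably:
# 16 data + 3 parity + 1 padding = 20 sectors -> 80% usable.
print(raidz_alloc_sectors(65536, 4096, 8, 1))
```

This also matches the 25% parity + 25% padding split quoted above: with an 8K block, one of the four allocated sectors is parity and one is pure padding.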
  11. Dunuin

    Ok, I have some VM's created

As soon as you take your storage seriously and do it more professionally, you need a lot of drives, because you can only use a fraction of the capacity. Let's say for example I got 8x 4TB HDDs and 4x 250GB SSDs. In theory that is 33TB of storage. But actually usable for data is only 6.5TB. First you...
  12. Dunuin

    Slow VMs Performance

HDDs in general are bad as a VM storage, because the more guests you run, the more IOPS hit your pool. And HDDs can't handle more than maybe 100-200 IOPS, where SSDs can handle 10 to 1000 times that. I would also guess that your pool is the bottleneck. But you can check that if you look at...
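Whether the pool is the bottleneck can be checked with `zpool iostat`; pool name is a placeholder:

```shell
# Per-vdev throughput and operations, refreshed every second:
zpool iostat -v tank 1

# On recent OpenZFS, -l adds average latency columns; HDDs pinned at
# their IOPS limit show long wait times here:
zpool iostat -vl tank 1
```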
  13. itNGO

Fragmentation

Hi, if your ZFS is on SSDs, don't worry about it any further.... with HDDs a dedicated SLOG SSD helps, or moving the datasets once.... ZFS fragmentation also has little in common with the classic fragmentation of an NTFS, FAT, or ext partition... otherwise you can...
  14. aaron

    ZFS Defrag

Since ZFS is copy on write, its use of disk space will fragment over time. With SSDs I really would not worry about the fragmentation, as the access times will always be in the same range, unlike HDDs, which need to move their read/write heads to get to the data.
  15. V

    Question regarding LVM/ZFS

Well, good to know. This all started with my hardware raid card dying, and I've rebuilt this server with whatever I had lying around. So I've already been thinking about expanding storage. I think I'll just take it easy with this file server, make another Proxmox node with more storage and...
  16. Dunuin

    Question regarding LVM/ZFS

You overprovisioned it, so it should be fine for now, but nothing will prevent the pool from running full, at which point it will stop working and switch to read-only. So you should monitor your pool with zfs list rpool and make sure your "USED" won't get above 4.8TB. At around 80% your pool starts...
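The monitoring suggested above can be scripted; "rpool" and the 80% threshold come from the post, the parsing is my sketch:

```shell
# -H drops headers, -p prints exact byte counts, so the output is
# safe to do arithmetic on:
used=$(zfs list -Hp -o used rpool)
avail=$(zfs list -Hp -o avail rpool)
pct=$(( used * 100 / (used + avail) ))
echo "rpool is ${pct}% used"
# Exit non-zero once the 80% mark is crossed (e.g. for a cron alert):
[ "$pct" -lt 80 ]
```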
  17. Dunuin

    ZFS no Storage

Add more drives to your pool to increase the total capacity, or try to free up some space. I would look at what data on that vm-1500-disk-1 isn't super important and can be deleted. Then delete it and make sure that discard is working, so that ZFS receives TRIM commands and is able to free up that...
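Making sure discard actually reaches the pool involves both the guest and the host (besides enabling the Discard option on the VM's virtual disk); pool name is a placeholder:

```shell
# Inside the guest: verify the filesystem issues discards at all:
fstrim -av

# On the host: trim freed space automatically from now on...
zpool set autotrim=on tank

# ...or run a one-off manual trim:
zpool trim tank
```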
  18. Dunuin

    Considering a server swap

Like I said, ZFS will work with less RAM, just don't expect it to be super responsive if you don't give it enough RAM. My backup TrueNAS server for example only got 16GB RAM for its 4x 8TB HDDs, and at least for big sequential writes this works fine at the full 118MB/s (Gbit). But HDDs are really...
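On Proxmox the knob for this is the ARC size cap; the 4 GiB value below is only an example:

```shell
# Persist the limit (read at boot from /etc/modprobe.d/zfs.conf):
echo "options zfs zfs_arc_max=$((4 * 1024 * 1024 * 1024))" >> /etc/modprobe.d/zfs.conf
update-initramfs -u

# Or apply it at runtime without a reboot:
echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
```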
  19. M

    Remote (re)sync very slow

If the receiving back-end is just pure storage, then I can recommend doing this. 1) Get 2x decent 50G SSDs, or whatever SSDs you have lying around - anything will do, to be honest - assuming /dev/sd[a-b]. 2) 2 partitions of type ZFS Solaris: 1 for the ZIL SLOG (5%, number #1, sda1), 1 for the L2ARC (95% of total...
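Step 2 could be sketched like this; device names, sizes, and the pool name "tank" are placeholders for whatever hardware you actually have:

```shell
# Small first partition for the SLOG, rest for L2ARC, on both SSDs
# (BF01 is the "Solaris/ZFS" GPT partition type):
sgdisk -n1:0:+4G -t1:BF01 /dev/sda
sgdisk -n2:0:0   -t2:BF01 /dev/sda
sgdisk -n1:0:+4G -t1:BF01 /dev/sdb
sgdisk -n2:0:0   -t2:BF01 /dev/sdb

# Mirror the SLOG, since it holds in-flight synchronous writes;
# the L2ARC is just a cache, so its partitions can simply be striped:
zpool add tank log mirror /dev/sda1 /dev/sdb1
zpool add tank cache /dev/sda2 /dev/sdb2
```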
  20. I

    Proxmox VE 6.3.3 - BACKUP error: Structure needs cleaning (os error 117)

Hello everyone, when I run a Proxmox backup to a Proxmox Backup Server, the backup fails with: ERROR: backup write data failed: command error: write_data upload error: pipelined request failed: Structure needs cleaning (os error 117) INFO: aborting backup job ERROR: Backup of VM 100 failed -...