Search results

  1. [SOLVED] ZFS pool does not mount at boot-time

    I hope the Proxmox team will correct me: /lib/systemd/system/pvestatd.service should have After=pve-cluster.service zfs-mount.service.
  2. [SOLVED] ZFS pool does not mount at boot-time

    It is an old, known issue. Check the startup order and make ZFS start before pvestatd.
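A minimal sketch of the ordering fix these two posts describe, assuming a standard Proxmox/Debian systemd layout; the drop-in file name is arbitrary, and a drop-in is used instead of editing the shipped unit so package upgrades do not overwrite the change:

```shell
# Create a systemd drop-in that makes pvestatd wait for ZFS mounts.
mkdir -p /etc/systemd/system/pvestatd.service.d
cat > /etc/systemd/system/pvestatd.service.d/zfs-order.conf <<'EOF'
[Unit]
After=pve-cluster.service zfs-mount.service
EOF
systemctl daemon-reload   # pick up the new drop-in
```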
  3. Recover destroyed ZFS filesystem

    As far as I know, ZFS has no tools for that.
  4. ZFS high io. Again...

    In case you did not know, ZFS has its own mechanism to control the HDD queue.
  5. btrfs as a guest file system

    Just don't forget: with sync=disabled, in a machine crash you can lose some data.
  6. ZFS high io. Again...

    I set all disks for ZFS, so my disk scheduler is noop.
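For reference, the scheduler setting mentioned above can be inspected and changed at runtime; `sda` is a placeholder device name, and on newer multi-queue kernels the no-op scheduler is called `none` rather than `noop`:

```shell
# Show the available schedulers; the active one is in brackets.
cat /sys/block/sda/queue/scheduler
# Hand queueing over to ZFS by disabling kernel-side reordering.
echo noop > /sys/block/sda/queue/scheduler
```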
  7. ZFS high io. Again...

    You are very confused. I have posted information in this forum about how ZFS works. Let's go through it again. Data read: ZFS looks in the ARC cache (for metadata and data) -> then in L2ARC, if present -> then at the pool. You can configure the ARC size and set the sizes for metadata and data within it...
  8. ZFS high io. Again...

    Why is noop bad?
  9. ZFS high io. Again...

    ZIL: a sync-write device, ONLY for sync writes. If you don't have an external ZIL, ZFS will use the same pool for the ZIL (double write). How to disable the use of the ZIL? # zfs set sync=disabled pool/name/or/sub Why does the ZFS write speed outrun the disks' speed? As I said before: the write cache. But if you try to...
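A sketch of the `sync` property commands the post refers to; `tank/data` is a placeholder dataset name, and note the post above about `sync=disabled` risking data loss on a crash:

```shell
# Stop honoring sync requests, so the ZIL is no longer used for this dataset.
zfs set sync=disabled tank/data
# Verify the property took effect.
zfs get sync tank/data
# Revert to the inherited default (sync=standard).
zfs inherit sync tank/data
```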
  10. ZFS high io. Again...

    Are you sure it is for both read and write cache? ARC Size: 99.97% 12.00 GiB, Target Size (Adaptive): 100.00% 12.00 GiB, Min Size (Hard Limit): 100.00% 12.00 GiB, Max Size (High Water): 1:1 12.00 GiB, ARC Size...
  11. ZFS high io. Again...

    The ZFS ARC cache is for reads only. How to manage the write cache I still don't know. You can try to limit the IO and speed for the VM.
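The ARC discussed above can be capped via an OpenZFS module parameter; a minimal sketch, assuming ZFS on Linux on Debian/Proxmox (the 4 GiB figure is just an example value, in bytes):

```shell
# Cap the ARC at 4 GiB so it leaves more RAM for VMs.
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
# Rebuild the initramfs so the limit applies at boot.
update-initramfs -u
```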
  12. ZFS high io. Again...

    ZFS pool writes run at the speed of the slowest disk, so in a mirror the ZFS write speed will be like a single disk (whether the mirror pool has 2 disks or 10 disks does not matter). In your situation (as in mine), the write-speed jumps happen because of the ZFS cache in RAM. It can take write data into RAM from the application...
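The RAM buffering of writes described above is governed by the OpenZFS dirty-data tunables; a quick way to inspect them on Linux (paths assume the in-kernel `zfs` module is loaded):

```shell
# Maximum amount of dirty (not yet flushed) write data held in RAM, in bytes.
cat /sys/module/zfs/parameters/zfs_dirty_data_max
# Maximum seconds between transaction-group flushes to the pool.
cat /sys/module/zfs/parameters/zfs_txg_timeout
```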
  13. ZFS high io. Again...

    I had a ZFS mirror pool of 2 disks and another raidz pool of 3 disks. Both had IO problems. After I upgraded to 2 x raidz2 of 6 disks each (12 total), the IO problems were gone.
  14. Proxmox and SACK attack - CVE-2019-11477, CVE-2019-11478, CVE-2019-11479

    Ok, sorry, found it here: https://github.com/Netflix/security-bulletins/blob/master/advisories/third-party/2019-001.md
  15. Proxmox and SACK attack - CVE-2019-11477, CVE-2019-11478, CVE-2019-11479

    Can someone help me and drop the link to the git repo with the SACK fix? I can find only the RDS fix...