Search results

  1. N

    quick question about ZFS disks in a pool?

    OMG: sda2 - 512M, sdb2 - 512M, sdc - 931.5G, sdd - 931.5G. Is the pool size really like this?
  2. N

    Undelete data from ZFS pool

    Bad news: ZFS doesn't have undelete.
  3. N

    Speed up zfs list -t all

    The speed depends on your pool configuration. # zfs list -t all | wc -l 26496 # time zfs list -t all real 0m13.233s user 0m1.281s sys 0m11.137s The ZFS pool is 2 x raidz2 of 6 HDDs (12 total); before that it was a mirror of 2 disks and a raidz of 3 disks. Both were slow.
  4. N

    CVE-2019-11815 Kernel bug. Is it fixed?

    I don't know why, but in my opinion this bug https://www.cvedetails.com/cve/CVE-2019-11815/ is serious. The fix is small and already merged upstream https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=cb66ddd156203daefb8d71158036b27b0e2caf63 but the Linux distribution community...
  5. N

    drop_caches doesn't finish on pveperf

    There are discussions about ZFS and drop_caches; search Google for them.
  6. N

    Proxmox ZFS & LUKS High I/O Wait

    LUKS performance depends on the CPU. ZFS performance depends on the slowest HDD/SSD in the pool. For RaidZ2 you need 6 disks, otherwise you will get allocation overhead. https://forum.proxmox.com/threads/slow-io-and-high-io-waits.37422/#post-184974 If you see no IO penalty inside the VM, then don't worry too much.
  7. N

    drop_caches doesn't finish on pveperf

    It happened to me too. Only a reboot (maybe a hard reboot) kills the process.
  8. N

    ZFS Device Fault - Advice on moving device to new Port

    ZFS doesn't care about port positions. An error with the pool cache file 'may' happen, but I don't think it will. In that case, import the pool with #zpool import -d /dev/disk/by-id/ pool_name and it's done.
  9. N

    ZFS - Restores

    Immediately. If you set it on Local-ZFS, it will affect Local-ZFS/vm-100-disk-1 and so on. And you can set it individually per sub-filesystem.
  10. N

    ZFS - Restores

    #zfs set sync=disabled pool/name
  11. N

    ZFS - Restores

    I suggest you set sync=disabled to avoid double writes. A single disk is a single disk; ZFS doesn't have per-process IO priority. What you have to know: 1. The data goes like this: Program -> ZFS write cache (not ZIL) -> disk. 2. ZFS flushes data from the write cache to disk every ~5 sec. 3. Then the write cache...
  12. N

    ZFS - Restores

    #zfs get all rpool
  13. N

    ZFS - Restores

    What is your ZFS pool configuration? Under heavy load (depending on how slow the setup is) the pool can become very unresponsive. For SSH, use another pool for the server OS to avoid IO waits.
  14. N

    ZFS worth it? Tuning tips?

    I think pool and l2arc ashift mismatch can lead to poor performance.
  15. N

    qm agent timeout

    Executing the qm agent fstrim command, I get a timeout very quickly. I think the agent request should wait longer for a response. # qm agent 102 fstrim VM 102 qmp command 'guest-fstrim' failed - got timeout # qm agent 102 fstrim VM 102 qmp command 'guest-fstrim' failed - got timeout # qm agent 102 fstrim VM 102...
  16. N

    Steps for moving VMs from 3.x to 5.x

    1. Manually copy the VM disks to the new machine. 2. Copy the config files to the new machine. 3. Edit the config if anything needs changing. 4. Boot the VM.
  17. N

    my NVMEs suck

    For ZFS, don't forget that SYNC writes are written to the ZFS LOG device. If you don't have an external ZFS LOG device, the pool itself acts as the LOG device too, which means double writes. If SYNC is important and you want good write performance, add a single good SATA/NVMe SSD as the ZFS pool LOG device.
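
A few hedged command sketches for the results above; all pool names, dataset paths, device IDs and VM IDs below are placeholders, not taken from the threads.

For the 6-disk RaidZ2 layout mentioned in result 6, a minimal creation sketch (ashift=12 assumes 4K-sector drives):

    # zpool create -o ashift=12 tank raidz2 \
        /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 \
        /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6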
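
Result 8's re-import after moving a disk to a new port, assuming the pool is exported (or the host rebooted) first:

    # zpool export pool_name
    # zpool import -d /dev/disk/by-id/ pool_name
    # zpool status pool_name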
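
The sync=disabled tuning from results 9-11, assuming the default Proxmox layout where Local-ZFS maps to rpool/data; child datasets inherit the setting, and a single sub-filesystem can be overridden individually:

    # zfs set sync=disabled rpool/data
    # zfs get sync rpool/data/vm-100-disk-1              (inherited from rpool/data)
    # zfs set sync=standard rpool/data/vm-100-disk-1     (per-dataset override)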
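
For the pool/L2ARC ashift mismatch suspected in result 14, a rough check; the pool name and device node are placeholders, and how the cache device's ashift is reported can vary by ZFS version:

    # zpool get ashift tank
    # zdb -C tank | grep ashift
    # cat /sys/block/nvme0n1/queue/physical_block_size   (sector size of the intended cache device)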
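
Result 15's guest-fstrim timeout can be narrowed down by checking the agent and running the trim inside the guest by hand (VM ID 102 as in the excerpt, Linux guest assumed):

    # qm agent 102 ping
    (inside the guest)
    # fstrim -av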
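
The manual 3.x-to-5.x move from result 16, sketched for a qcow2 disk on directory storage; VM ID 100, the hostname and the paths are illustrative only:

    # scp /var/lib/vz/images/100/vm-100-disk-1.qcow2 newhost:/var/lib/vz/images/100/
    # scp /etc/pve/qemu-server/100.conf newhost:/etc/pve/qemu-server/
    (on the new host, edit /etc/pve/qemu-server/100.conf if anything changed, then)
    # qm start 100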
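
Adding the separate LOG device recommended in result 17, with a placeholder pool and device:

    # zpool add tank log /dev/disk/by-id/nvme-GOOD-SSD
    # zpool status tank      (the SSD now appears under its own "logs" section)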