Search results

  1.

    VMs won't start most of the time

    OK, but it may be possible to set it permanently for all runs.
  2.

    VMs won't start most of the time

    Make the timeout value for commands an option controllable from the console/GUI.
  3.

    VMs won't start most of the time

    Don't know why Proxmox gets a timeout, but you can add the daemonize option and KVM will run in the background.
  4.

    VMs won't start most of the time

    Run this command directly in the console (SSH). It is the same command; I just removed the daemonize option.
  5.

    Newbie questions about ZIL/SLOG and L2ARC

    You don't need to limit the ZIL in size. I just gave you a calculation of the recommended ZIL size; it can be larger. How does the ZIL work? When ZFS gets a sync write request, ZFS puts the data into the ZIL and into the write cache (RAM). After the data is flushed from the cache to the pool, ZFS marks it in the ZIL as successful...
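    In short, the write path described here: sync write -> ZIL (+ write cache in RAM) -> flush to pool every ~5 s -> ZIL entry retired. The ZIL is only read back after a crash, to replay writes that were acknowledged but not yet flushed to the pool.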
  6.

    VMs won't start most of the time

    In the console: # qm showcmd ID. Then copy the result and try to start the VM from the console for debugging.
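    For example, with a hypothetical VM ID of 100 (the real output is one long KVM command line, shortened here):
    # qm showcmd 100
    /usr/bin/kvm -id 100 -name myvm ... -daemonize
    Run the copied command with -daemonize removed and KVM stays in the foreground, printing its actual error instead of Proxmox's generic timeout.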
  7.

    Newbie questions about ZIL/SLOG and L2ARC

    The ZIL is used for sync writes before the data is flushed to the pool. A data flush happens every ~5 seconds. If your system doesn't do a lot of sync writes, then you don't need a huge ZIL. How big must the ZIL be? Calculate it like this: ZIL_MAX_WRITE_SPEED * FLUSH_TIME. What happens when the ZFS pool works at max load? ZFS...
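    A quick worked example of that formula, with an assumed max sync write speed of 1 GB/s: ZIL_MAX_WRITE_SPEED * FLUSH_TIME = 1 GB/s * 5 s = 5 GB, so a SLOG of roughly 8-16 GB already leaves a comfortable margin.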
  8.

    quick question about ZFS disks in a pool?

    OMG: sda2 - 512M, sdb2 - 512M, sdc - 931.5G, sdd - 931.5G. Is the pool size really like this?
  9.

    Undelete data from ZFS pool

    Bad news: ZFS doesn't have undelete.
  10.

    Speed up zfs list -t all

    The speed depends on your pool configuration.
    # zfs list -t all | wc -l
    26496
    # time zfs list -t all
    real 0m13.233s
    user 0m1.281s
    sys 0m11.137s
    The ZFS pool is 2 x raidz2 of 6 HDDs each (12 in total); before that it was a mirror of 2 disks and a raidz of 3 disks. Both were slow.
  11.

    CVE-2019-11815 Kernel bug. Is it fixed?

    Don't know why, but this bug https://www.cvedetails.com/cve/CVE-2019-11815/ is serious in my opinion. The fix is small and already merged upstream https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=cb66ddd156203daefb8d71158036b27b0e2caf63 but the Linux distribution community...
  12.

    drop_caches doesn't finish on pveperf

    There are discussions about ZFS and drop_caches; google it.
  13.

    Proxmox ZFS & LUKS High I/O Wait

    LUKS depends on the CPU. ZFS depends on the speed of the slowest HDD/SSD device. As for RaidZ2, you need 6 disks, otherwise you will get allocation overhead. https://forum.proxmox.com/threads/slow-io-and-high-io-waits.37422/#post-184974 If you see no IO penalty inside the VM, then don't worry too much.
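    A quick way to check whether the CPU (LUKS) side is the bottleneck, not mentioned in the post itself:
    # cryptsetup benchmark
    It prints the raw encryption/decryption throughput per cipher; compare that with the sequential speed of your slowest disk.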
  14.

    drop_caches doesn't finish on pveperf

    It happened to me too. Only a reboot (maybe a hard reboot) kills the process.
  15.

    ZFS Device Fault - Advice on moving device to new Port

    ZFS doesn't care about port positions. An error 'may' happen because of the pool cache data file, but I don't think it will. In that case, import the pool with # zpool import -d /dev/disk/by-id/ pool_name and it's done.
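    The usual full sequence for an already-imported pool (pool_name is a placeholder):
    # zpool export pool_name
    # zpool import -d /dev/disk/by-id/ pool_name
    After this, zpool status shows stable by-id names instead of sdX, so moving a disk to another port changes nothing.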
  16.

    ZFS - Restores

    Immediately. If you set it on Local-ZFS, it will affect Local-ZFS/vm-100-disk-1 and so on. And you can set it individually for each sub-filesystem.
  17.

    ZFS - Restores

    # zfs set sync=disabled pool/name
  18.

    ZFS - Restores

    I suggest you set sync=disabled to avoid double writes. A single disk is a single disk; ZFS doesn't have per-process IO priority. What you have to know: 1. The data goes like this: Program -> ZFS write cache (not ZIL) -> disk. 2. ZFS flushes data from the write cache to disk every ~5 sec. 3. Then the write cache...
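    A minimal sketch of the suggestion across these three posts (pool/name is a placeholder):
    # zfs set sync=disabled pool/name
    # zfs get -r sync pool/name
    The first command makes ZFS acknowledge sync writes straight from the RAM write cache and skip the ZIL; the second verifies that child datasets (vm-100-disk-1 and so on) inherit the setting. The trade-off: on a power loss, up to ~5 seconds of already-acknowledged "sync" writes can be lost.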