Search results

  1. Undelete data from ZFS pool

    Bad news: ZFS doesn't have undelete.
  2. Speed up zfs list -t all

    The speed depends on your pool configuration.
    # zfs list -t all | wc -l
    26496
    # time zfs list -t all
    real 0m13.233s
    user 0m1.281s
    sys 0m11.137s
    The ZFS pool is 2 x raidz2 of 6 HDDs (12 in total); before that it was a mirror of 2 disks and a raidz of 3 disks. Both were slow.
  3. CVE-2019-11815 Kernel bug. Is it fixed?

    I don't know why, but this bug https://www.cvedetails.com/cve/CVE-2019-11815/ is serious in my opinion. The fix is small and already merged https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=cb66ddd156203daefb8d71158036b27b0e2caf63 but the Linux distribution community...
  4. drop_caches doesn't finish on pveperf

    There are discussions about ZFS and drop_caches; search Google for them.
  5. Proxmox ZFS & LUKS High I/O Wait

    LUKS performance depends on the CPU. ZFS performance depends on the speed of the slowest HDD/SSD device. For raidz2 you need 6 disks, otherwise you get allocation overhead. https://forum.proxmox.com/threads/slow-io-and-high-io-waits.37422/#post-184974 If you see no I/O penalty inside the VM, then don't worry too much.
  6. drop_caches doesn't finish on pveperf

    It happened to me too. Only a reboot (maybe a hard reboot) kills the process.
  7. ZFS Device Fault - Advice on moving device to new Port

    ZFS doesn't care about port positions. You 'may' get an error from stale pool cache file data, but I don't think it will happen. In that case, import the pool with # zpool import -d /dev/disk/by-id/ pool_name and it's done.
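
    For illustration, a minimal sketch of the move, assuming a hypothetical pool named tank:
    # zpool export tank
    (move the disk to the new port, then re-import using stable by-id paths)
    # zpool import -d /dev/disk/by-id/ tank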
  8. ZFS - Restores

    Immediately. If you set it on Local-ZFS, it will affect Local-ZFS/vm-100-disk-1 and so on. And you can set it individually for each sub-filesystem.
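
    A short sketch of that inheritance, with hypothetical dataset names:
    # zfs set sync=disabled rpool/data
    (children such as rpool/data/vm-100-disk-1 inherit the value)
    # zfs set sync=standard rpool/data/vm-100-disk-1
    (overrides the setting for one sub-filesystem)
    # zfs get -r sync rpool/data
    (the SOURCE column shows which values are local and which are inherited)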
  9. ZFS - Restores

    # zfs set sync=disabled pool/name
  10. ZFS - Restores

    I suggest you set sync=disabled to avoid double writes. A single disk is a single disk. ZFS doesn't have I/O process priority. What you have to know:
    1. The data goes like this: Program -> ZFS write cache (not ZIL) -> disk
    2. ZFS flushes data from the write cache to disk every ~5 sec (see the note below)
    3. Then the write cache...
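
    The ~5 sec interval in point 2 is the OpenZFS transaction-group timeout; assuming ZFS on Linux, it can be inspected as a module parameter:
    # cat /sys/module/zfs/parameters/zfs_txg_timeout
    5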
  11. ZFS - Restores

    # zfs get all rpool
  12. ZFS - Restores

    What is your ZFS pool configuration? Under heavy load (depending on how slow the setup is), the pool can become very unresponsive. For SSH, use another pool for the server OS to avoid I/O waits.
  13. ZFS worth it? Tuning tips?

    I think a pool and L2ARC ashift mismatch can lead to poor performance.
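
    As a check, the ashift recorded for each vdev can be read from the pool configuration, and set explicitly at creation time; the pool and device names here are hypothetical:
    # zdb -C tank | grep ashift
    # zpool create -o ashift=12 tank raidz2 sdb sdc sdd sde sdf sdg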
  14. qm agent timeout

    Executing the qm agent fstrim command, I get a timeout very quickly. I think the agent request should wait longer for a response.
    # qm agent 102 fstrim
    VM 102 qmp command 'guest-fstrim' failed - got timeout
    # qm agent 102 fstrim
    VM 102 qmp command 'guest-fstrim' failed - got timeout
    # qm agent 102 fstrim
    VM 102...
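
    A possible workaround while the agent call keeps timing out is to run the trim inside the guest itself; this assumes a Linux guest with util-linux installed:
    # fstrim -av
    (trims all mounted filesystems that support discard, with verbose output)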
  15. Steps for moving VMs from 3.x to 5.x

    1. Manually upload the VM disks to the new machine (see the sketch below).
    2. Upload the config files to the new machine.
    3. Edit the config if something needs changing.
    4. Boot the VM.
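
    A minimal sketch of steps 1-2 for a single VM, assuming VMID 100, default local directory storage, a qcow2 disk, and SSH access to a host named newhost (all hypothetical):
    # scp /var/lib/vz/images/100/vm-100-disk-1.qcow2 root@newhost:/var/lib/vz/images/100/
    # scp /etc/pve/qemu-server/100.conf root@newhost:/etc/pve/qemu-server/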
  16. my NVMEs suck

    For ZFS, don't forget that sync writes are written to the ZFS LOG device. If you don't have an external ZFS LOG device, the pool itself acts as the LOG device too. That means double writes. If sync is important and you want good write performance, add a single good SATA/NVMe SSD as the ZFS pool LOG device.
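
    Attaching a dedicated LOG device is a single command; the pool name and device path here are hypothetical:
    # zpool add tank log /dev/disk/by-id/nvme-example-ssd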
  17. I/O priority on ZFS - is this possible?

    As far as I know, ZFS is missing I/O priority. It works like FIFO. Like a Win98 system :-)
