Search results

  1.

    ZFS, extremely slow writes, soft lockups

    Now I have two SSDs, one dedicated to L2ARC and one to ZIL, and it works perfectly! Thanks.
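    The setup described in this snippet (one SSD for L2ARC, one for the ZIL/SLOG) is typically attached with `zpool add`; a minimal sketch, assuming a pool named `zfs-pool` and hypothetical device paths, since neither appears in the post:

    ```shell
    # Hypothetical pool name and device paths -- substitute your own.
    # Dedicate one SSD as the separate intent log (SLOG) for sync writes:
    zpool add zfs-pool log /dev/disk/by-id/ata-SSD1-part1
    # Dedicate the other SSD as an L2ARC read cache:
    zpool add zfs-pool cache /dev/disk/by-id/ata-SSD2-part1
    # Verify: the "logs" and "cache" sections should list the new devices.
    zpool status zfs-pool
    ```

    Both operations can be done while the pool is online; `zpool remove` can later detach either device if needed.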
  2.

    ZFS, extremely slow writes, soft lockups

    Could the reason for the issues I am having be a "semi-faulty" SSD currently used for the ZIL?
  3.

    ZFS, extremely slow writes, soft lockups

    there are two guests with quite a number of writes (some webservers and data constantly coming in).
  4.

    ZFS, extremely slow writes, soft lockups

    Thank you very much! So basically the SSD I currently use for ZIL and swap will remain for swap purposes only? Does that make sense, given that it is a good-quality Intel SSD?
  5.

    ZFS, extremely slow writes, soft lockups

    Hello, thank you! Would you recommend adding an additional drive and dedicating it as a ZIL-only drive? I would use an SSD; what size would be best in my setup? Can I add a ZIL while the system is running? It is interesting that this setup has been running for about a year and a...
  6.

    ZFS, extremely slow writes, soft lockups

    Hello, the outputs are:
    cat /proc/spl/kstat/zfs/arcstats
    name                type  data
    hits                4     24667005
    misses              4     4694563
    demand_data_hits    4     20857923
    demand_data_misses  4     755489
    ...
  7.

    ZFS, extremely slow writes, soft lockups

    Hi, I have a server with 4.4-22 installed (updated since I read somewhere that it should help, but it doesn't), and writes in the guests are sometimes extremely slow. Sometimes it works, but most of the time when a larger file (50 MB+) is written, the guest goes into soft...
  8.

    Noise with USB Audio DAC in Windows VM

    Sorry for "waking up" an old thread, but I have exactly the same issue with a USB sound card and background noise along with the sound. jpbaril, can you please explain the steps you took in more detail? Thanks, Mat
  9.

    Copy to ZFS, fstrim

    Problem with high disk usage solved by changing the ZFS block size to 16k (from the 8k default) by setting "blocksize 16k" in /etc/pve/storage.cfg. This should be in some FAQ, or at least a notice about blocksize should be displayed when a raidzX pool is created ...
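    The fix mentioned in this snippet corresponds to a stanza in /etc/pve/storage.cfg roughly like the following sketch; the storage/pool name `zfs-pool` and the other option values are assumptions, only the `blocksize 16k` line comes from the post:

    ```
    zfspool: zfs-pool
            pool zfs-pool
            content images,rootdir
            blocksize 16k
            sparse 1
    ```

    Note that `blocksize` only affects newly created zvols; existing VM disks keep the `volblocksize` they were created with.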
  10.

    Copy to ZFS, fstrim

    I suppose it is; I selected thin provisioning when adding it to Proxmox. This is what is set: for example, there is a VM with a 96 GB disk (bootdisk size in Proxmox), but when doing zfs list, I get
    NAME      USED  AVAIL  REFER  MOUNTPOINT
    zfs-pool  363G  252G   33.4G  /zfs-pool
    ...
  11.

    Copy to ZFS, fstrim

    Hi all, I have two issues: 1. When I do a command like dd if=rawfile | pv | dd of=/dev/zfs-pool/vm-disks/vm-109-disk-1 to import a 96 GB disk to Proxmox (the disk is created with 96 GB size), the occupied ZFS space increases by about 188 GB. Is this normal behaviour? I have tried on two...
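    The roughly 2x inflation described above (96 GB written, ~188 GB allocated) is consistent with RAIDZ parity-and-padding overhead at a small volblocksize, which the 16k blocksize change in the earlier thread addresses. A minimal sketch of the allocation arithmetic, assuming a hypothetical 4-disk raidz1 with ashift=12 (4 KiB sectors), since the actual pool layout is not shown in the thread:

    ```python
    import math

    def raidz_alloc_factor(volblocksize: int, ndisks: int, parity: int,
                           ashift: int = 12) -> float:
        """Approximate allocated/logical ratio for one zvol block on RAIDZ.

        Follows the RAIDZ allocation rules: parity sectors are added per
        stripe row, then the total is padded up to a multiple of (parity + 1).
        """
        sector = 1 << ashift
        data = math.ceil(volblocksize / sector)     # data sectors per block
        rows = math.ceil(data / (ndisks - parity))  # stripe rows needed
        total = data + rows * parity                # add parity sectors
        total += -total % (parity + 1)              # pad to multiple of p+1
        return total * sector / volblocksize

    # 8 KiB volblocksize on a 4-disk raidz1: each 8 KiB block occupies
    # 16 KiB on disk, so a 96 GB image allocates roughly 192 GB.
    print(raidz_alloc_factor(8 * 1024, ndisks=4, parity=1))   # -> 2.0
    # A 16 KiB volblocksize cuts the overhead to 1.5x.
    print(raidz_alloc_factor(16 * 1024, ndisks=4, parity=1))  # -> 1.5
    ```

    Under these assumptions, 96 GB x 2.0 is about 192 GB allocated, close to the ~188 GB observed; larger volblocksize amortizes the fixed parity and padding cost over more data sectors.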