Recent content by javildn

  1. upgrade zfs-0.7.0

    I have tested in both directions. I didn't run "zpool upgrade" on the host updated to 0.7.1; maybe that is why it works.
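    For context, a minimal way to check whether "zpool upgrade" was run is to look at the pool's feature flags (rpool is just my pool name; yours may differ):

        # features still shown as "disabled" mean the pool format was not upgraded
        zpool get all rpool | grep feature@
        # with no arguments, this only lists pools that could still be upgraded
        zpool upgrade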
  2. upgrade zfs-0.7.0

    Strange... I have been able to send and receive datasets between hosts running 0.6.5 and 0.7.1 without errors.
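    A minimal sketch of the kind of transfer I tested (the hostname and dataset names are placeholders, not my real ones):

        # snapshot on the sending host, then stream it to the other host
        zfs snapshot rpool/data/vm-100-disk-1@migrate
        zfs send rpool/data/vm-100-disk-1@migrate | ssh root@otherhost zfs receive rpool/data/vm-100-disk-1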
  3. Poor performance on ZFS compared to LVM (same hardware)

    Finally, the only way to get good performance (similar to LVM) was to upgrade to ZFS 0.7.1 following these instructions: https://forum.proxmox.com/threads/upgrade-zfs-0-7-0.35943/page-2#post-180792 I know this is an unsupported method, so I hope ZFS 0.7.1 will be fully supported soon.
  4. upgrade zfs-0.7.0

    I have tested your script in a test box and it worked like a charm. Performance with ZFS 0.7.1 has increased a lot. I hope it will be added to the pve-test repository soon.
  5. Poor performance on ZFS compared to LVM (same hardware)

    Random write is a lot faster with a 4k volblocksize, but random read is slower than it was with 8k... :(
  6. Poor performance on ZFS compared to LVM (same hardware)

    So, all my tests conclude that it is much better to use a 4k volblocksize. Is it safe, or could it cause other problems?
  7. Poor performance on ZFS compared to LVM (same hardware)

    I have created a new zvol with volblocksize=4k, assigned it to the VM, cloned CentOS to the new disk with dd, and rebooted the VM from the new disk. The performance increased dramatically! Now I get the expected IOPS. Maybe it would be a good idea to be able to change volblocksize when creating a new...
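    Roughly the steps, as a sketch (the dataset names and the 32G size are illustrative; adjust them to your setup):

        # create the new zvol with 4k blocks instead of the 8k default
        zfs create -V 32G -o volblocksize=4k rpool/data/vm-102-disk-2
        # with the VM stopped, raw-copy the old disk onto the new zvol
        dd if=/dev/zvol/rpool/data/vm-102-disk-1 of=/dev/zvol/rpool/data/vm-102-disk-2 bs=1M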
  8. Poor performance on ZFS compared to LVM (same hardware)

    Hi! I understand your point; however, I made a new test: I created a new VM and installed CentOS 7 with ext4 and without LVM. If I run the fio test inside the VM, the results are the same, ~5000 IOPS. However, if I stop the VM and mount the zvol directly on the host (mount /dev/mapper/vm-102-disk-1p2...
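    To reproduce the host-side mount, something like this should work (I am assuming kpartx here, and the zvol path follows my vm-102 disk):

        # map the partitions inside the zvol to /dev/mapper entries
        kpartx -av /dev/zvol/rpool/data/vm-102-disk-1
        # mount the second partition (the ext4 root) somewhere on the host
        mkdir -p /mnt/test
        mount /dev/mapper/vm-102-disk-1p2 /mnt/test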
  9. Poor performance on ZFS compared to LVM (same hardware)

    Running the fio test on a zvol directly on the host gets higher values, about 2x-3x faster! Not as fast as LVM, but I think it will be enough. Why can't the VM get these IOPS? I noticed that while running the fio test directly on the host, iowait rises up to 50%, but when running fio in the VM, iowait on...
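    The test was along these lines; the job parameters here are my best guess at a comparable run, not a copy of my exact job file:

        # 4k random writes, 8 jobs, O_DIRECT and synchronous, 1G per job
        fio --name=randfile --directory=/mnt/test --rw=randwrite --bs=4k \
            --size=1G --numjobs=8 --direct=1 --sync=1 --group_reporting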
  10. Poor performance on ZFS compared to LVM (same hardware)

    I have tried it just now; fio results inside the VM are very similar with sync disabled: 4900-5000 IOPS.

        # zfs get all | grep sync
        rpool       sync  disabled  local
        rpool/ROOT  sync  disabled  inherited from...
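    For anyone following along, disabling sync pool-wide looks like this (child datasets inherit it; note it trades data safety on power loss for speed):

        # disable synchronous writes for the whole pool
        zfs set sync=disabled rpool
        # verify the property on the pool and all child datasets
        zfs get -r sync rpool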
  11. Poor performance on ZFS compared to LVM (same hardware)

    Similar result, 4590 IOPS :(

        [root@localhost ~]# mount | grep root
        /dev/mapper/cl-root on / type ext4 (rw,relatime,data=ordered)

        Fio:
        randfile: (groupid=0, jobs=8): err= 0: pid=10228: Mon Sep 4 09:54:14 2017
          write: io=8192.0MB, bw=18364KB/s, iops=4590, runt=456801msec
            slat (usec)...
  12. Poor performance on ZFS compared to LVM (same hardware)

    Both setups are similar; the only difference is that one server is ZFS-backed and the other one is LVM.
    1] According to Rhinox, ZFS raid1 would have the same write performance as a single drive.
    2] I know; this is what I am testing. I expected some write penalty with ZFS, but my benchmarks show 4x...