[SOLVED] [z_null_int] with 99.99% IO load after 5.1 upgrade

Discussion in 'Proxmox VE: Installation and configuration' started by JohnD, Nov 16, 2017.

  1. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,390
    Likes Received:
    523
    available on pvetest
     
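For anyone wanting to try the fix early: the pvetest repository fabian refers to would be enabled with an APT entry along these lines (a sketch for PVE 5.x on Debian stretch; the file name is illustrative, so double-check against the official repository documentation):

```
# /etc/apt/sources.list.d/pvetest.list
deb http://download.proxmox.com/debian/pve stretch pvetest
```

followed by `apt-get update && apt-get dist-upgrade` to pull in the newer packages.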
  2. JohnD

    JohnD Member

    Joined:
    Oct 7, 2012
    Messages:
    42
    Likes Received:
    2
    Thank you. Just updated one of my servers, but I still have the exact same issue.
     
  3. mac.linux.free

    Joined:
    Jan 29, 2017
    Messages:
    106
    Likes Received:
    5
  4. Jean-François Dagenais

    Joined:
    Mar 2, 2016
    Messages:
    17
    Likes Received:
    3
    +1, really sad. We were upgrading Proxmox to get some of the performance improvements of 0.7.x; instead, things are slightly worse. How did this go unnoticed during testing? Could it have anything to do with compression=lz4 or having SSD cache disks?
     
  5. masterdaweb

    masterdaweb Member

    Joined:
    Apr 17, 2017
    Messages:
    78
    Likes Received:
    3
    I'm having this issue too.
     
  6. littlecake

    littlecake New Member

    Joined:
    Jun 2, 2013
    Messages:
    2
    Likes Received:
    0
    Does this mean that Proxmox is gone?
     
  7. Jean-François Dagenais

    Joined:
    Mar 2, 2016
    Messages:
    17
    Likes Received:
    3
    We are starting to consider moving away from Proxmox... sadly.
     
  8. mac.linux.free

    Joined:
    Jan 29, 2017
    Messages:
    106
    Likes Received:
    5
    We stay!
     
    markusd likes this.
  9. littlecake

    littlecake New Member

    Joined:
    Jun 2, 2013
    Messages:
    2
    Likes Received:
    0
    I changed the repo to pvetest. It has ZFS 0.7.7 and the problem seems to be gone. I am now waiting to see whether the backups will be good, but it looks good so far... no z_null_int anymore.
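A quick way to confirm the symptom is gone is to look for the z_null_int kernel thread in the process list. The sketch below filters a hypothetical `ps -eo pcpu,comm` sample (the values are made up for illustration; on a real host, pipe the actual `ps` output instead):

```shell
# Hypothetical sample of `ps -eo pcpu,comm` output from an affected host;
# replace this with the real command on your Proxmox node.
sample=' 0.0 systemd
99.9 z_null_int
 1.2 kvm'

# Flag the host as affected if z_null_int shows sustained high CPU.
printf '%s\n' "$sample" | awk '$2 == "z_null_int" && $1 > 50 {print "affected"}'
```

After the 0.7.7 upgrade, the z_null_int line should either disappear or sit near 0% CPU.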
     
  10. Phinitris

    Phinitris Member

    Joined:
    Jun 1, 2014
    Messages:
    83
    Likes Received:
    11
    @littlecake Can you please share your results with the backups? Is the problem really gone after the upgrade to ZFS 0.7.7?
    Thanks.
     
  11. apollo13

    apollo13 New Member

    Joined:
    Jan 16, 2018
    Messages:
    18
    Likes Received:
    1
    I am now on:
    Code:
    root@vmx02:~# uname -a
    Linux vmx02 4.13.16-2-pve #1 SMP PVE 4.13.16-47 (Mon, 9 Apr 2018 09:58:12 +0200) x86_64 GNU/Linux
    root@vmx02:~# apt-cache policy zfs-initramfs
    zfs-initramfs:
      Installed: 0.7.7-pve1~bpo9
      Candidate: 0.7.7-pve1~bpo9
      Version table:
     *** 0.7.7-pve1~bpo9 500
            500 http://download.proxmox.com/debian/pve stretch/pve-no-subscription amd64 Packages
            100 /var/lib/dpkg/status
    
    and the issue with z_null_int seems to be gone. Now I can start doing performance testing :) Not sure, though, whether it was the kernel, ZFS, or both.
     
  12. JohnD

    JohnD Member

    Joined:
    Oct 7, 2012
    Messages:
    42
    Likes Received:
    2
    Issue is solved for me too. Great :)
     
  13. Roman Shein

    Roman Shein New Member

    Joined:
    Sep 11, 2017
    Messages:
    7
    Likes Received:
    0
    I just want to add that I needed to upgrade to the 4.13.16-2-pve kernel for the 0.7.7 fix to take effect.
    A pure ZoL package upgrade didn't work until after upgrading the kernel to the latest version; even the relatively new 4.13.13-6-pve kernel didn't work.
    Thanks, Proxmox team, for delivering the fix.
     
  14. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,390
    Likes Received:
    523
    The ZoL packages only contain the user-space part; the kernel modules (which do the bulk of the actual work) ship with the kernel ;)
     
  15. Roman Shein

    Roman Shein New Member

    Joined:
    Sep 11, 2017
    Messages:
    7
    Likes Received:
    0
    You should probably document this somewhere. I believe the "normal" ZFS implementation relies on the DKMS mechanism, but that is not the case with Proxmox.
    In my case in particular, I was very confused when "dmesg | grep ZFS" returned 0.7.6 while 0.7.7 was actually installed. Now I understand the reason.
     
  16. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,390
    Likes Received:
    523
    DKMS vs. precompiled is not a question of "implementation", but of packaging. Ubuntu also ships the modules pre-compiled, upstream offers both variants (for CentOS), and the BSDs ship them pre-compiled (if they have them). If anything, pre-compiled seems more like the standard nowadays. With DKMS you would also need to check the actually loaded module and not the package version, so in the end you need to use "modinfo" anyhow.
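The modinfo check fabian mentions can be sketched like this. The version strings below are hypothetical stand-ins reproducing Roman's mismatch; on a real host you would use the commented-out commands instead:

```shell
# On a real PVE host you would obtain these with:
#   mod=$(modinfo -F version zfs)                       # kernel module version
#   pkg=$(dpkg-query -W -f='${Version}' zfsutils-linux) # userspace package
# Hypothetical values illustrating the module/package mismatch:
mod="0.7.6-1"
pkg="0.7.7-pve1~bpo9"

# Compare only the upstream version, i.e. the part before the first '-' or '~'.
if [ "${mod%%-*}" = "${pkg%%[-~]*}" ]; then
    echo "module and package match"
else
    echo "mismatch: module $mod vs package $pkg (boot the newer kernel?)"
fi
```

A mismatch like the one above is exactly what a kernel upgrade without a reboot (or an outdated kernel, as in Roman's case) would produce.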
     