[SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

Thank you. Just updated one of my servers but I still have the exact same issue.
 
+1, really sad. We were upgrading Proxmox to get some of the 0.7.x performance improvements; instead, things are slightly worse. How did this go unnoticed during testing? Could it have anything to do with compression=lz4 or having SSD cache disks?
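In case it helps anyone confirm the symptom: a quick way to check whether the z_null_int kernel thread is really the one eating all the IO is something like the following (just a sketch; it assumes iotop is installed, e.g. via apt install iotop):
Code:
# single batch sample, only showing threads currently doing IO
iotop -b -o -n 1 | grep z_null_int
# or look at the kernel thread's state directly
ps -eLo pid,stat,wchan:32,comm | grep z_null_int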
 
I am now on:
Code:
root@vmx02:~# uname -a
Linux vmx02 4.13.16-2-pve #1 SMP PVE 4.13.16-47 (Mon, 9 Apr 2018 09:58:12 +0200) x86_64 GNU/Linux
root@vmx02:~# apt-cache policy zfs-initramfs
zfs-initramfs:
  Installed: 0.7.7-pve1~bpo9
  Candidate: 0.7.7-pve1~bpo9
  Version table:
 *** 0.7.7-pve1~bpo9 500
        500 http://download.proxmox.com/debian/pve stretch/pve-no-subscription amd64 Packages
        100 /var/lib/dpkg/status
and the issue with z_null_int seems to be gone. Now I can start doing performance testing :) Not sure, though, whether it was the kernel, ZFS, or both.
 

I just want to add that I needed to upgrade to 4.13.16-2-pve for the 0.7.7 fix to work.
A pure ZoL upgrade didn't work until after the kernel upgrade to the latest; even the relatively new 4.13.13-6-pve kernel didn't work.
Thanks, Proxmox team, for delivering the fix.
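For anyone else hitting this, the rough sequence is just the usual apt flow (a sketch, nothing Proxmox-specific; the reboot is what actually loads the new modules):
Code:
apt update
apt dist-upgrade   # should pull in the new pve-kernel and the 0.7.7 ZFS user-space packages
reboot
uname -r           # verify you actually booted the new kernel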
 

The ZoL packages only contain the user-space part - the kernel modules (which do the bulk of the actual work) are in the kernel ;)
 
You should probably document this somewhere. I believe the "normal" ZFS implementation relies on the DKMS mechanism, but that is not the case with Proxmox.
In my case in particular, I was very confused when "dmesg | grep ZFS" reported 0.7.6 while 0.7.7 was actually installed. Now I understand the reason.
 

DKMS vs. precompiled is not a question of "implementation", but of packaging. Ubuntu also ships the modules pre-compiled, upstream offers both variants (for CentOS), and the BSDs ship them pre-compiled (where they have them at all). If anything, pre-compiled seems more like the standard nowadays. For DKMS you would also need to check the actually loaded module rather than the package version, so in the end you need to use "modinfo" anyhow.
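For reference, checking the module rather than the package looks roughly like this (sketch only; exact output will of course differ per system):
Code:
# version of the zfs module built for the running kernel (may differ from the loaded one until reboot)
modinfo zfs | grep -iw version
# version of the module that is actually loaded right now
cat /sys/module/zfs/version
# compare with the user-space package
apt-cache policy zfs-initramfs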
 
