I just want to add that I needed to upgrade to 4.13.16-2-pve for the 0.7.7 fix to work. I am now on:

Code:
root@vmx02:~# uname -a
Linux vmx02 4.13.16-2-pve #1 SMP PVE 4.13.16-47 (Mon, 9 Apr 2018 09:58:12 +0200) x86_64 GNU/Linux
root@vmx02:~# apt-cache policy zfs-initramfs
zfs-initramfs:
  Installed: 0.7.7-pve1~bpo9
  Candidate: 0.7.7-pve1~bpo9
  Version table:
 *** 0.7.7-pve1~bpo9 500
        500 http://download.proxmox.com/debian/pve stretch/pve-no-subscription amd64 Packages
        100 /var/lib/dpkg/status

and the issue with z_null_int seems to be gone. Now I can start doing performance testing. Not sure, though, if it was the kernel, ZFS, or both.
> I just want to add that I needed to upgrade to 4.13.16-2-pve for the 0.7.7 fix to work.

The ZoL packages only contain the user-space part - the kernel modules (which do the bulk of the actual work) are in the kernel.
A pure ZoL upgrade didn't work until after upgrading the kernel to the latest version. Even the relatively new 4.13.13-6-pve kernel didn't work.

Thanks to the Proxmox team for delivering the fix.
> You should probably document this somewhere. I believe the "normal" ZFS implementation relies on the DKMS mechanism, but that is not the case with Proxmox.
>
> In particular, in my case I was very confused when "dmesg | grep ZFS" returned 0.7.6 while 0.7.7 was actually installed. Now I understand the reason.

DKMS vs. precompiled is not a question of "implementation", but of packaging. Ubuntu also ships the modules pre-compiled, upstream offers both variants (for CentOS), and the BSDs ship them pre-compiled (if they have them). If anything, pre-compiled seems more like the standard nowadays. For DKMS you would also need to check the actually loaded module and not the package version, so in the end you need to use "modinfo" anyhow.
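To make the package-vs-module distinction above concrete, here is a minimal sketch of such a version check. On a real system the two values would come from `dpkg-query` and `/sys/module/zfs/version`; since that requires ZFS to be installed and loaded, hard-coded sample values (matching the mismatch described in this thread) are used so the comparison logic itself is visible:

```shell
# On a real system you would read the versions like this (commented out
# because it requires the ZFS packages and a loaded zfs module):
#   pkg_ver="$(dpkg-query -W -f='${Version}' zfs-initramfs)"
#   mod_ver="$(cat /sys/module/zfs/version)"
# Sample values reproducing the situation described in the thread:
pkg_ver="0.7.7-pve1~bpo9"
mod_ver="0.7.6-1"
# Compare only the upstream version, stripping packaging suffixes after '-'.
if [ "${pkg_ver%%-*}" = "${mod_ver%%-*}" ]; then
  echo "OK: userspace ${pkg_ver} matches loaded module ${mod_ver}"
else
  echo "MISMATCH: userspace ${pkg_ver} vs loaded module ${mod_ver} - reboot into the new kernel"
fi
```

Since Proxmox ships the module inside the kernel package rather than via DKMS, the loaded module version only changes after booting the new kernel - which is why "dmesg | grep ZFS" can keep reporting 0.7.6 after the 0.7.7 packages are installed.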