Thank you for the clarification!
I was worried about data corruption on my affected hosts, but since they were only storing data and not running any other applications at the time, they should probably be fine.
I was able to observe the bug with "zfs send". A simple "dd if=/dev/zero of=testfile bs=1M"...
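A quick way to look for silent corruption in such a test file is to compare checksums across rereads. This is a minimal sketch, not the exact procedure from this thread; file name and size are illustrative:

```shell
# Write 64 MiB of zeros, then checksum the file twice;
# differing sums between reads would indicate corruption.
dd if=/dev/zero of=testfile bs=1M count=64 status=none
sum1=$(sha256sum testfile | awk '{print $1}')
sum2=$(sha256sum testfile | awk '{print $1}')
[ "$sum1" = "$sum2" ] && echo "checksums match" || echo "MISMATCH"
rm -f testfile
```

Dropping caches between the two reads (echo 3 > /proc/sys/vm/drop_caches, as root) forces the second read to come from disk rather than the page cache.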
Well... it seems like a kernel update to pve-kernel-5.0.21-2-pve / 5.0.21-6 fixes the issue for now... but this kernel was only released yesterday or Monday? A host I freshly installed on Sunday still got the faulty 5.0.21-3. So this bug was out in the wild for several weeks? Could this have led...
Hello,
PVE 6 currently ships the faulty ZFS SIMD patch for 5.0+ kernels which is known to cause FPU corruption.
See this issue: https://github.com/zfsonlinux/zfs/issues/9346
It was cherry-picked here...
The problem I reported in post #84 kind of returned with kernel 4.15.18-21-pve. Booting back into 4.15.18-20-pve makes the network functional again. Reloading the igb module does not help anymore.
https://forum.proxmox.com/threads/4-15-based-test-kernel-for-pve-5-x-available.42097/post-208861
You are right, I linked the wrong issue.
Unprivileged / user namespaces were discussed here: https://github.com/lxc/lxc/pull/2009#issuecomment-350181714
Setting the values on the host does not work anymore as these settings are no longer passed down into the CTs.
A privileged CT is what I...
Hello!
I tried to set up GitLab in an unprivileged LXC.
As already discussed in other threads, GitLab wants some sysctls set to specific values.
To be exact:
kernel.shmall = 4194304
kernel.sem = 250 32000 32 262
net.core.somaxconn = 1024
kernel.shmmax = 17179869184
In older versions of PVE...
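For reference, the values quoted above would normally go into a sysctl drop-in on the host or inside a privileged CT. A minimal sketch; the file name is illustrative:

```
# /etc/sysctl.d/90-gitlab.conf (illustrative name)
kernel.shmall = 4194304
kernel.shmmax = 17179869184
kernel.sem = 250 32000 32 262
net.core.somaxconn = 1024
```

Running `sysctl --system` afterwards reloads all drop-ins. In an unprivileged CT these keys are not writable from inside the container, which matches the behaviour described in this thread.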
The 4.15.17-3 test kernel with the Intel out-of-tree drivers fixes the problem I have with my HP DL160 G6. The network comes up normally again after booting.
My HP DL160 G6 has problems getting its network up after updating to the 4.15.17-1-pve kernel.
The following lines are visible in dmesg regarding the network driver igb:
[ 1.311414] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.4.0-k
[ 1.311415] igb: Copyright (c) 2007-2014...
Upgrading to 0.7.3 fixes the issue, although it is kind of a hack in OMV. Fortunately, as of ZoL 0.7.3, DKMS-style packages for Debian are available in the build process. I think this issue can be marked as resolved.
Wasn't PR #6616 supposed to patch this on the ZFS 0.7.x side without modifying the 0.6.5.x branch?
See these commits:
https://github.com/zfsonlinux/zfs/commit/48fbb9ddbf2281911560dfbc2821aa8b74127315
https://github.com/zfsonlinux/zfs/commit/829e95c4dc74d7d6d31d01af9c39e03752499b15
They fix...
Currently I'm using PVE 5.1 on my host, which comes with ZoL 0.7.2. My backup server runs OMV3, which is based on Debian 8 Jessie and comes with ZoL 0.6.5.9. To back up my datasets I'm sending snapshots from my PVE host to OMV3.
Recently I've created a new dataset on ZoL 0.7.2, but I...
Yeah, I think ZFS 0.7.2 changed something; with 0.6.5.9 the events were still generated. I think we can mark this issue as resolved, ZED is working as it should.
Shouldn't the events also be visible when executing 'zpool events'? They are missing there too. Also, I'm expecting 'ereport.fs.zfs.io' errors, not 'ereport.fs.zfs.data' errors. The affected disks can go to sleep, as they are not directly used as storage for PVE. They contain datasets which are...
Weird... I received a scrub-finished mail today, so maybe I'm wrong and ZED is working. However, I know I should be receiving a lot more mails, because I'm affected by this bug: https://github.com/zfsonlinux/zfs/issues/4713. The corresponding "ereport.fs.zfs.io" events are not generated anymore, though...
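For anyone debugging similar ZED mail issues: the relevant knobs live in /etc/zfs/zed.d/zed.rc. A minimal sketch; the values shown are illustrative:

```
# /etc/zfs/zed.d/zed.rc (excerpt, illustrative values)
ZED_EMAIL_ADDR="root"           # recipient of notification mails
ZED_NOTIFY_INTERVAL_SECS=3600   # minimum seconds between identical notifications
ZED_NOTIFY_VERBOSE=1            # also notify on healthy events such as scrub finish
```

ZED_NOTIFY_VERBOSE=1 is what enables mails for successful events like a finished scrub; without it ZED only mails on faults.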