For me, replication times out constantly whenever a scrub is running on the target. Scrubs are supposed to be low-priority I/O by design, so they shouldn't starve replication like this. The replication timeout should be longer than a few seconds, or at least tunable - I get hundreds of "replication failed" emails every time I scrub a pool.
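In the meantime, a possible workaround I've been considering (untested, and the value is a guess) is lowering the per-vdev scrub queue depth via the OpenZFS scheduler tunables, so the scrub competes less with the replication stream:

    # Cap concurrent scrub I/Os per vdev; 1 is a guess, not a tested value
    echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
    # Verify the setting took effect
    cat /sys/module/zfs/parameters/zfs_vdev_scrub_max_active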
Questions regarding ZFS and grub:
1. Can you provide a roadmap for grub-related upgrades? As far as I know there are two issues tying people to grub: First, old Proxmox installs do not have a separate 512M EFI partition, so it seems a full reinstall may be required for those machines (a quick check is shown below). Secondly...
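For anyone checking which camp their machines fall into, this shows whether the boot disk already has a separate EFI partition (disk name is just an example):

    # List partitions with sizes and types; look for a ~512M "EFI System" entry
    lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME /dev/sda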
ZFS 2.0.4 seems to cause kernel panics; I see one almost daily when using "replication" in Proxmox, on the receiver side.
The fix discussed in this issue seems to address the bug I am seeing:
https://github.com/openzfs/zfs/issues/11223
I have a similar stack to this...
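For comparison, this is how I pull the stack trace after one of these panics, from the previous boot's kernel log:

    # ZFS panics usually show up as "PANIC at ..." or failed VERIFY assertions
    journalctl -k -b -1 | grep -i -B2 -A25 'PANIC\|VERIFY'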
Thanks for this!
Yes, I had found proxmox-boot-tool, but only one of my machines (installed recently) had a 512M partition - the ones installed with older Proxmox installers had smaller ones. So I was not ready to resize partitions from a boot recovery shell. But perhaps in the future...
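For reference, my understanding of the happy path once a large-enough ESP exists (the partition name is an example - check yours with lsblk first):

    # Show which ESPs proxmox-boot-tool currently knows about
    proxmox-boot-tool status
    # Format the EFI partition, then register it for kernel sync
    proxmox-boot-tool format /dev/sda2
    proxmox-boot-tool init /dev/sda2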
Had a bad morning because I misread the ZFS 2.0 release notes: the rule is that ANY use of zstd on your rpool will make grub fail to boot. I thought it would apply only to the root dataset or something like that, but it really applies to the whole pool - if you use zstd anywhere in the rpool, grub breaks, so don't do...
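If you're unsure whether you're affected, this is how I now audit the boot pool before rebooting (pool name assumed to be rpool):

    # Any line matching here means grub may fail to read the pool
    zfs get -r -o name,value compression rpool | grep -i zstd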
This replication lockup issue persists in the latest kernels and ZFS versions.
On the destination side, "zfs receive -F -- [poolname]" hangs for days, and in some cases the process cannot be killed - the machine won't reboot cleanly and has to be hardware-reset.
This happens on a variety of...
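When it happens, this is how I spot the stuck receive - a process in uninterruptible D state that ignores kill -9:

    # List processes stuck in uninterruptible sleep, with their kernel wait channel
    ps -eo pid,stat,wchan:32,args | awk '$2 ~ /D/'
    # Dump the kernel stack of the stuck task (substitute the PID from above)
    cat /proc/<pid>/stack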
I have replication running for most of my LXC hosts, but recently I found one that was "stuck" on the receiver side, in a zfs rollback. (The only way to see that this is happening is to click "Replication" on every single host in the cluster.)
Consequences were:
1. All jobs on the sender...
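Tip for anyone else hunting for this: the CLI is much faster than clicking through the GUI. On each node:

    # List configured replication jobs and show their last run / current state
    pvesr list
    pvesr status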
Thanks! The checksum algorithm doesn't seem to matter, but on my slower Xeon, if you run cat /proc/spl/kstat/zfs/vdev_raidz_bench and look at the "scalar" (non-SIMD) row: gen_p (RAID-Z) runs at 1.13GB/sec, gen_pq (RAID-Z2) at 290MB/sec, and gen_pqr (RAID-Z3) at only 132MB/sec. SIMD makes...
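If you want to reproduce the comparison, something like this pulls out just the column header and the non-SIMD baseline row (kstat layout assumed from my box):

    grep -E 'implementation|scalar' /proc/spl/kstat/zfs/vdev_raidz_bench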
Would you please include RAID-Z2 in this test? My configuration is a 6-disk RAID-Z2 with a Xeon 4110. Also, I see no difference on plain striped volumes, but RAID-Z2 fio seqwrite performance is less than half.
A 50% performance regression is a big deal. The patch to re-enable SIMD on...
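For what it's worth, this is roughly the fio run behind my numbers, on a dataset with compression disabled so throughput isn't inflated (dataset and mountpoint names are examples):

    zfs create -o compression=off rpool/fiotest
    fio --name=seqwrite --rw=write --bs=1M --size=8G --directory=/rpool/fiotest
    zfs destroy -r rpool/fiotest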
I'm pretty sure this build suffers from much lower ZFS write performance on kernel 5.0/ZFS 0.8 - about half as fast for me.
On a fresh 6-disk RAID-Z2, I get 200MB/sec writes (uncompressed), whereas Proxmox 5.4 gives me 450MB/sec.
(The ZFSOnLinux GitHub has issue 8836, which...
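The quick dd sanity check I ran on both versions, again with compression off so the numbers are comparable (dataset name is an example):

    zfs create -o compression=off rpool/ddtest
    dd if=/dev/zero of=/rpool/ddtest/bigfile bs=1M count=8192 conv=fdatasync
    zfs destroy -r rpool/ddtest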
I use nginx in a container to log to syslog via "localhost". It recently stopped working entirely until I changed the config to use "127.0.0.1" instead of "localhost".
When I looked in my /etc/hosts I saw a "PVE" section with IPv6 config. This makes me suspect the problem is a change PVE made...
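A quick way to confirm the name-resolution angle from inside the container, and to test the syslog path directly:

    # Does "localhost" resolve to ::1 (IPv6) or 127.0.0.1 here?
    getent hosts localhost
    # Send a test message straight to the syslog UDP port
    logger --udp --server 127.0.0.1 --port 514 "test from container"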