>AFAIK, I/O errors typically happen on read, not on write.
An additional note on this:
https://www.enterprisestorageforum.com/hardware/drive-reliability-studies/
"The authors found that final read errors (read errors after multiple retries) are about...
Yes, you are right, the chances are low, but I'm not sure we should really call it "over-engineered" to check the boot environment for disk issues and to have a "zfs|btrfs scrub" equivalent for the boot environment.
For all those who worry, here are some ideas on how to check...
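One such check could be a poor man's "scrub" for the boot partitions: force a full read of every sector (so latent read errors surface) and record a checksum to spot silent content changes between runs. This is only a sketch; the device names are assumptions and must match your actual layout.

```shell
#!/bin/sh
# Hypothetical read-scrub for the unchecksummed boot/ESP partitions.
# /dev/sda1 and /dev/sda2 are assumed device names -- adjust to your system.
for part in /dev/sda1 /dev/sda2; do
    # Reading every sector makes the disk report pending I/O errors now,
    # instead of at the next (unexpected) boot.
    dd if="$part" of=/dev/null bs=1M status=none || echo "read error on $part"
    # Store this checksum somewhere and compare on the next run to detect
    # silent bitrot (the content only changes legitimately on updates).
    sha256sum "$part"
done
```

Running this from cron, say monthly, would give the rarely-touched partitions roughly the same regular exercise that ZFS gives its pools via scrub.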
Yes, but while ZFS gets a regular scrub, silent bitrot could indeed happen on partitions 1+2 on a system that is rarely touched/updated. Such an issue will hit you when you don't expect it. The chances are low indeed, but the sectors of...
We explicitly mirror the UEFI partitions outside of ZFS, though. And not noticing for many months of uptime means one did not apply any updates for the same period of time, as otherwise I/O errors when setting up a new kernel there would get noticed, and...
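Since the UEFI partitions are mirrored outside of ZFS, one could also cross-check the copies against each other from time to time. Note this is only a sketch: the device names are assumptions, and a raw `cmp` of the partitions only works if the copies are written byte-identically; with filesystem-level syncing, comparing checksums of the mounted files is safer.

```shell
#!/bin/sh
# Hypothetical consistency check between two mirrored ESP copies.
# /dev/sda2 and /dev/sdb2 are assumed device names for the UEFI partitions.
if cmp -s /dev/sda2 /dev/sdb2; then
    echo "ESP copies match"
else
    # A mismatch means either legitimate drift (e.g. metadata/timestamps)
    # or bitrot on one copy -- investigate and resync from the good one.
    echo "ESP copies differ"
fi
```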
It seems a newly created qcow2 on LVM does not initialize the storage with zeros but assumes the storage is all zero. So on re-adding, you provide the qcow2 with the same data that was there before.
Try deleting the Windows VM, and before recreating...
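...one way to avoid the stale data resurfacing would be to explicitly clear the logical volume before putting a new qcow2 on it. A minimal sketch, assuming a VG/LV naming that must be adapted to your setup:

```shell
#!/bin/sh
# Hypothetical cleanup of an LV before recreating a qcow2 on it, so the new
# image cannot "see" the previous VM's leftover data.
# The VG/LV name is an assumption -- substitute your own.
lv=/dev/vg0/vm-100-disk-0
# blkdiscard is fast on thin or SSD-backed volumes; if it is not supported,
# fall back to overwriting the whole LV with zeros.
blkdiscard "$lv" 2>/dev/null || dd if=/dev/zero of="$lv" bs=1M status=none
```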