ZFS has encountered an uncorrectable I/O failure and has been suspended

jorhan

New Member
Mar 22, 2024
This is my environment:
  • PVE Version: 8.1.10
  • CPU: 6 x Intel(R) Core(TM) i5-9500 CPU @ 3.00GHz (1 Socket)
  • Kernel Version: Linux 6.5.13-5-pve (2024-04-05T11:03Z)
  • Boot Mode: EFI
  • Manager Version: pve-manager/8.1.10/4b06efb5db453f29
  • Storage: Predator GM7 M.2 2TB
Recently I've been hitting ZFS uncorrectable I/O failures. The M.2 drive is new, its SMART status is good, and the zpool reports healthy, yet I couldn't retrieve any log after each failure. I suspected ACPI, so I disabled ACPI in the BIOS, but unfortunately the issue persists.
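Roughly how I've been trying to pull logs after each crash, in case I'm looking in the wrong places (the device name below is just my drive, and reading the previous boot's journal needs persistent journald storage):

# pool health and the recent ZFS event history
zpool status -v
zpool events -v | tail -n 60

# kernel messages from the previous boot (requires Storage=persistent in /etc/systemd/journald.conf)
journalctl -k -b -1 | grep -iE 'zfs|nvme|i/o error'

# drive health report
smartctl -a /dev/nvme0n1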

Attachments: 螢幕擷取畫面 2024-04-15 002622.png, 螢幕擷取畫面 2024-04-15 002614.png, 1713111913526.jpg
 
I remember researching this issue once; IIRC it came down to power problems. Possibly your PSU isn't up to spec or isn't stable? Or maybe it's a power-saving function in the BIOS, USB port sleep/hibernation, etc.
 
ZFS on a single disk is not well placed to handle problems like this, especially for the OS/system partition. If you can, put in two disks as a ZFS mirror so these types of errors can be corrected.
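If you do add a second disk, attaching it to the existing single-disk pool looks roughly like this (pool and device names are assumptions; on a Proxmox boot pool you would also have to replicate the partition layout and install the bootloader on the new disk first, e.g. with proxmox-boot-tool):

# turn the single-disk vdev into a mirror: existing device first, new device second
zpool attach rpool /dev/nvme0n1p3 /dev/sdb3
# the pool should now show a mirror-0 vdev resilvering
zpool status rpool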
 
Check for newer firmware for your drive.
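To see which firmware revision the drive currently reports (device name is an assumption):

smartctl -i /dev/nvme0n1 | grep -i firmware
# or, with nvme-cli installed
nvme id-ctrl /dev/nvme0n1 | grep -i '^fr '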

Also, if you don't absolutely need ZFS for root, reinstall with an ext4 root + LVM and see if it helps.

You can limit the available drive size during the install and reserve the remaining space for a separate ZFS pool on a partition beyond root+LVM.
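Roughly what that can look like afterwards, assuming the installer left unallocated space and you created a partition on it (the partition number and pool name here are hypothetical):

# create a pool on the spare partition
zpool create -o ashift=12 tank /dev/nvme0n1p4
# register it with Proxmox as a storage called "tank"
pvesm add zfspool tank -pool tank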
 
Just noticed in your S.M.A.R.T. values that you have only 87 Power On Hours but already 11 unsafe shutdowns. That's more than 3 unsafe shutdowns a day. I don't know what caused them, but your system needs checking. That said, I know from experience that Power On Hours is almost never accurate; you know how long you've actually been using the drive, and assuming you purchased it new, do the maths on how long it has really been in use. From the Data Units Read/Written it looks like it has hardly been used (especially for a ZFS drive). If you messed around with OS installations a number of times, that could also explain the unsafe shutdowns, in which case you can ignore this whole comment.
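For comparison, those counters can be read straight from the drive (device name is an assumption):

smartctl -a /dev/nvme0n1 | grep -Ei 'power on hours|unsafe shutdowns|data units'
# or the raw NVMe log
nvme smart-log /dev/nvme0n1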

Another point, I would never run any ZFS system without a drive mirror in place.
 
I encountered the same condition yesterday. I had to punch the power button because the console was unresponsive. When it came back up it booted normally, with a couple of missing inodes on the root (ext4) file system, but everything else was fine.
 