Receiving ZFS I/O Error on External Drives

Ethan97

New Member
May 10, 2025
Hello everyone!


I could use some help. I'm still new to ZFS and trying to troubleshoot an issue I'm experiencing.
tl;dr: I'm getting I/O errors on my external backup devices when I try to use zfs send | zfs receive. It's possible that 80% of the HDDs are bad, but I'd like to rule out user error on my part first.

Last year, I assembled a Proxmox server and installed 5 HDDs in a raidz2 configuration to use as my "NAS"/fileserver. I created this pool following a guide (that I can no longer find).
These are the commands I used to create the pool:
[screenshot: pool creation commands — 1746905775471.png]
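(For anyone reading later: since the screenshot of my creation commands may not load, a 5-disk raidz2 pool is typically created along these lines. This is only a sketch; the disk ids are placeholders and the pool name nas-pool is taken from my send commands below.)

```shell
# Sketch only: create a 5-disk raidz2 pool. Referencing disks by their
# stable /dev/disk/by-id/ names (rather than /dev/sdX) is the usual advice.
zpool create -o ashift=12 nas-pool raidz2 \
  /dev/disk/by-id/ata-DISK1 \
  /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 \
  /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5
```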

I've been wanting to do something similar for my backups for a while, and with the ongoing tariff situation in the US, I recently panic-bought more HDDs along with two Oyen Digital Fortis 5C 5-bay USB-C external drive enclosures (from Amazon). Based on the options on the store listing, the enclosures appear able to handle these HDDs, and nothing in the manual or on the store page states otherwise. The HDDs are installed and also configured as raidz2.
[screenshot: backup pool configuration — 1746907797290.png]

The issue: when I run zfs send | zfs receive, the data starts transferring and then hangs. When I open a new shell and run zpool status, the backup pool is suspended due to I/O errors. zpool clear does not work, and the only way to clear the error is to reboot the system.
The I/O errors only occur with the external enclosures, and they occur on multiple HDDs. The HDDs did come from the same batch, so it's possible they're bad, but I find it strange that 80% of them show these errors, which is why I'm wondering whether I've done something wrong in the configuration. I have tried reseating the drives and moving them to different bays. The SMART status on the Disks page of Proxmox lists all of the drives as "Passed".
[screenshots: Proxmox Disks page SMART status — 1746907700158.png, 1746906537680.png]

Following a suggestion I found, I tried increasing zfs_arc_min and zfs_arc_max to 50 GB and 60 GB respectively. That seemed to help, but ended in the same error: the send/receive stream reached 30 GB transferred before faulting out. The command I used to temporarily increase the ARC size was "echo <size_in_bytes> > /sys/module/zfs/parameters/zfs_arc_max" (and likewise for zfs_arc_min). This is the arc_summary report, if it helps:
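(For reference, the exact commands I mean are below. The ARC module parameters take a byte count; the 50 GiB / 60 GiB values are just the ones I used.)

```shell
# Values are in bytes; 50 GiB and 60 GiB here are examples.
echo $((50 * 1024**3)) > /sys/module/zfs/parameters/zfs_arc_min
echo $((60 * 1024**3)) > /sys/module/zfs/parameters/zfs_arc_max

# Verify the new limits took effect:
grep . /sys/module/zfs/parameters/zfs_arc_min /sys/module/zfs/parameters/zfs_arc_max
```

Note these settings do not survive a reboot; persistent values go in /etc/modprobe.d/ as zfs module options.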
[screenshots: arc_summary output — 1746907925020.png, 1746908034116.png, 1746908056980.png, 1746908074566.png, 1746908091756.png]
I will include the rest of the arc summary in the following post as I reached the attachment limit.

I've tried:
zfs create -o xattr=sa -o acltype=posixacl -o recordsize=1M -o compression=zstd -o encryption=on -o keyformat=passphrase -o keylocation=location backupPool_A/backup
zfs send -R -v nas-pool/subvol-100-disk-0@04May25 | zfs receive -o xattr=sa -o acltype=posixacl -o recordsize=1M -o compression=zstd -o encryption=on -o keyformat=passphrase -o keylocation=location backupPool_A/backup

(after removing the backup dataset)
zfs send -R -v nas-pool/subvol-100-disk-0@04May25 | zfs receive -F backupPool_A/backup
zfs send -v nas-pool/subvol-100-disk-0@04May25 | zfs receive -F backupPool_A/backup
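Since the stream keeps dying partway through, one more thing I may try is a resumable send/receive, so a faulted transfer can be continued instead of restarted from zero (a sketch; dataset names taken from the commands above):

```shell
# -s on the receiving side stores a resume token if the stream is interrupted
zfs send -v nas-pool/subvol-100-disk-0@04May25 | zfs receive -s -F backupPool_A/backup

# After clearing the pool (or rebooting), fetch the token and resume the stream:
token=$(zfs get -H -o value receive_resume_token backupPool_A/backup)
zfs send -t "$token" | zfs receive -s backupPool_A/backup
```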


Any help or suggestions that you can provide to troubleshoot the issue are highly appreciated!
 

ZFS write errors typically indicate timeouts or drive/cable/controller problems. If a long SMART test completes without errors (it takes many hours), then it's not the drive itself. Try different cables. Try connecting the drives a different way (e.g. internally) or try another enclosure. Make sure the power supply is sufficient for the HDDs. It may also be that the whole combination of HDD and external USB controller is simply too slow for ZFS, and timeouts cause the errors; in that case you could try ext4 or LVM instead of ZFS.
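Concretely, the long self-test and the report can be run like this (replace /dev/sdX with the actual device; drives behind a USB bridge sometimes need an explicit device-type option such as -d sat):

```shell
# Start a long (extended) self-test; it runs in the background on the drive
# and typically takes several hours on a large HDD.
smartctl -t long /dev/sdX

# Later, check the self-test log and overall health:
smartctl -a /dev/sdX

# If the enclosure's USB bridge hides SMART data, try the SAT pass-through:
smartctl -d sat -a /dev/sdX
```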