Sounds really sad, but I hadn't actually solved the libdevmapper: ioctl/libdm-iface.c(1927) error, because I had forgotten to update /etc/multipath/wwids. I was so happy that I posted prematurely.

I just stumbled over this ticket and I'm glad you solved your issue.
The "resource busy" error means the device is locked by another kernel subsystem; it can be debugged further with dmsetup. That is not easy, but it is possible.
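For reference, a rough way to chase a "device or resource busy" down to whatever is holding the device; the map name mpatha and the dm-0 node below are placeholders for your own:

# dmsetup info -c                        (list all device-mapper devices with their open counts)
# dmsetup info mpatha                    (open count, state and UUID for one specific map)
# ls -l /sys/block/dm-0/holders/         (which kernel devices are stacked on top of dm-0)
# fuser -vm /dev/mapper/mpatha           (which processes have the device open or mounted)

The open count from dmsetup is usually the quickest hint: if it is non-zero, something (LVM, a filesystem, ZFS, another map) is still using the device.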
Reinstalling - like in Windows - solves some issues, and I see why you did it.
For the naming issue: we name our devices by the SCSI bus ID, which (in our case) corresponds to the shelf ID, so we get names like S1R1C1 (for shelf, row and column). That makes it much easier to "see" which drive has failed.
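If it helps, those per-device names go into the multipaths section of /etc/multipath.conf. A minimal sketch - the WWID below is a made-up placeholder, so substitute the real one from multipath -ll:

multipaths {
        multipath {
                wwid   "36001405aaaabbbbccccdddd00000001"
                alias  S1R1C1
        }
}

After editing, reloading the maps (multipath -r) or restarting multipathd should make multipath -ll show the alias instead of the WWID or mpathN name.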

My epiphany came when I realized that my OS disks shouldn't contain anything except the OS. No storage, nada, niente, nothing, just like an NYC CVS store. My OS disks sda and sdb, in raidz1, would fail over to another node path anyway if they went down. The fresh OS contains no data storage and no storage pools other than the default rpool. The way to end this libdevmapper problem was to keep all storage pools off the OS disks. When I read that multipath is meant for storage devices, not for operating system disks, I realized I already had the solution: I just removed the WWIDs of the operating system disks from blacklist_exceptions. No errors, and clean output from # lsblk, # zpool status, # multipath -ll, and # multipath -v1 / -v3.
References: multipath.conf(5) - multipath daemon configuration file; "Ignoring Local Disks when Generating Multipath Devices".
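Roughly, the idea in /etc/multipath.conf looks like the sketch below; the WWID is a placeholder, and the catch-all blacklist regex is just one common pattern for keeping local disks out:

blacklist {
        # keep everything out of multipath by default, including the local OS disks
        wwid ".*"
}
blacklist_exceptions {
        # only the shared-storage LUNs are excepted; the OS disks' WWIDs are no longer listed here
        wwid "36001405eeeeffff000011112222333a"
}

After removing the OS disks from the exceptions, the stale entries also have to come out of /etc/multipath/wwids - either by editing the file, or (depending on the multipath-tools version) with multipath -w /dev/sdX for a single device or multipath -W to reset the file to the currently active maps, followed by multipath -r to reload.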
As for the naming issue, that is definitely something to consider when trying to find which disk to replace. I don't trust those LED lights; I was in the dark for hours trying to replace a few spinners. I had to set user_friendly_names to "no" so that the whole cluster uses the same names, but I'm willing to try this naming scheme anyway, since it would save some serious time.
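For finding the physical drive without trusting the LEDs, the by-path symlinks and the drive serial number are a reasonable fallback; the device names below are just examples:

# ls -l /dev/disk/by-path/               (maps each sdX to its SCSI host/bus/target/LUN path, i.e. the physical slot)
# lsscsi                                 (one line per device with its [host:channel:target:lun] address, if lsscsi is installed)
# smartctl -i /dev/sdc                   (prints the model and serial number, which is usually printed on the drive label)

Matching the serial number from smartctl against the label on the caddy is the low-tech check when the SCSI addressing or the LEDs let you down.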
In any event, everything is working perfectly.