The IP shown in .members is the one the node name resolves to; if you don't have an internal DNS infrastructure, that resolution most likely comes from /etc/hosts on the affected node itself.
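To confirm where that address comes from, you can compare what the cluster currently has recorded with what the name resolves to locally. A quick check, where pve2 is just a placeholder for your actual node name:

```
# What the cluster currently has recorded for each node
cat /etc/pve/.members

# What the node name resolves to locally (usually from /etc/hosts
# when no internal DNS is in place) - replace pve2 with your node name
getent hosts pve2
```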
So, if you edit that entry (directly or via the API) to point at the node's IP from the 192.168.x.y/z network and reboot the node, you should be good again.
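As a sketch of what the corrected entry could look like (the node name pve2, the domain, and the 192.168.1.12 address are made-up placeholders; use your real node name and the IP from the intended network):

```
# /etc/hosts on the affected node
127.0.0.1       localhost
192.168.1.12    pve2.example.local pve2
```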
For the migration-network-specific issue, you can also override the migration network in the Web UI under Datacenter -> Options -> Migration Settings. If you set the 192.168.x.y/z network explicitly there, the nodes skip the autodetection and it should work again.
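That Web UI setting is stored in the cluster-wide /etc/pve/datacenter.cfg; if you prefer editing the file directly, the resulting line looks roughly like this (192.168.1.0/24 is a placeholder for your actual 192.168.x.y/z network):

```
# /etc/pve/datacenter.cfg
migration: secure,network=192.168.1.0/24
```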
Correcting the IP resolution of the node name still makes sense, though: if a node isn't reachable by the other nodes via the address its name resolves to (or traffic takes an inefficient route), you'll always run into trouble with some cluster action or other.
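A quick way to verify from one of the other nodes that the name now resolves into the right network and is reachable over it (again, pve2 is a placeholder for the affected node's name):

```
# Check the resolved address and that the node answers on it
getent hosts pve2
ping -c 3 pve2
```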