> I think there is nothing preventing you from doing that.

Since replication in our case takes just 1.6 to ~6 seconds per VM, I am thinking about shortening the interval to */1. Is this OK, or would you not recommend such short intervals?
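For reference, the replication interval can be changed per job with `pvesr`. A sketch, assuming an existing replication job `100-0` (the job ID and any node names are placeholders for your own setup):

```shell
# Hypothetical example: replication job "100-0" is assumed to exist.
# Set the schedule to every minute using the calendar-event syntax "*/1":
pvesr update 100-0 --schedule "*/1"

# List the jobs and check status (including the last sync) to confirm
# each run finishes well within the one-minute window:
pvesr list
pvesr status
```

If a run ever takes longer than the interval, the next run simply starts after the current one finishes, but monitoring the sync duration is still worthwhile at such short intervals.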
...What a time to be operating a zfs-qemu cluster!

> Migration:
> - Enable support for Live-Migration with replicated disks
> No, this is not possible, as replication is not real-time (asynchronous replication).

True. That's why I am wondering whether HA and automatic failover to the second node are also possible without shared storage.
> If you can do it, sure. Be sure to unplug the network used by Corosync, as this service is used to determine whether a node is still alive and part of the cluster.

Yeah, or maybe even do a live test with a shutdown of a node, or just unplug it from the storage network once our employees are not working. Thank you.
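A failover test along those lines can also be driven from the CLI. A sketch, assuming a node named `pve2` is the one being "failed" (the node name is a placeholder):

```shell
# Run on the surviving node to watch the cluster react:
pvecm status          # quorum state (the QDevice should keep 2 of 3 votes)
ha-manager status     # HA resource states during the failover

# On pve2 itself, simulate a communication failure without pulling cables
# (note: a node running active HA resources that loses quorum is expected
# to self-fence, i.e. the watchdog will reboot it):
systemctl stop corosync

# After the test, restore cluster communication:
systemctl start corosync
```

Doing this outside working hours, as you plan, is sensible, since the fenced node will reboot and the failed-over VMs will restart from the last replicated state.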
> AFAIK this is normal. If you want certain guests to prefer one node over the other, you can define HA groups and set priorities for the nodes. So in a 2-node scenario, you could create 2 groups which favor one or the other node and place the VMs in those groups. Should the node come back, they should migrate back after a bit of time.

One question I still have, though: when the failed node comes back, is it normal / by design that I have to migrate the VMs back manually, or should they get migrated back automatically in this scenario of a 2-node cluster without shared storage, with replicated datastores, and with a QDevice for the third vote?
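The HA groups described above can be created with `ha-manager`. A sketch, assuming nodes named `pve1`/`pve2` and a VM with ID 100 (all names and IDs are placeholders):

```shell
# Hypothetical setup: nodes "pve1"/"pve2", VM 100.
# Two groups, each preferring one node (higher number = higher priority):
ha-manager groupadd prefer-pve1 --nodes "pve1:2,pve2:1"
ha-manager groupadd prefer-pve2 --nodes "pve2:2,pve1:1"

# Put a VM under HA management in the group that favors pve1:
ha-manager add vm:100 --group prefer-pve1

# Verify resource placement:
ha-manager status
```

With `--nofailback` left at its default of 0, the HA manager moves the VM back to the higher-priority node automatically once that node rejoins the cluster, which is the automatic fail-back behavior asked about above.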