Most professional way would be using Ceph with some 10/40 Gbit NICs on a dedicated storage backend network, a lot of enterprise SSDs and so on.
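In case it helps, here is a rough sketch of what that could look like on Proxmox VE nodes (the subnet, device name and pool name are just placeholders I picked, adjust them to your hardware):

```
# On each node: install Ceph and bind it to the dedicated storage network
pveceph install
pveceph init --network 10.10.10.0/24   # assumed backend subnet

# On each node: create a monitor and turn the enterprise SSDs into OSDs
pveceph mon create
pveceph osd create /dev/nvme0n1        # repeat for every SSD

# Once, on any node: create a replicated pool and register it as VM storage
pveceph pool create vm-pool --add_storages
```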
Easiest would be to use NFS. One of your servers would act as a NAS, being the NFS server for the whole cluster and storing all VM virtual disks on that NFS share, while all the other servers access the VM disks remotely over NFS. Downside is that this NFS server is a single point of failure: if it stops working, all the other servers won't be able to operate until the NFS server is reachable again.
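A minimal sketch of that setup (export path, subnet, IP and storage ID are made-up examples):

```
# On the server acting as NAS: export the directory holding the VM disks
echo '/tank/vmstore 10.0.0.0/24(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra

# On any cluster node: add the share as shared storage for the whole cluster
pvesm add nfs vmstore --server 10.0.0.10 --export /tank/vmstore --content images,rootdir
```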
Another option would be to set up ZFS replication. In that case you want identically sized ZFS pools with identical names as local storage on each server, and replication will then be used to sync the data between the servers. But if you for example got 4 servers, each of them would need to store a local copy of everything, so you basically need 4 times the number of drives. Benefit is that it is fast, because it is still local storage and not real shared storage over the network with its higher latency. But the replication is asynchronous, so that also means you will always lose some minutes of data if a server fails, because the ZFS pools will never be in perfect sync.
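Sketched out, assuming two nodes "pve1"/"pve2", a pool named "tank" and a VM with ID 100 (all placeholders):

```
# On each node: create a pool with the same name (disks are placeholders)
zpool create tank mirror /dev/sda /dev/sdb

# Once: register the pool as VM storage for the cluster
pvesm add zfspool tank-vm --pool tank

# Replicate VM 100 from its current node to pve2 every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule '*/15'
```

The --schedule interval is exactly those "some minutes of data" you can lose in the worst case, so pick it as tight as your hardware can handle.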
So for just 2 servers ZFS replication might be fine, but if you plan to add more servers in the future, I really would recommend setting up Ceph instead.