Yes, it is possible, but it is not recommended. I have never tested it with NVMe drives, and those normally handle parallel disk access well. With normal disks (SATA/SAS), the problem is the performance impact.
This setup is also more complex. What is the goal?
I would make two different ZFS mirrors...
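Two separate mirror pools could look roughly like this (a sketch only; the pool names and the `/dev/disk/by-id/` paths are placeholders for your actual disks):

```shell
# Placeholder device paths -- substitute your own stable by-id names.
zpool create nvmepool mirror /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2
zpool create satapool mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
```

That keeps the two workloads on independent pools instead of mixing them on shared disks.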
This is not the source of the problem, only the result.
The problem is that you are losing quorum in the cluster.
And yes, with HA enabled this ends with the node being fenced.
This test should run for 24 hours, and you should also watch for latency spikes.
The cluster can stop working if the latency in...
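To watch for spikes over a long run, a small filter over `ping` output can help. This is a sketch; the target address and the 10 ms alert threshold are assumptions, chosen well below corosync's token timeout:

```shell
# Flag ping round trips above a threshold (in ms), reading ping output
# from stdin. Threshold and target are examples, not fixed recommendations.
latency_spikes() {
  awk -v t="$1" '/time=/ {
    split($0, a, "time=")
    ms = a[2] + 0            # "25.3 ms" -> 25.3
    if (ms > t) print "SPIKE " ms " ms"
  }'
}

# Example 24 h run against a cluster peer (address is a placeholder):
# ping -i 1 -c 86400 10.0.0.2 | latency_spikes 10 > spikes.log
```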
As the log shows, there are no members left in this cluster.
Why network connectivity is lost can have multiple causes:
- NIC driver bug
- Network overload
- Switch problems
Can you provide a bit more information about your network config?
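As a starting point, you can also pull the relevant events out of a saved journal excerpt (e.g. `journalctl -u corosync > corosync.log`). The grep patterns below are assumptions; the exact corosync/knet wording varies between versions, so adjust them to what your own log shows:

```shell
# Scan a saved log file for link and quorum events around the outage.
# Patterns are best-effort guesses; check them against your real log.
cluster_events() {
  grep -E 'link:.*is (down|up)|[Qq]uorum (lost|regained)|[Mm]embers left' "$1"
}
```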
I guess the Thunderbolt controller is not in an isolated IOMMU group.
For passthrough you must pass the whole IOMMU group to the VM.
Also, ensure that all preconditions for PCIe passthrough are met.
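To check the isolation, you can list the groups yourself. A small sketch (the optional root argument only exists so the function can be tried against a fake tree; on the host, call it without arguments):

```shell
# Print every IOMMU group with the PCI addresses of its devices.
iommu_groups() {
  root="${1:-/sys/kernel/iommu_groups}"
  for g in "$root"/*/devices/*; do
    [ -e "$g" ] || continue
    grp=${g%/devices/*}
    printf 'group %s: %s\n' "${grp##*/}" "${g##*/}"
  done
}

# iommu_groups            # on the host
# lspci -nns 0000:03:00.0 # then inspect one of the listed devices
```

If the Thunderbolt controller shares its group with other devices, you cannot pass it through alone.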
If the controller and the LUN are detected correctly, you should see the LUN at the node level under "Disks".
If it is there and not in use, you can go to the "Directory" submenu and create an ext4-formatted storage.
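From the shell, roughly the same check and setup look like this (the storage name and mount point are just example values):

```shell
# Show all block devices the node sees, with filesystem and mount info:
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

# CLI alternative to the GUI "Directory" submenu, once the filesystem is
# created and mounted ("backup1" and /mnt/backup1 are example names):
pvesm add dir backup1 --path /mnt/backup1 --content backup,images
```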
Why do you use a 3-way mirror when you have a warm spare?
I would recommend using 2-way mirrors with two spare disks.
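Such a layout could be created roughly like this (a sketch; the pool name and device paths are placeholders):

```shell
# One pool, two 2-way mirror vdevs, two hot spares. Placeholder names.
zpool create tank \
  mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  spare  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
```

This gives you the same usable capacity with striping across two mirrors, and a spare can jump in for a failure in either mirror.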
Also, it is not recommended to run a CoW filesystem on top of another CoW filesystem.
Why do you use qcow2 on top of ZFS? Performance-wise it is very bad: one write in the VM ends in 4...
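Instead of qcow2 files on a dataset, you can register the pool as ZFS storage in Proxmox so new VM disks are created as raw zvols, which removes one of the two CoW layers. A sketch; "tank" and "vmstore" are example names:

```shell
# Register the ZFS pool as VM storage; new disks become raw zvols.
pvesm add zfspool vmstore --pool tank --content images,rootdir
```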