Hello, I would like to share my recent experience with a ZFS cluster made up of 3 servers.
After creating the ZFS mirror and migrating the VMs' disks to the new mirror directory, we left all the VMs running normally, but the next day we had a problem.
One of the cluster nodes had to be restarted, and afterwards one of the VMs would not start. We found that its disk was corrupted, which prevented the VM from booting.
The same thing later happened on another server, again corrupting a VM disk.
Has anyone experienced this, and what could cause these problems?