Hi, I'm really worn out... I've been trying to get my server running properly again for two weeks. Two weeks ago I thought it might make sense to change the existing (mdadm) RAID0 to a ZFS RAID10, and at the same time I wanted to rebuild the host to start fresh... and that's where the dilemma started:
I knew that ZFS and RAID10 don't have the same (write) performance as a pure RAID0, but I didn't think the difference would be this extreme.
Until now it was always like this:
1x NVMe SSD as host disk
4x WD Red CMR 5400 rpm HDDs as VM disk storage (mdadm RAID0 with lvm-thin)
The new setup should be:
1x NVMe SSD as host disk
4x the same HDDs as ZFS RAID10 (see the sketch below)
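For reference, this is roughly how the pool was created; the device names are placeholders and I may be misremembering details, so treat it as a sketch rather than the exact history:

# two mirrored pairs, striped = "RAID10"; sda-sdd stand in for the four WD Reds
zpool create -o ashift=12 tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd

# otherwise defaults, only compression enabled; no tuning of
# volblocksize/recordsize for the VM disks
zfs set compression=lz4 tank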
I have had 6 VMs running on the server so far, all relatively quiet, and really only read access matters. After the change I suddenly had to fight with extreme IO delays (20% and higher) as soon as any real load hit the HDDs.
Example:
VM1 = file server with SMB
VM2 = ioBroker
If I transfer a bigger file to the file server, the ioBroker web server stops responding: constant timeouts etc. The transfer performance also drops off sharply at times. Before, the Gbit line was saturated without problems; now it sometimes drops to only 50 Mb/s.
So I went back to the mdadm RAID0, but even there I'm now fighting high IO delays. I first suspected defective HDDs, but SMART says everything is OK (I have only run the short self-test so far). The thing is... I really don't know how I configured the mdadm RAID0 with lvm-thin when I set it up 6-7 years ago, so this time I just went with default values... maybe that's where the problem lies? What I ran this time is roughly sketched below.
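This is more or less what I ran this time, everything with defaults (device and VG names are placeholders, and the details may not be exact):

# RAID0 across the four WD Reds, default chunk size
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# LVM on top of the array, thin pool with default settings
pvcreate /dev/md0
vgcreate vmdata /dev/md0
lvcreate --type thin-pool -l 100%FREE -n thinpool vmdata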
I know that HDDs are not ideal as VM disk storage, but the current behaviour seems a bit extreme to me. Can anyone point me in a direction of what I could try to track down the problem? I can post output from the commands listed below if that helps.
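In case it helps, these are the commands I could run while reproducing the problem (big file copy to the SMB share) and post the output of:

# per-disk utilisation and latency
iostat -x 5

# overall IO wait / memory pressure
vmstat 5

# pool-level stats (when the disks are under ZFS)
zpool iostat -v 5

# long SMART self-test, since only the short one has run so far
smartctl -t long /dev/sda    # repeat for sdb-sdd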