Hi everyone, hi Darren. You're right, ZFS is not the issue; the hardware is.
Maintaining an HCL is going to be tough given all the setup options PVE offers. Maybe a tips-and-tricks sticky post? I don't know.
I have seen a lot of issues on the forum lately regarding ZFS and RAID adapters: performance problems and even host crashes.
We ran into the exact same issues on several servers and would like to share some insights. Here is what we have experienced:
Using an LSI/Avago RAID card with JBOD mode on, combined with ZFS RAID-Z, gives us very poor performance: lots of IO delay, and hosts can even freeze under moderately high load. This happens even with all-SSD pools.
This is independent of caching, BBU settings, etc.
For people using servers from major vendors: many of them ship LSI-based RAID adapters, or should we say Avago now, oh no, it's Broadcom, sorry.
For the same vendor, there is often an HBA adapter available as well, also with an Avago chip. Example: Lenovo 930-(x)i = RAID and 430-(x)i = HBA.
It seems that only Avago/LSI-based HBA adapters are supported by VMware vSAN, not RAID cards in JBOD mode. As far as we know, this goes for Hitachi, Dell, Lenovo, and Cisco UCS boxes, and maybe others too.
Looking at the Broadcom website, the two card/chip architectures are indeed quite different, and the Linux drivers are not the same either.
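For anyone who wants to check which family they actually have, here is a minimal sketch on a Linux host (assuming `lspci` is installed; `megaraid_sas` and `mpt3sas` are the usual kernel drivers for the RAID and HBA families respectively, but your exact model may differ):

```shell
# Identify Broadcom/LSI/Avago storage controllers on the PCI bus
command -v lspci >/dev/null && lspci -nn | grep -iE 'lsi|avago|broadcom' || true

# Check which kernel driver is loaded:
#   megaraid_sas -> MegaRAID card (RAID firmware, even with JBOD mode on)
#   mpt3sas      -> IT-mode HBA (what ZFS generally wants)
lsmod | grep -E 'megaraid_sas|mpt3sas' || echo "neither driver loaded"
```

If `megaraid_sas` shows up, the disks are still sitting behind RAID firmware even in JBOD mode, which matches the behavior we saw.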
Maybe this is the root cause of much pain for some of you, because most vendors ship LSI RAID adapters with a JBOD option by default.
Regards,