Hello everyone,
In the past we've always used Dell PERC hardware RAID controllers with Microsoft Hyper-V and VMware, but we recently switched to Proxmox and started using ZFS in a lot of cases (with the PERC controller set to non-RAID mode), where the performance is simply better. For a new Dell PowerEdge R660xs server, I'm wondering what would be better in terms of performance when using ZFS in RAID-10 (ashift=12, compression on, default ARC settings)?
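For context, this is roughly how we'd create the pool. A minimal sketch; the pool name "tank" and the /dev/nvme* device names are placeholders, not our actual setup:

```bash
# Two mirror vdevs striped together = RAID-10.
# In production, prefer /dev/disk/by-id/ paths so device names survive reboots.
zpool create -o ashift=12 -O compression=on tank \
    mirror /dev/nvme0n1 /dev/nvme1n1 \
    mirror /dev/nvme2n1 /dev/nvme3n1

zpool status tank   # verify the mirror layout
```

The two options we're weighing: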
- Two physical CPUs with the NVMe lanes connected directly to the CPUs (ZFS handles the RAID)
  - Should theoretically be faster.
  - The drives will likely report an x4 link width in this case (see the verification sketch after this list).
  - Perhaps NUMA could be a performance bottleneck?
  - System RAM (used by the ARC) should be faster than the RAID controller's onboard cache memory?
- A Dell PERC H965i RAID controller with write cache
  - I noticed that the PERC H965i controller puts the NVMe SSDs in x2 link mode.
  - In theory, NUMA / the inter-socket interconnect could be a bottleneck here too, since processor 2 would have to go through processor 1 to reach the RAID controller, or vice versa, depending on which CPU's PCIe lanes the controller is connected to (the benchmark sketch below is one way to measure this).
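To check the negotiated link width and NUMA locality, something like this should work; the PCI address 0000:01:00.0 is a placeholder for whatever lspci reports on the actual box:

```bash
# Find the PCI addresses of the NVMe drives (and the PERC, if present)
lspci | grep -i -e nvme -e raid

# Negotiated link width: LnkCap = what the device supports, LnkSta = what it got
sudo lspci -vv -s 0000:01:00.0 | grep -E 'LnkCap:|LnkSta:'

# NUMA node each device is attached to (-1 means no NUMA affinity reported)
cat /sys/class/nvme/nvme0/device/numa_node
cat /sys/bus/pci/devices/0000:01:00.0/numa_node
```

This would confirm whether the drives really negotiate x4 when direct-attached versus x2 behind the H965i, and which socket owns each device.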
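And to actually measure the NUMA effect, one approach is to run the same fio job pinned to each socket and compare. A rough sketch; the target path /tank/fio.test and the job parameters are made up, and I'm leaving out --direct=1 since ZFS O_DIRECT support depends on the OpenZFS version:

```bash
# Same random-write workload, pinned first to socket 0, then to socket 1;
# a large gap between the two runs points at the interconnect.
numactl --cpunodebind=0 --membind=0 \
    fio --name=node0 --filename=/tank/fio.test --size=10G \
        --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 \
        --numjobs=4 --runtime=60 --time_based --group_reporting

numactl --cpunodebind=1 --membind=1 \
    fio --name=node1 --filename=/tank/fio.test --size=10G \
        --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 \
        --numjobs=4 --runtime=60 --time_based --group_reporting
```

Has anyone here benchmarked the two configurations side by side on similar hardware?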