Comparing file systems for VM storage

I know the article is two years old, but I think the relative performance differences are still the same in 2017 - which means that, except for btrfs, everything seems more or less equal, so it boils down to making your choice based on the features you think are important for your use case.

That, and these days we tend to include SSDs either as a cache or as the entire data store. I didn't read closely, but I'm curious if they enabled compression on ZFS.

Really though, while it's good to see benchmarks, storage is about finding the right balance between capacity, performance, and integrity for each use case.
According to the article, he used defaults for all file systems, which means compression was not enabled. Had compression been enabled, I bet the ZFS numbers would have improved by 20-30% (in my experience).
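For anyone who wants to rerun the benchmark with compression on, a minimal sketch follows; the pool and dataset names are hypothetical:

```shell
# Enable lz4 compression on a hypothetical dataset used as the VM store.
# Note: only data written after this point gets compressed; existing
# blocks stay uncompressed until rewritten.
zfs set compression=lz4 tank/vmstore

# After the benchmark has written data, check the achieved ratio:
zfs get compression,compressratio tank/vmstore
```

lz4 is the usual choice here because its decompression overhead is low enough that it rarely hurts even when data is incompressible.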
So, if you use SSDs on ZFS as the entire data store, would you still use an SSD for L2ARC and ZIL as well?