Intel SSD low IOPS

PaxonSK

New Member
May 2, 2016
Hello,

I have a Proxmox 4 installation with ZFS and I would like to use a ZIL and L2ARC. I have these two types of SSDs:

INTEL SSDSC2BF120A5
INTEL SSDSC2BW240A4

With these SSDs I still get only ~100 fsync/second across different configurations (changing the scheduler, disabling the write cache, ...) on 3 different machines (SATA III 6 Gb/s, direct-attached or via a RAID controller, AHCI), with xfs and ext4 and the discard/iothread options.
According to the spec sheets, these SSDs should reach much higher IOPS (in the 1k to 10k range, depending on block size etc.).
I tried attaching an SSD to a Windows VM, but I still get the same low IOPS.

What is interesting to me: the same SSDs, tested directly under Windows (on the same hardware machines, not as a VM), do reach the nice IOPS numbers from the spec sheets.

I found some information about using JFS on SSDs, tested it with pveperf on these SSDs, and finally got a nice >15k IOPS (does someone have a good explanation why JFS behaves like this?).
But creating a JFS filesystem and using a file on it as a loop block device for the ZIL or L2ARC, or as a RAW disk attached to VMs, is not a good approach.

I have spent more than 15 hours on this, searching everywhere on the internet (together with my friend, Google), and I didn't find any information on where the problem is or what to set.

Does someone have an explanation or any info on why it works like this?
 
So simple - man, my light finally shines ;)

Take into consideration sync vs. async writes - how they work and why the difference is so important in this topic.
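For anyone who wants to see the sync/async gap on their own hardware, here is a minimal sketch (the file path is an arbitrary example, and oflag=dsync needs GNU dd on Linux). The buffered run only lands in the page cache, while the dsync run forces every 4 KiB block to stable storage before the next one starts - the same pattern as ZIL traffic, and roughly the workload pveperf's fsync test exercises:

```shell
# Arbitrary scratch file - any path on the filesystem under test works.
F=/tmp/syncbench.img

# Async: 1000 x 4 KiB buffered writes; these only land in the page cache,
# so dd reports a very high rate.
dd if=/dev/zero of="$F" bs=4k count=1000

# Sync: the same writes with oflag=dsync - each block must be on stable
# storage before the next one starts. On a consumer SSD the reported
# rate drops into the same low range as the ~100 fsync/s above.
dd if=/dev/zero of="$F" bs=4k count=1000 oflag=dsync

rm -f "$F"
```

When testing a raw device instead of a file, benchmarks like the blog linked below also add oflag=direct to bypass the page cache entirely (note that this destroys the device's contents).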

thank you
 
These SSDs are pretty slow for sync writes (as slow as an HDD).
The performance is almost the same for all consumer SSD drives.

The ZFS journal (and also Ceph's) needs sync writes.

Check this blog; it has a lot of benchmarks.
http://www.sebastien-han.fr/blog/20...-if-your-ssd-is-suitable-as-a-journal-device/

That is a very good page, thanks for that link. I've been looking for data like this for some time.

Back when I bought my SLOG/ZIL drives two years ago (dual 100GB Intel S3700 in a mirror) I couldn't find much information like this. I wound up going with the S3700s based on forum recommendations without much data to back them up, and also because the S3700s have capacitors, so in the event of an unclean shutdown they write the disk buffers to flash and no data is lost.

At 35MB/s they are nowhere near the top of the list, but at least they are a lot faster than most of even the high-end consumer drives on that list.

The question is at what speed the pool can do these writes on its own - in other words, are they helping or hurting? I'd test it myself, but I can't take my pool offline to do so, as my running services depend on it.

It is a pity that all the best drives for this purpose are so large, as you only need a tiny drive for a SLOG (and it's a bad idea to share the SLOG drive with other partitions), so much space would be wasted.
 
AFAIK these buffer capacitors are also the reason why those disks perform better for sync writes - they can buffer the writes and thus safely lie about their status (like a HW RAID with a BBU cache). Regular (consumer) SSDs perform very badly on sync writes because SSDs are not designed to write individual small blocks of data quickly; they need to spread lots of small writes over the whole array of flash chips to get good performance.
 
Interesting side note.

I have two pools on my Proxmox box:

rpool, the pool created by the proxmox installer, which contains the root file system and my container storage and VM drive images. This pool consists of two mirrored 512GB Samsung 850 EVO drives.

zfshome is my mass storage pool. It consists of 12x WD RED 4TB arranged as two six-drive RAIDz2 vdevs, with two mirrored 100GB Intel S3700 drives as SLOG/ZIL devices and two 512GB Samsung 850 Pro drives as L2ARC.
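For reference, a pool with that layout could be created along these lines (the device names here are hypothetical placeholders, not the actual ones in that box):

```shell
# Two 6-disk RAIDz2 data vdevs, a mirrored SLOG, and two L2ARC devices.
zpool create zfshome \
    raidz2 sdb sdc sdd sde sdf sdg \
    raidz2 sdh sdi sdj sdk sdl sdm \
    log mirror sdn sdo \
    cache sdp sdq
```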

After reading this thread and thinking about sync writes, last night I tested changing the setting on rpool from its default sync=standard to sync=always.

The theory was that since they are SSDs, they should suffer less from the sync=always setting than hard drives would; but since we know from the link posted above that consumer drives tend to have poor sync write performance, the results were actually pretty poor. My overall server IO delay went from an average of about 0.25% to 4+%, sometimes climbing to 8%.

I expected based on this conversation that the results would be worse than with sync=standard, but I did not expect them to be that much worse.

I switched rpool back to sync=standard after testing.
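For anyone wanting to repeat the experiment, the switch is a single property change (a sketch, assuming the pool is named rpool as in a default Proxmox install):

```shell
zfs get sync rpool           # show the current setting (default: standard)
zfs set sync=always rpool    # treat every write as synchronous (goes via the ZIL)
# ...run your workload and watch the IO delay graph...
zfs set sync=standard rpool  # revert: only honor the application's own fsyncs
```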

Just figured I'd share this observation.
 
