Proxmox slow with SSD disks on LSI 9260-8i

Hi folks,

I'm looking for ideas to improve the performance of my SSD RAID on an LSI 9260-8i controller. Proxmox 5.1, the filesystem is ext4.

I have 2 x Samsung SSDs as RAID1 and 2 x WD SATA HDDs on the same controller.


Code:
/dev/sdc1              457G   38G  419G   9% /sata
/dev/sdb1             208G  3.1G  205G   2% /ssd

Code:
dd if=/dev/zero of=/sata/6GB-test bs=1G count=6 oflag=direct
170,2 MB/s

dd if=/dev/zero of=/ssd/6GB-test bs=1G count=6 oflag=direct
48,1 MB/s

For a test I removed the SSD drives from the RAID controller and attached them to an onboard SATA port. The write rate there is ~400 MB/s.

Any idea why it is that slow with the controller?

What I have tried so far:

I switched the ports on the RAID controller between the SATA and SSD drives. No difference, still the same speed.
The SSDs have no SMART errors; I ran short and long tests.

I tried all caching/read/write options on the RAID controller, ending with what LSI recommends for SSDs: direct IO and write through.
The controller has the latest firmware I could find, from 02/2016.

Any help is greatly appreciated.

Thank you!

Siegmar
 
I would take one of the SSDs out of the RAID, put it on a plain onboard SATA port and run the same test directly against the device itself, bypassing the hardware RAID and the filesystem. If the speeds are the same, then your SSDs are just slow.
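In case it helps, a quick way to benchmark the bare device while bypassing the RAID volume and the filesystem is a direct read test (/dev/sdX is just a placeholder for the SSD's device node; a raw write test would destroy data, so stick to reads):

Code:
# sequential read straight from the block device, bypassing the page cache
dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct
# or let hdparm do its cached/buffered read timing
hdparm -tT /dev/sdX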
 
I would take one of the SSDs out of the RAID, put it on a plain onboard SATA port and run the same test directly against the device itself, bypassing the hardware RAID and the filesystem. If the speeds are the same, then your SSDs are just slow.
It seems you missed the following text:
"For a test I removed the SSD drives from the RAID controller and attached them to an onboard SATA port. The write rate there is ~400 MB/s."
 
What Samsung SSDs are these? And why are you using such a big blocksize? Please test with fio (there are enough examples in the forums, so please search for fio).
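For instance, a small-blocksize random write test along these lines is much closer to real VM I/O than large sequential writes (the file name, size and runtime here are just example values):

Code:
fio --name=randwrite-test --filename=/ssd/fio-test --size=1G \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting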
 
What Samsung SSDs are these? And why are you using such a big blocksize? Please test with fio (there are enough examples in the forums, so please search for fio).
750 Evo Samsung SSD.

Performance is almost the same with fio:

Code:
root@proxmox:/ssd# fio --max-jobs=1 --numjobs=1 --readwrite=write --blocksize=4M --size=5G --direct=1 --name=fiojob
fiojob: (g=0): rw=write, bs=4M-4M/4M-4M/4M-4M, ioengine=psync, iodepth=1
fio-2.16
Starting 1 process
fiojob: Laying out IO file(s) (1 file(s) / 5120MB)
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/24576KB/0KB /s] [0/6/0 iops] [eta 00m:00s]
fiojob: (groupid=0, jobs=1): err= 0: pid=7048: Wed Jan 10 07:02:55 2018
  write: io=5120.0MB, bw=46334KB/s, iops=11, runt=113155msec
    clat (msec): min=8, max=685, avg=88.25, stdev=105.44
     lat (msec): min=8, max=685, avg=88.40, stdev=105.45
    clat percentiles (msec):
     |  1.00th=[    9],  5.00th=[    9], 10.00th=[    9], 20.00th=[    9],
     | 30.00th=[    9], 40.00th=[    9], 50.00th=[    9], 60.00th=[  120],
     | 70.00th=[  161], 80.00th=[  188], 90.00th=[  225], 95.00th=[  265],
     | 99.00th=[  383], 99.50th=[  523], 99.90th=[  635], 99.95th=[  685],
     | 99.99th=[  685]
    lat (msec) : 10=58.20%, 20=0.08%, 50=0.08%, 100=0.47%, 250=35.00%
    lat (msec) : 500=5.62%, 750=0.55%
  cpu          : usr=0.17%, sys=0.20%, ctx=1301, majf=0, minf=11
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=1280/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=5120.0MB, aggrb=46333KB/s, minb=46333KB/s, maxb=46333KB/s, mint=113155msec, maxt=113155msec

Disk stats (read/write):
  sdb: ios=132/20526, merge=0/128, ticks=13344/1002228, in_queue=1015552, util=99.70%
 
750 Evo Samsung SSD.

Don't use them - they're rubbish for a server. Never use entry-level or prosumer hardware in a server. A good list of suitable SSDs is here:

http://www.sebastien-han.fr/blog/20...-if-your-ssd-is-suitable-as-a-journal-device/

Check the health of the SSD via smartctl; it is most probably not at 99% anymore. I killed one of these devices within a few months of normal VM operation under PVE. The write amplification is huge and the performance is very, very bad for server hardware.
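On a Samsung consumer SSD the wear usually shows up in the SMART attributes rather than as an error; something like this makes it visible (the device name is just an example, and the attribute names vary per model - on Samsung drives look at Wear_Leveling_Count and Total_LBAs_Written):

Code:
# print all vendor SMART attributes of the SSD
smartctl -A /dev/sdb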

Again: don't use artificial blocksizes like 4 MB, real-life usage is much smaller. Please use e.g. the iometer fio benchmark to get comparable values:

https://github.com/axboe/fio/blob/master/examples/iometer-file-access-server.fio
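If it helps, running that example job from the mountpoint under test should boil down to something like this (assuming the job file writes its test files into the current directory, which the example appears to do by default):

Code:
cd /ssd
wget https://raw.githubusercontent.com/axboe/fio/master/examples/iometer-file-access-server.fio
fio iometer-file-access-server.fio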
 
Don't use them - they're rubbish for a server. Never use entry-level or prosumer hardware in a server. A good list of suitable SSDs is here:

http://www.sebastien-han.fr/blog/20...-if-your-ssd-is-suitable-as-a-journal-device/

Check the health of the SSD via smartctl; it is most probably not at 99% anymore. I killed one of these devices within a few months of normal VM operation under PVE. The write amplification is huge and the performance is very, very bad for server hardware.

Again: don't use artificial blocksizes like 4 MB, real-life usage is much smaller. Please use e.g. the iometer fio benchmark to get comparable values:

https://github.com/axboe/fio/blob/master/examples/iometer-file-access-server.fio
Maybe you missed this in my text: "The SSDs have no SMART errors; I ran short and long tests." And you may have missed this as well: "For a test I removed the SSD drives from the RAID controller and attached them to an onboard SATA port. The write rate there is ~400 MB/s."

So the drives are OK and can be fast.
 
Maybe you missed this in my text: "The SSDs have no SMART errors; I ran short and long tests."

Wearout and health values are not reported as errors, so please check them.

And you may have missed this as well: "For a test I removed the SSD drives from the RAID controller and attached them to an onboard SATA port. The write rate there is ~400 MB/s."

Again: you are using an enterprise-grade RAID controller with a rubbish SSD. Have you activated FastPath and SSD Guard? What about the performance under Windows?
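If MegaCli (or the Fujitsu-branded equivalent) is installed, the current logical drive policy can be checked roughly like this; as far as I know FastPath only kicks in when the SSD volume runs with Write Through, No Read Ahead and Direct IO (the binary path differs per distribution):

Code:
# show the cache/IO policy of all logical drives on all adapters
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL
# look for the "Current Cache Policy:" line, e.g. WriteThrough, ReadAheadNone, Direct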

So the drives are OK and can be fast.

No, they are total rubbish for a server environment. Throughput doesn't matter; random IO performance matters, and they cannot perform there.
 
Wearout and health values are not reported as errors, so please check them.



Again: you are using an enterprise-grade RAID controller with a rubbish SSD. Have you activated FastPath and SSD Guard? What about the performance under Windows?



No, they are total rubbish for a server environment. Throughput doesn't matter; random IO performance matters, and they cannot perform there.

I now got your point that you do not like these SSD drives and would not use them.
There are no SMART IDs that would indicate an issue with the drives at all. I do not have Windows available to test the controller. I will get the same controller (but vendor-branded by Fujitsu) and will check whether the throughput differs.
FastPath and SSD Guard are not available.
 
I now got your point that you do not like these SSD drives and would not use them.

I just want to step in here. Never use Samsung PRO SSDs if you expect reliability and performance in server use. We have good results with the Samsung SM863.
(The same is true for the consumer lines of all other vendors.)

But just test this yourself with fio (see the link above) and post your results.
 
It is of course clear that you cannot compare a consumer SSD with an enterprise one. If nothing else, the SM863 has power-loss protection (plus many other improvements). But you also have to pay more for it.

On the other hand, among those consumer SSDs the Samsung "Pro" series is the better one, while the "Evo" is low-end. I would never use a TLC-based SSD for anything serious...
 
SOLVED. One of the SSD drives had very poor performance and has just been replaced by Samsung under RMA.
 
