RAID 10 8x 512GB SSD - Low performance

FcbInfo

I'm using mdadm RAID 10 with 8x 512GB Crucial SSDs.

root@server:~# pveperf
CPU BOGOMIPS: 48002.04
REGEX/SECOND: 765111
HD SIZE: 19.69 GB (/dev/mapper/pve-root)
BUFFERED READS: 585.00 MB/sec
AVERAGE SEEK TIME: 0.22 ms
FSYNCS/SECOND: 111.79
DNS EXT: 84.20 ms
DNS INT: 3.57 ms

A while ago I saw a post here where someone said FSYNCS/SECOND is good when the value is 1000+.

OMG, what is the problem? It's 8x SSDs.

Thank you guys for trying to help me!

@UDO, where are you? I love you!
 
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext4 defaults 0 1
/dev/md0 /boot ext4 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

root@server:~# pveperf /var/lib/vz
CPU BOGOMIPS: 48002.04
REGEX/SECOND: 748337
HD SIZE: 1839.92 GB (/dev/mapper/pve-data)
BUFFERED READS: 465.32 MB/sec
AVERAGE SEEK TIME: 0.25 ms
FSYNCS/SECOND: 75.90
DNS EXT: 79.71 ms
DNS INT: 4.15 ms
 
Try this:
/dev/pve/root / ext4 relatime,nodelalloc,barrier=0,errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext4 relatime,nodelalloc,barrier=0 0 1
/dev/md0 /boot ext4 relatime,nodelalloc,barrier=0 0 1
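If you want to try it without a full reboot, something like this should pick up the new options from /etc/fstab and let you re-check (just a sketch; a reboot works just as well):

Code:
# remount so the new fstab options take effect
mount -o remount /
mount -o remount /var/lib/vz
mount -o remount /boot

# re-run the benchmark on the data filesystem
pveperf /var/lib/vz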
 
WOW...

CPU BOGOMIPS: 50398.44
REGEX/SECOND: 1025370
HD SIZE: 1839.92 GB (/dev/mapper/pve-data)
BUFFERED READS: 781.94 MB/sec
AVERAGE SEEK TIME: 0.10 ms
FSYNCS/SECOND: 3733.59
DNS EXT: 78.80 ms
DNS INT: 2002.68 ms


So the problem was only the wrong mount options? I'll read up on these options!

Thank you mir... you are the best, just like UDO!
 
WOW again!

My IO delay went down from 10% to 0.05% with all VMs running.

I really need to study these options: barrier, nodelalloc, relatime.

Thank you again mir!
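For anyone reading this later, roughly what these options do (as I understand them): relatime limits how often file access times are written to disk, nodelalloc turns off ext4's delayed allocation, and barrier=0 disables write barriers, which is where the big FSYNCS/SECOND gain, and the power-loss risk, comes from. A quick way to confirm which options are actually active on a mounted filesystem:

Code:
grep -E 'pve-(root|data)' /proc/mounts
# or, using util-linux's findmnt:
findmnt -no OPTIONS /var/lib/vz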
 
mir, do you know if these options will also work well with RAID 10 on 4x 3TB HDDs (not SSD)?
That other server also uses ext4.
 
Yes, whether HDD or SSD does not matter.

When using these mount options on SSDs it is crucial that you run trim on the file system regularly!

On my servers I have this script:
Code:
cat /etc/cron.weekly/fstrim
#!/bin/sh

PATH=/bin:/sbin:/usr/bin:/usr/sbin

ionice -n7 fstrim -v /
ionice -n7 fstrim -v /var/lib/vz

In your case you will need to add:
ionice -n7 fstrim -v /boot

The script needs the executable bit, so: chmod 755 /etc/cron.weekly/fstrim
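One way to sanity-check it (a sketch, assuming the path above) is to run the script once by hand and confirm cron will pick it up:

Code:
/etc/cron.weekly/fstrim              # fstrim -v reports how much space was trimmed per filesystem
run-parts --test /etc/cron.weekly    # lists the scripts cron would execute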
 
relatime,nodelalloc,barrier=0

If this works for any type of hard disk, why isn't it the default option when we install Proxmox? (Are these options only for ext4, and is that why they're not the default?)

I'll put this script in the weekly cron.

Thanks =)
 
@FcbInfo, storage tuning is important to know about; the defaults can't fit every type of storage, so you need to know your stuff :)
Try studying this link at RedHat. Tuning the IO subsystem according to your type of storage to get optimal performance is very important IMHO and makes a huge difference.
 
Thank you for the help. I'll do it this week.
 
Search the forum or the internet for "nobarrier" (i.e. barrier=0) to get a better understanding of the consequences and risks of this option. I often use it, but you should only do so once you have understood the issue :)
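And if the risk turns out not to be acceptable on a particular machine, barriers can be switched back on the same way they were switched off, for example:

Code:
# put barrier=1 (the ext4 default) back into /etc/fstab for that filesystem, then:
mount -o remount,barrier=1 /var/lib/vz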
 
